January 13, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Forum downtime

Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.

13 January, 2026 12:00AM

BunsenLabs Carbon Release Candidate iso available for testing

As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon release candidate iso file available for download here: https://sourceforge.net/projects/bunsen … hybrid.iso

sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d

If you have a free machine or VM to install it on, please give it some testing!

And please post any bugs here: https://forums.bunsenlabs.org/viewtopic.php?id=9656

When it seems as if there aren't any bugs left to squash, we can do an Official Release. cool

13 January, 2026 12:00AM

January 11, 2026

hackergotchi for SparkyLinux

SparkyLinux

Labwc

There is a new desktop available for Sparkers: Labwc, as well as a Sparky 2026.01~dev Labwc ISO image. What is Labwc? Installation on Sparky testing (9): (package installation only, requires your own set-up): or (with Sparky settings): via APTus (>= 20260108)-> Desktops-> Labwc, or (with Sparky settings): via the Sparky testing (9) MinimalGUI/ISO image. Then reboot for the changes to take effect…

Source

11 January, 2026 11:42AM by pavroo

January 10, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: Bookdump 2025

Photo of the books presented here

My reading year 2025, at an average of slightly more than one book per week, was comparable to 2024. My best-of list of the books I finished in 2025 (those I found especially worth reading or want to recommend; the order follows the photo and implies no ranking):

  • Russische Spezialitäten, Dmitrij Kapitelman. What a firework of a book: linguistically powerful, sad, amusing.
  • Die Jungfrau, Monika Helfer. After Helfer’s “Die Bagage”, “Löwenherz” and “Vati”, this book too was of course required reading for me.
  • Das Buch zum Film, Clemens J. Setz. Wonderful everyday observations and bons mots – I really have only one criticism: at 192 pages, it is too short.
  • Wackelkontakt, Wolf Haas. Yes, yes, a well-known bestseller and so on. But he is and remains one of my favourite authors. I attended his reading in Graz, read the book a second time afterwards, and didn’t regret a single second of it. Calling him a language artist is an understatement!
  • Fleisch ist mein Gemüse, Heinz Strunk. I love background stories, especially about music and the musician’s life, and that is the case here with its excursion into the dance-music business. Apart from a few exceptions, the reading just flows.
  • Wut und Wertung: Warum wir über Geschmack streiten, Johannes Franzen. Why do conflicts about taste, art and canon escalate? Why is arguing about taste an important cultural technique? Franzen works through this using real-life controversies and scandals; instructive and stimulating.
  • Klapper, Kurt Prödel. Fans of Clemens J. Setz naturally know Prödel, and since I also like coming-of-age novels, this was a double hit. I am already looking forward to his new book “Salto”!
  • Hier treibt mein Kartoffelherz, Anna Weidenholzer. I can’t say anything at all about this book any more, but I really enjoyed reading it.
  • Die Infantin trägt den Scheitel links, Helena Adler. The book had an interesting pull on me; I simply wanted to keep reading. The playful language and wordplay made it even finer.
  • Das schöne Leben, Christiane Rösinger. Rösinger’s books were recommended to me by Kathrin Passig (a direct hit, thanks!). I also got hold of all of Rösinger’s other books (“Berlin – Baku. Meine Reise zum Eurovision Song Contest”, “Zukunft machen wir später: Meine Deutschstunden mit Geflüchteten”, “Liebe wird oft überbewertet”) and very much enjoyed reading them.

10 January, 2026 05:29PM

January 09, 2026

hackergotchi for Proxmox VE

Proxmox VE

New Archive CDN for End-of-Life (EOL) Releases

Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older.

The archive is reachable via the following URLs:

To use the archive for an EOL release, you will need to change the domain in the apt repository configuration...
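The announcement elides the archive URLs here, so the snippet below uses a stand-in hostname (`archive.example.org` is an assumption, as is the Buster-era repository line); the mechanics are simply a domain swap in the APT source entry:

```shell
# Sketch only: "archive.example.org" is a placeholder, not the real CDN
# hostname -- substitute the URLs given in the announcement.
repo_file=$(mktemp)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > "$repo_file"
# Swap the domain; suite and component stay untouched:
sed -i 's|download\.proxmox\.com|archive.example.org|' "$repo_file"
cat "$repo_file"   # deb http://archive.example.org/debian/pve buster pve-no-subscription
rm "$repo_file"
```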

Read more

09 January, 2026 05:43PM by t.lamprecht

January 08, 2026

hackergotchi for GreenboneOS

GreenboneOS

December 2025 Threat Report: Emergency End-of-Year Patches and New Exploit Campaigns

In 2025, Greenbone increased the total number of vulnerability tests in the OpenVAS Enterprise Feed to over 227,000, adding almost 40,000 vulnerability checks. Since the first CVE was published in 1999, over 300,000 software vulnerabilities have been added to MITRE’s CVE repository. CVE disclosures continued to rocket upward, increasing roughly 21% compared to 2024. CISA […]

08 January, 2026 01:05PM by Joseph Lee


January 06, 2026

hackergotchi for VyOS

VyOS

VyOS 1.4.4 LTS Achieves Nutanix Ready Validation for AOS 7.3

We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.

This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.

06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)


January 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

New utility: xml2xfconf

In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079

It turned out that while xfconf-query doesn't output the type of settings entries, it does require the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough to back up an xfce app which stores its settings in the xfconf database (which most of them do these days); we need to store the type too. Luckily that data is stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/ so to back it up, all we need to do is save that file.
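For illustration, a per-channel file records each property together with its type; a minimal sketch of such a file, trimmed to three xfce4-terminal settings (values taken from the example later in this post), looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>

<channel name="xfce4-terminal" version="1.0">
  <property name="font-name" type="string" value="Monospace 10"/>
  <property name="color-use-theme" type="bool" value="false"/>
  <property name="scrolling-lines" type="uint" value="50000"/>
</channel>
```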

In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/in. There's a better way: if you know the missing type then you can run xfconf-query commands to restore the settings.

So, here's a script called xml2xfconf. Passed an xfconf xml file - eg a backed-up copy of one of those in xfce-perchannel-xml/ - it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately. cool

Example usage:

restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"

Here's what got written into $restore:

xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000

xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf, and with luck the bug which @Dave75 found will go away.

And then the Carbon release can get rolling again.

It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings. smile

03 January, 2026 12:00AM

January 02, 2026

hackergotchi for ZEVENET

ZEVENET

How to Evaluate a WAF in 2026 for SaaS Environments

Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks has increased, driven by automation, accessible tooling, and cloud-specific attack vectors.

Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.

As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.

Evaluating a WAF in 2026 is fundamentally different

Traditional evaluations focused heavily on rule coverage or whether a solution “covers OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.

A modern evaluation must answer practical, operational questions:

  • Can the WAF block malicious traffic without breaking legitimate flows?
  • Does it behave consistently in prevention mode and under load?
  • Can its decisions be observed, explained, and audited?

In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.

Why most SaaS WAF evaluations fall short

Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:

  • Testing in monitor-only mode instead of prevention
  • Relying on default configurations with no real traffic
  • Ignoring operational limits until production
  • Inability to trace why a request was blocked

In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.

A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.

Detection quality is defined by false positives, not demos

Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.

A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.

Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not a usability issue—especially in API-driven systems, where payload structure, schemas, and request volume amplify the cost of false positives.

At scale, even a low False Positive Rate (FPR) can result in:

  • Broken user flows
  • Failed API calls
  • Increased operational load
  • Pressure to weaken or disable protections

This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.
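As a back-of-the-envelope illustration of that scale effect (the traffic volume and rate below are assumed figures, not measurements from any guide):

```shell
# Assumed numbers: 10 million legitimate requests/day, 0.1% false positive rate.
requests_per_day=10000000
fpr_basis_points=10                      # 0.1% expressed as basis points
blocked=$(( requests_per_day * fpr_basis_points / 10000 ))
echo "$blocked legitimate requests blocked per day"   # 10000
```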

A realistic PoC should include scenarios like:

Source of false positives | Real-world example            | What to test
--------------------------|-------------------------------|----------------------------
Complex request bodies    | Deep JSON, multipart forms    | Recorded API and UI traffic
Business logic flows      | Search, filtering, checkout   | End-to-end navigation
Uploads                   | PDFs, images, metadata        | Real upload paths
Atypical headers          | Large cookies, custom headers | Reverse proxy captures

In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.

SKUDONET Cloud Solution

SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.

That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.

To support that evaluation, we have documented the full methodology in our technical guide:

👉 Download the full guide:

02 January, 2026 11:03AM by Nieves Álvarez

January 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/12

The 12th monthly Sparky project and donation report of 2025:

  • Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS
  • Added to “dev” repos: COSMIC desktop
  • Sparky 2025.12 & 2025.12 Special Editions released

Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.

Source

01 January, 2026 08:08PM by pavroo


December 29, 2025

hackergotchi for ZEVENET

ZEVENET

Why Application Delivery Visibility Breaks in Secure Architectures

Modern application delivery architectures are built with the right goals in mind. Load balancers distribute traffic, Web Application Firewalls enforce security policies, TLS protects data in transit, and monitoring systems promise observability across the stack. On paper, everything seems covered.

In real production environments, however, many of these architectures operate with critical blind spots. Especially when security components start making decisions that engineers cannot fully see, trace, or explain. This is rarely caused by a lack of tools. More often, it is the result of how security is embedded into the delivery path.

As security becomes more deeply integrated into application delivery, visibility does not automatically follow.

When security turns into a black box

In most production environments, security is no longer a separate layer. WAFs sit directly in the traffic path, inspecting requests, evaluating rules, applying reputation checks and deciding — in real time — whether traffic is allowed or blocked. TLS inspection happens inline, and policies are often updated automatically.

The problem is not that these decisions exist. The problem is that, very often, they cannot be clearly explained after the fact.

In many deployments, teams quickly run into the same limitations:

  • Engineers cannot determine which specific rule caused a request to be blocked
  • WAF logic is exposed only through high-level categories or abstract scores
  • Encrypted traffic is inspected, but the inspection process itself remains invisible
  • Logs are available, but without enough context to correlate decisions with behaviour

The result is a paradox that experienced teams recognize immediately: security coverage increases, while operational visibility decreases.

Common Security Blind Spots in Application Delivery Architectures

These blind spots rarely appear during normal operation. They tend to surface under pressure: traffic spikes, false positives, performance degradation or partial outages. When they do, troubleshooting becomes significantly more complex, because the information engineers need is often incomplete or fragmented.

1. Encrypted traffic without explainability

TLS encryption is essential, but it fundamentally changes how visibility works. In many application delivery stacks, traffic is decrypted at some point, inspected, and then re-encrypted. Security decisions are made, but the path between request, rule and outcome is not always traceable.

When something breaks, engineers are often left with little more than a generic message: “Request blocked by WAF.”

What is missing is the ability to correlate:

  • the original request,
  • the specific rule or condition involved,
  • the security decision that was applied,
  • and the downstream impact on the application.

Without that correlation, root cause analysis turns into guesswork rather than engineering.

2. Abstracted or hidden WAF rule logic

Many WAF platforms expose protection logic through simplified models such as risk scores, rule categories or predefined profiles. While these abstractions make dashboards easier to read, they remove critical detail from day-to-day operations.

When rule logic cannot be inspected directly:

  • false positives are harder to tune with precision,
  • rule conflicts remain invisible,
  • behaviour changes appear without an obvious trigger.

Over time, this erodes trust in automated protection. Teams stop understanding why something happens and start compensating by weakening policies instead of fixing the underlying issue.

3. Security decisions that impact delivery

Security controls do more than allow or block requests. They influence how connections are handled, how retries behave, how sessions persist, and how backend health is perceived by the delivery layer.

When these effects are not visible, delivery problems are often misdiagnosed:

  • backend instability may actually be selective blocking,
  • uneven load distribution may come from upstream filtering,
  • timeouts may be caused by inspection delays under load.

Engineers end up debugging load balancing logic or application behaviour, while the real cause sits silently inside the security layer.

4. Logs without operational context

Logs are often treated as a substitute for visibility. In practice, they are frequently:

  • sampled or rate-limited,
  • delayed,
  • detached from real-time behaviour,
  • owned or processed externally.

A log entry that explains what happened, but not why, is not observability; it is a post-mortem artifact. In critical environments, teams need actionable insight while an incident is unfolding, not hours later.

What Modern WAF Architectures Must Provide

A WAF integrated into the application delivery path should not act as an opaque enforcement layer. Instead, it should provide visibility at each critical stage of the decision process.

In practical terms, this means enabling teams to:

  • Trace each security decision end to end, from the incoming request to the final action applied.
  • Inspect WAF rule logic directly, without relying on abstract categories or risk scores.
  • Correlate blocked or modified requests with delivery behaviour, such as backend health, session persistence or retries.
  • Analyze encrypted traffic transparently, without losing context once TLS inspection is performed.
  • Maintain consistent visibility under load, during traffic peaks or active incidents.

Without these capabilities, security controls may protect applications, but they also introduce operational blind spots that slow down troubleshooting and increase risk.

Visibility in Secure Application Delivery with SKUDONET

SKUDONET Enterprise Edition is designed around a simple principle: security must protect traffic without breaking visibility.

Instead of treating security as a separate black box, SKUDONET integrates WAF and traffic management into a single, observable application delivery platform. This approach ensures that security decisions remain transparent, traceable and actionable for engineers working in real production conditions.

SKUDONET Application Delivery Visibility

Key aspects of this design include:

  • Full visibility and control over WAF rules and behaviour, allowing administrators to inspect rule logic, modify or disable existing rules, and define new ones based on real traffic patterns and application requirements.
  • Clear correlation between security decisions and application delivery impact, making it possible to understand exactly where a request failed, why it was blocked or modified, and how that decision affected backend behaviour.
  • Transparent inspection of encrypted traffic, preserving full request context throughout the entire lifecycle, from decryption to enforcement and delivery.
  • Actionable logging and diagnostics, designed to explain not only what happened, but why it happened, enabling effective tuning, troubleshooting and auditing.

By removing opacity from security enforcement, SKUDONET helps teams retain control over both protection and performance—especially in high-traffic or business-critical environments where visibility is essential.

A 30-day, fully functional evaluation of SKUDONET Enterprise Edition is available for teams who want to validate this level of visibility and control under real workloads.

29 December, 2025 11:31AM by Nieves Álvarez


December 26, 2025

hackergotchi for SparkyLinux

SparkyLinux

COSMIC

There is a new desktop available for Sparkers: COSMIC. What is COSMIC? The COSMIC desktop is available via the Sparky ‘dev’ repositories, so the repo has to be enabled to install COSMIC on top of Sparky 8 stable or testing (9). It uses the Wayland session by default. The Sparky meta package installs the ‘xwayland’ package by default, but some applications cannot work or launch. That’s why I…

Source

26 December, 2025 04:21PM by pavroo

December 25, 2025


hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Carbon version of bunsen-blob is now available

BLOB, the utility that lets people try different desktop theming sets (eg go back to the Boron look, or even Crunchbang) has been upgraded to 13.1-1 on the Carbon repository. This brings a lot of improvements, like support for xfce4-panel profiles, xfconf settings for xfce4-terminal, flexibility over wallpaper settings with a switch to feh by default, and more.

See the changelog: https://github.com/BunsenLabs/bunsen-bl … /changelog
or all the commits: https://github.com/BunsenLabs/bunsen-bl … ts/carbon/

Right now "BLOB Themes Manager" is commented out of jgmenu's prepend.csv, but if you're on a Carbon system you can install the package 'bunsen-blob' and uncomment that menu line to use it. Please check it out. cool

Soon, an upgraded bunsen-configs will include it in the menu by default, and it will be added to the meta - and iso - package lists. A Release Candidate Carbon iso is not far away...

25 December, 2025 12:00AM


December 21, 2025

hackergotchi for ArcheOS

ArcheOS

The OpArc Project at the SFSCON

Hello everyone, 

this short post is to let anyone interested know that our presentation at SFSCON, held in Bolzano/Bozen at the NOI Techpark, has been published online. SFSCON stands for South Tyrol Free Software Conference and is one of Europe's most established annual conferences on Free Software.

This year we at Arc-Team decided to participate with a talk that summarized our approximately 20 years of experience in applying the Open Source philosophy to archaeology (both in the software and hardware fields).

The presentation was titled "Arc-Team and the OpArc Project" and can be viewed both on the conference's official website (where you can also download a PDF version) and on the conference's YouTube channel.

I hope the presentation can be interesting for someone. Have a nice day! 

21 December, 2025 02:47PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for Qubes

Qubes

Qubes OS 4.3.0 has been released!

We’re pleased to announce the stable release of Qubes OS 4.3.0! This minor release includes a host of new features, improvements, and bug fixes. The ISO and associated verification files are available on the downloads page.

What’s new in Qubes 4.3?

  • Dom0 upgraded to Fedora 41 (#9402).
  • Xen upgraded to version 4.19 (#9420).
  • Default Fedora template upgraded to Fedora 42 (older versions not supported).
  • Default Debian template upgraded to Debian 13 (versions older than 12 not supported).
  • Default Whonix templates upgraded to Whonix 18 (older versions not supported).
  • Preloaded disposables (#1512).
  • Device “self-identity oriented” assignment (a.k.a. New Devices API) (#9325).
  • Qubes Windows Tools reintroduced with improved features (#1861).

These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.

How to get Qubes OS 4.3.0

  • If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.3.0 ISO and follow our installation guide.

  • If you’re currently using Qubes 4.2, learn how to upgrade to Qubes 4.3.

  • If you’re currently using a Qubes 4.3 release candidate (RC), update normally (which includes upgrading any EOL templates and standalones you might have) in order to make your system effectively equivalent to the stable Qubes 4.3.0 release. No reinstallation or other special action is required.

In all cases, we strongly recommend making a full backup beforehand.

Known issues in Qubes OS 4.3.0

Templates restored in 4.3.0 from a pre-4.3 backup may continue to target their original Qubes OS release repos (#8701). After restoring such templates in 4.3.0, you must perform the following additional steps:

# Install the dist-upgrade tool in dom0:
sudo qubes-dom0-update -y qubes-dist-upgrade
# Re-target restored templates and standalones at the 4.3 release repos:
qubes-dist-upgrade --releasever=4.3 --template-standalone-upgrade -y

This will automatically choose the templates that need to be updated.

Fresh templates on a clean 4.3.0 installation are not affected. Users who perform an in-place upgrade from 4.2 to 4.3 (instead of restoring templates from a backup) are also not affected, since the in-place upgrade process already includes the above fix in stage 4. For more information, see issue #8701.

View the full list of known bugs affecting Qubes 4.3 in our issue tracker.

Support for older releases

In accordance with our release support policy, Qubes 4.2 will remain supported for six months after the release of Qubes 4.3, until 2026-06-21. After that, Qubes 4.2 will no longer receive security updates or bug fixes.

Whonix templates are created and supported by our partner, the Whonix Project. The Whonix Project has set its own support policy for Whonix templates in Qubes. For more information, see the Whonix Support Schedule.

Thank you to our partners, donors, contributors, and testers!

This release would not be possible without generous support from our partners and donors, as well as contributions from our active community members, especially bug reports from our testers. We are eternally grateful to our excellent community for making the Qubes OS Project a great example of open-source collaboration.

21 December, 2025 12:00AM

December 20, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12 Special Editions

There are new iso images of Sparky 2025.12 Special Editions out there: GameOver, Multimedia and Rescue. This release is based on Debian testing “Forky”. The December update of Sparky Special Edition iso images feature Linux kernel 6.17, updated packages from Debian and Sparky testing repos as of December 20, 2025, and most changes introduced at the 2025.12 release. The Linux kernels 6.18.2, 6.

Source

20 December, 2025 10:10PM by pavroo

hackergotchi for Purism PureOS

Purism PureOS

A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right

In her book Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of Silicon Valley, published in 2000, Borsook, who is based in Palo Alto, California, and has previously written for Wired and a host of other industry publications, took aim at what she saw as disturbing trends in the tech industry.

The post A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right appeared first on Purism.

20 December, 2025 05:10AM by Purism

December 19, 2025

hackergotchi for GreenboneOS

GreenboneOS

New Actively Exploited CVSS 10 Flaw in Cisco AsyncOS Spam Quarantine Remote Access

A new maximum-severity zero-day vulnerability in Cisco AsyncOS was published in emergency fashion on Wednesday, December 17th. Cisco has indicated that the flaw, tracked as CVE-2025-20393, has been actively exploited in the wild by Chinese-nexus APT actors since late November 2025, and that it has been aware of the activity for at least a week […]

19 December, 2025 11:54AM by Joseph Lee

hackergotchi for VyOS

VyOS

VyOS 1.4.4 released: syslog over TLS, AWS GLB support, and 50+ bug fixes

Hello, Community!

Customers and holders of contributor subscriptions can now download VyOS 1.4.4 release images and the corresponding source tarball. This release adds TLS support for syslog, support for the AWS gateway load balancer tunnel handler (on AWS only), an option to match BGP prefix origin validation extended communities in route maps, and more. It also fixes over fifty bugs. Additionally, there is now proper validation that prevents manually assigned multicast addresses, which may break some old malformed configs, so pay attention to it. Last but not least, there is a deprecation warning for SSH DSA keys, which will stop working in VyOS releases after 1.5 due to changes in OpenSSH, so make sure to update your user accounts to keys with a more secure algorithm while you still have time.
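As a sketch of that key migration (the user name "admin", the key identifiers, and the truncated key string below are illustrative placeholders, not taken from the release notes), replacing a DSA key with an Ed25519 key in the VyOS CLI might look like:

```
configure
# Remove the deprecated DSA key entry:
delete system login user admin authentication public-keys old-dsa-key
# Add an Ed25519 replacement:
set system login user admin authentication public-keys laptop type 'ssh-ed25519'
set system login user admin authentication public-keys laptop key 'AAAAC3Nz...'
commit
save
exit
```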

19 December, 2025 09:22AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for ZEVENET

ZEVENET

SKUDONET 2025: A Technical Recap of Product, Security and Platform Growth

2025 has been a defining year for SKUDONET — not because of a single announcement or isolated launch, but due to sustained progress across product development, security reinforcement and strategic expansion.

Throughout the year, our focus has remained consistent: strengthening the core platform, improving the operational experience for administrators, and ensuring that security and reliability evolve in line with real-world infrastructure demands.

This approach has translated into continuous, incremental improvements rather than disruptive changes, allowing teams to adopt new capabilities without compromising stability in production environments.

Product Evolution Throughout 2025

Over the course of eight product releases, SKUDONET continued to mature as an application delivery and security platform designed for critical environments.

Across these updates, we introduced:

  • 11 new features
  • 31 functional improvements
  • 23 stability fixes
  • 31 resolved security vulnerabilities (CVEs)

Rather than isolated enhancements, these updates reflect a continuous effort to simplify daily operations while reinforcing security and performance at scale.

Key areas of evolution included a renewed Web GUI, designed to be faster, more consistent and easier to navigate in complex environments, as well as meaningful progress in RBAC, enabling more precise and adaptable access control models.

Certificate management also saw significant improvements, with extended Let’s Encrypt automation, broader DNS provider support and fully automated renewal workflows. Alongside this, we reduced execution times for critical operations such as farm start/stop actions and API metric retrieval.

Security and Reliability Reinforcement

During the year, 31 CVEs were resolved, continuously hardening the platform’s attack surface. Beyond vulnerability remediation, SKUDONET focused on reinforcing internal consistency and predictability under load.
Key improvements were made across:

  • Traffic inspection and validation pipelines, improving consistency and traceability when processing and filtering requests
  • Logging and remote log forwarding, ensuring more reliable log handling and easier integration with external logging systems
  • Internal module stability, particularly within IPDS and RBAC, reducing edge-case behaviour under load

Several updates also introduced additional hardening measures, including:

  • Improved handling of client headers, mitigating spoofing and trust issues in proxied environments
  • More secure cookie insertion in HTTP/S services, with stronger defaults to reduce exposure to common web vulnerabilities
  • Stricter security defaults in the management interface, reinforcing protection of administrative access

Together, these enhancements contribute to a platform that behaves more predictably under pressure and is easier to audit and troubleshoot in production.

Automation and Operational Efficiency

Reducing operational overhead for administrators was another consistent theme throughout 2025.
Several improvements were introduced to simplify day-to-day operations and reduce manual intervention, including:

  • AutoUpdate, enabling systems to automatically check, download and install updates, helping teams stay current with security patches and platform improvements while minimizing maintenance windows
  • End-to-end SSL/TLS automation, covering the full certificate lifecycle from creation to renewal and notification, reducing manual certificate management effort
  • Performance optimizations in the Stats API and backend metrics collection, making integrations with monitoring and automation tools faster and more efficient

Together, these enhancements allow teams to spend less time on routine maintenance tasks and more time on capacity planning, optimization and higher-level architectural decisions.

SkudoCloud: A Strategic Step Forward

One of the most significant milestones of the year was the launch of SkudoCloud, SKUDONET’s fully managed SaaS platform for application delivery and security.

SkudoCloud introduces a new operational model in which teams can deploy secure application delivery infrastructure in minutes, without managing the underlying system lifecycle. From the first deployment, users benefit from:

  • Fully managed application delivery and security, removing the need to operate and maintain the platform
  • Integrated traffic management and protection, aligned with the same delivery and security principles as SKUDONET Enterprise Edition
  • Immediate availability of advanced security controls, applied from the initial deployment
  • A simplified operational model, focused on usage rather than infrastructure management

This launch represents a strategic expansion of the SKUDONET ecosystem, complementing on-premise and self-managed deployments with a cloud-native option designed for teams that prioritize simplicity, speed and operational focus.

Expanding Our Global Partner Ecosystem

Alongside product evolution, SKUDONET continued to expand its international presence.

During 2025, seven new partners joined our ecosystem across Europe, Asia and Latin America, strengthening our ability to support customers globally while maintaining close technical collaboration at a regional level:

  • 🇪🇸 Virtual Cable (Spain)
  • 🇹🇷 Fortiva (Turkey)
  • 🇮🇩 SINUX (Indonesia)
  • 🇪🇸 Secra Solutions (Spain)
  • 🇮🇳 Bluella (India)
  • 🇹🇼 Global OMC TECH Inc. (Taiwan)
  • 🇵🇪 BCloud Services SAC (Peru)

This growth reflects increasing demand for open, transparent and flexible application delivery solutions across diverse markets.

Looking Ahead to 2026

2026 will begin with an important milestone: the launch of the SkudoManager, the SKUDONET Central Console.

This unified interface will enable teams to manage multiple nodes, services and products from a single control plane, providing global infrastructure visibility, centralized user and policy management, and integrated monitoring of farms, certificates, security and performance.

Alongside this, we will continue expanding SkudoCloud and reinforcing the Enterprise Edition’s core architecture, staying aligned with our principles of transparency, performance and security.

Thank You for Being Part of the Journey

The progress achieved in 2025 has been possible thanks to our customers, partners and community. We look forward to continuing this journey together in 2026, building an application delivery and security platform that evolves with real operational needs.

19 December, 2025 09:13AM by Nieves Álvarez


December 16, 2025

hackergotchi for Purism PureOS

Purism PureOS

2025 Year-End Sale

Announcing Purism's Year End Sale. Offering 15% off your purchases through the end of the year. Just use YEAREND as your coupon code for hardware purchases through December 31, 2025!
Please note that orders placed after December 17th will not ship until January.

The post 2025 Year-End Sale appeared first on Purism.

16 December, 2025 09:17PM by Purism

hackergotchi for GreenboneOS

GreenboneOS

Greenbone’s OPENVAS SCAN Now Supports the Proxmox VE Hypervisor

Users appreciate when software can easily integrate into their existing IT environment. For vendors, this means supporting a cross-platform mix of operating systems and infrastructure. We’re excited to expand our virtualization platform support, bringing Proxmox VE into our family of supported hypervisors. This addition enables more flexibility for deploying OPENVAS SCAN in diverse IT environments. […]

16 December, 2025 09:11AM by Greenbone AG

hackergotchi for ZEVENET

ZEVENET

Open-Source Software Licensing in the SaaS Era

Open-source software has been one of the most transformative forces in the technology sector. Operating systems, databases, web servers, and encryption libraries that we now consider essential exist thanks to thousands of developers who chose to release their code so that anyone could study it, modify it, and improve it.

This model has enabled companies and organizations to build advanced solutions without relying exclusively on proprietary software. However, this openness also introduces a recurring challenge: how to ensure the sustainability of open-source software in a world where software is no longer distributed, but consumed as a service.

In this context, recent discussions around well-known projects have brought renewed attention to licenses such as AGPL (Affero General Public License), specifically designed to respond to this shift in how software is delivered and consumed. Beyond individual cases, the underlying message is clear: open-source software requires a balance between those who contribute and those who use it.

What Do We Mean by “Open Source” in 2025?

When people talk about open-source software, it is often confused with “free software” in the sense of cost. In reality, the term refers to a set of fundamental freedoms:

  • The freedom to run the program for any purpose
  • The freedom to study how it works
  • The freedom to modify it
  • The freedom to redistribute it, with or without changes

These freedoms are defined and enforced through licenses. Some licenses prioritize maximum adoption, while others aim to ensure that the ecosystem remains collaborative and sustainable.

During the 1990s and early 2000s, the dominant model was traditional software distribution: installers, CDs, and packaged binaries. In that context, licenses such as GPL ensured that if you redistributed the software, you were required to release your modifications.

Today, this landscape has changed completely: most software is delivered as a service. Companies can benefit from open-source software without ever distributing it; they simply run it on their own infrastructure. This shift is precisely where modern licensing models come into play.

MIT, Apache, GPL, AGPL: What Actually Sets Them Apart

When discussing open-source licenses, we are not just talking about legal text, but about different collaboration models.
Broadly speaking, there are two major families of licenses:

A) Permissive Software licenses (MIT, BSD, Apache 2.0)

Permissive licenses such as MIT, BSD, or Apache 2.0 allow organizations to take the code, modify it, integrate it into proprietary products, and redistribute it without any obligation to return improvements to the community. They are attractive to companies that want minimal legal constraints and maximum flexibility.

Their primary goal is typically to encourage widespread adoption, leaving the decision to contribute entirely up to each organization.

B) Copyleft Software licenses (GPL, LGPL, AGPL)

Copyleft licenses such as GPL, LGPL, and AGPL follow a different logic. If a project benefits from community-driven development, the improvements made on top of that code should remain accessible to the community. The intent is not to restrict commercial use, but to prevent open-source code from being absorbed into closed solutions without any return to the original project.

AGPL (Affero GPL) emerged to address a specific change in context: the transition from distributed software to software offered as a service. Traditional GPL licenses focused on redistribution—if you shipped a binary to a third party, you were required to provide the source code and your modifications.

Permissive vs. copyleft software licenses

What was the problem?

In the SaaS model, many companies began using open-source software internally or as part of web services without ever “distributing” it. They simply ran it on their own servers and exposed functionality through APIs or web interfaces. In these cases, modifications could be made without being shared.

AGPL extends the scope of reciprocity: if you offer the software to users over a network, your modifications must be made accessible.

When Open Source Forces a Change in Licensing Models

As open-source software has become deeply embedded in enterprise infrastructures, many projects have reached a point where usage grows faster than contributions. This is a natural outcome of how technology is consumed today, particularly in SaaS and cloud environments.

In recent years, several mature projects have adopted AGPL or dual-licensing models after observing recurring patterns:

  • The software becomes a critical component in enterprise environments
  • Internal or commercial forks evolve independently
  • Improvements remain isolated and never reach the core project
  • The cost of ongoing development falls almost entirely on the original maintainers

The result is that, despite widespread adoption, the project lacks the resources required for long-term sustainability.

In this context, adopting reciprocal licenses such as AGPL is not a restrictive move, but a mechanism to preserve the continuity of the project.

The Role of AGPL in Cloud and SaaS Ecosystems

The shift toward cloud and SaaS models has fundamentally transformed the software lifecycle. Many historical licenses were not designed for environments where software operates exclusively as a remote service.

Licenses such as AGPL introduce mechanisms intended to protect the integrity of open-source ecosystems in this new context. Their purpose is not to limit commercial use, but to ensure that significant improvements do not remain locked inside private implementations—especially when the software underpins critical infrastructure.

From a technical and organizational perspective, this approach provides several benefits:

  • Project cohesion, by preventing parallel versions from diverging into incompatible branches
  • Greater transparency and security, as shared improvements can be audited and reviewed collectively
  • Reduced duplication of effort, allowing innovation to build on a common foundation
  • A more balanced ecosystem, where the cost of evolution is not borne by a single actor

As a result, more projects are adopting hybrid models that combine open foundations, reciprocity mechanisms, and commercial offerings that fund ongoing development. This is not an exception, but a natural response to how software is built and maintained today.

Open Source and Long-Term Sustainability

Open-source software remains a fundamental driver of innovation. Its sustainability, however, depends on maintaining a fair balance between those who create and those who rely on it.

As SaaS models continue to redefine software consumption, licensing frameworks evolve alongside them. AGPL and other reciprocal licenses do not aim to restrict adoption, but to ensure that critical projects can continue to grow, improve, and remain viable over time.

Ultimately, the goal is to protect the continuity of the open-source ecosystem in a technical landscape that is changing rapidly.

At SKUDONET, we work closely with open-source technologies in security and application delivery. Understanding how licensing models evolve is a key part of building sustainable infrastructure.

16 December, 2025 07:13AM by Nieves Álvarez

December 15, 2025

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: November 2025

With our sights set on the beta release milestone, one key component still remains: a way to upgrade from Byzantium to Crimson.

If you're a Linux expert, you might already know how Debian handles release upgrades. Some eager individuals have already upgraded from Byzantium to the Crimson alpha this way. However, we need an easy, graphical upgrade procedure, so everyone with a Librem 5 can get the improvements coming in Crimson.

The post PureOS Crimson Development Report: November 2025 appeared first on Purism.

15 December, 2025 08:59PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Native integration available between Canonical LXD and HPE Alletra MP B10000

The integration combines efficient open source virtualization with high performance, enterprise-grade storage

We are pleased to announce a new, native integration between Canonical LXD and HPE Alletra. This integration brings together Canonical’s securely designed, open source virtualization platform with HPE’s enterprise-grade storage to deliver a simple, scalable, high-performance experience.

Enterprise-grade storage meets efficient open source virtualization

HPE Alletra is designed to deliver mission-critical storage at midrange economics, with a consistent data experience across various cloud environments. With this integration, Canonical LXD and MicroCloud users can now provision and manage Alletra block storage directly through the LXD interface, without the need for any third-party plugins or additional abstraction layers.

The integration enables users to seamlessly create, attach, snapshot, and manage thin-provisioned block volumes as easily as working with local storage, while retaining the full performance, resilience, and enterprise data services of HPE Alletra. 

Simplified operations and scalable performance

HPE Alletra's NVMe-based architecture ensures sub-millisecond latency for demanding workloads, with built-in services such as thin provisioning and data deduplication minimizing storage costs while maintaining consistent performance. Paired with LXD’s lightweight control plane and streamlined UI, users can easily operate their environments, combining the best of open source with enterprise storage functionality.

As demands grow, new volumes and storage capacity can be allocated on the fly. Alletra’s scale-out, modular architecture ensures the platform can expand without disrupting running workloads. 

A foundation for modern infrastructure deployments 

LXD provides a unified open source virtualization platform to run system containers and virtual machines with a focus on efficiency, security, and ease of management. With native support for HPE Alletra, enterprises can now build, deploy, and manage their workloads with enterprise storage guarantees, whether in private clouds, on-premises data centers, or edge environments. 

The combined solution empowers teams to deliver predictable performance for critical and data-intensive workloads while reducing complexity and ensuring agility.

Availability

The integration between LXD and HPE Alletra is available starting with the LXD 6.6 feature release, and requires HPE Alletra WSAPI version 1. LXD currently supports connecting to HPE Alletra storage through NVMe/TCP or iSCSI protocols. For detailed information, visit Canonical’s LXD documentation.

15 December, 2025 03:57PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12

There are new SparkyLinux 2025.12 codenamed “Tiamat” ISO images available of the semi-rolling line. This new release is based on the Debian testing “Forky”. Main changes: – Packages updated from the Debian and Sparky testing repositories as of December 14, 2025. – Linux kernel 6.17.11 (6.18.1, 6.12.62-LTS & 6.6.119-LTS in Sparky repos) – Firefox 140.5.0esr (146.0 in Sparky repos) …

Source

15 December, 2025 09:59AM by pavroo


December 14, 2025

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to SettingsNetworkNetwork adapter and deselect "Autoconfigure", which will allow your to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces and adding of Govee Lights Local will work.

14 December, 2025 03:48PM

December 13, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Why you should retire your Microsoft Azure Consumption Commitment (MACC) with Ubuntu Pro

When your organization first signed its Microsoft Azure Consumption Commitment (MACC), it was a strategic step to unlock better pricing and enable cloud growth. However, fulfilling that commitment efficiently requires planning. Organizations often look for ways to retire their MACC that drive strategic value, rather than simply increasing consumption to meet a deadline.

The goal is to meet your commitment while delivering long-term benefits to the business.

With Ubuntu Pro in the Azure Marketplace, you can retire your MACC at 100% of the pretax purchase amount. In practice, this allows you to meet consumption goals on your standard Azure invoice, while securing your open source supply chain and automating compliance.

Turn a spend target into an open source security strategy

Instead of simply increasing consumption to hit a target, effective IT and FinOps teams align their MACC with broader strategic goals. Open source support and security maintenance is a priority for enterprises, as a recent Linux Foundation report shows: 54% of enterprises want long-term guarantees, and 53% expect rapid security patching.

Ubuntu Pro offers both. By choosing software that strengthens your security and operations, you can retire your MACC while funding capabilities your organization prioritizes.

Allocating MACC to Ubuntu Pro is a direct investment in your open source estate:

  • Expanded Security Maintenance (ESM): extend security coverage to the critical open source applications running above the operating system layer. ESM provides up to 15 years of security updates for the OS, plus tens of thousands of packages. You might already see alerts for these missing updates in your Azure portal – learn how to check your exposure in our blog: [A complete security view for every Ubuntu LTS VM on Azure].
  • Kernel Livepatch: reduce maintenance windows by applying critical kernel patches without requiring a reboot for most workloads.
  • Compliance tooling: access options for CIS hardening and FIPS 140-3 validated cryptographic modules to support meeting compliance and regulatory needs.
  • Optional enterprise support: add enterprise SLAs, direct access to Canonical engineers for break-fix and bug-fix, and guidance on operating Ubuntu and ESM-covered packages on Azure.

By choosing Ubuntu Pro, you convert your MACC spend into a maintained open source foundation across the development lifecycle.

Maximize value and streamline procurement

Retiring your commitment should be financially efficient and administratively simple. While standard Marketplace listings are MACC-eligible, many organizations use private offers to secure tailored commercial terms, like custom pricing or volume discounts, without sacrificing eligibility.


We support both standard private offers and multiparty private offers for rollouts involving resellers in the US/UK. In all cases, checking that your purchase counts toward your commitment is straightforward:

  • Confirm Eligibility: verify the listing or private offer is marked as “Azure benefit-eligible.”
  • Purchase Correctly: execute the transaction in the Azure portal under the tenant and subscription tied to your MACC agreement.

This approach guarantees that every dollar spent satisfies your financial goals while delivering the specific security coverage your organization needs.

Ready to align Ubuntu Pro with your MACC? Talk to our team.

13 December, 2025 12:07AM

December 12, 2025

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2025.12 available

We are proud to announce our new stable release 🚢 version 2025.12, code-named ‘Postwurfsendung’!

Grml is a bootable live system (Live CD) based on Debian. Grml 2025.12 brings you fresh software packages from Debian testing/forky, enhanced hardware support and addresses known bugs from previous releases.

Like in the previous release 2025.08, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️

12 December, 2025 11:12AM

hackergotchi for Deepin

Deepin

Hidden Gems of the deepin File Manager: Work Smarter, Not Harder

“Where’s the image I just saved?” “Which folder did I put last week’s budget sheet in?” “Opening my asset library feels like digging through seven layers of folders—my mouse is practically smoking!” If these questions sound familiar, chances are you’re only using your file manager at its most basic level. In fact, the deepin file manager is packed with a whole suite of efficient features designed to make file management smooth, quick, and organized. Today, we’re uncovering these “hidden gems you might not know about”! Whether you’re a longtime deepin user or just getting started, these 9 practical tips—centered around finding, using, ...Read more

12 December, 2025 03:10AM by xiaofei

December 11, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Coming to 26.04 LTS: Three Layouts

Xfce Legacy

A lot of people have asked us why Ubuntu Studio comes with a panel on top as the default. For that, it’s a simple answer: Legacy.

When Ubuntu Studio 12.04 LTS (Precise Pangolin) was released over 13 years ago, it shipped with a top panel by default, as that was the default for our desktop environment: Xfce.

Fast-forward eight years to 20.10 and Xfce was no longer our default desktop environment: we had switched to KDE’s Plasma Desktop. Plasma has a bottom panel by default, similar to Windows. However, to ease the transition for our long-time users, we kept the panel on top by default, resizing it to be similar to the default top panel of Xfce.

A macOS-Like Layout

With 25.10’s release, we included an additional layout: two panels. One panel is on top with a global menu, and the bottom contains some default applications, a trash can, and a full-screen application launcher. This should feel familiar to those coming from another operating system known for creativity: macOS.

Familiarity and Traditionalism: Windows-like Layout

Starting with 26.04 LTS, we’ll also include one more layout: a bottom-panel, Windows 10-like layout. This eases the transition for those coming from Windows, and responds to popular requests and reports.

Should We Change The Default?

It has been 13 years since we defaulted to a top panel, but is that the right idea anymore?

Right now, on the Ubuntu Discourse, we have a poll to decide if we should change the default layout starting with 26.04 LTS. This will not affect layouts for anyone upgrading from a prior release, but only new installations or new users going forward.

If you would like to participate in the poll, head on over to the Ubuntu Discourse and cast a vote!

11 December, 2025 06:41PM

Ubuntu Blog: Java 25 now available on Google Cloud Serverless

[December 11, 2025] Today Canonical, the publisher of Ubuntu, announced the immediate availability of Java 25 across Google Cloud’s serverless portfolio, including Cloud Run, App Engine, and Cloud Functions.

This release is the result of a collaboration between Google Cloud and Canonical, and it will allow developers to access the latest Java features the moment they are released publicly. All three serverless products use Ubuntu 24.04 as the base image, with Canonical actively maintaining the runtime and ensuring timely security patches.

Simplified deployment with Buildpacks

Deploying Java 25 is easy and fast thanks to Google Cloud Buildpacks. You do not need to create manual Dockerfiles or manage complex container configurations.

Buildpacks are designed to transform your source code into a production-ready container image automatically. When you deploy your application, the Buildpacks system detects your requested Java version and automatically provisions the Ubuntu-based Java 25 environment, which the Canonical team continuously updates with security fixes. This “source-to-deploy” workflow allows you to focus entirely on writing code while Google Cloud and Canonical handle the underlying OS and runtime security.

Get started

To get started, simply set the GOOGLE_RUNTIME_VERSION environment variable to 25 to select the JDK version.

pack build java-app --builder=gcr.io/buildpacks/builder --env GOOGLE_RUNTIME_VERSION=25

To learn more about Canonical support on Java, please read our reference documentation.

More reading and resources

11 December, 2025 02:38PM

Ubuntu Blog: How to launch a Deep Learning VM on Google Cloud

Setting up a local Deep Learning environment can be a headache. Between managing CUDA drivers, resolving Python library conflicts, and ensuring you have enough GPU power, you often spend more time configuring than coding.

Google Cloud and Canonical work together to solve this with Deep Learning VM Images, which use Ubuntu Accelerator Optimized OS as the base OS. These are pre-configured virtual machines optimized for data science and machine learning tasks. They come pre-installed with popular frameworks, such as PyTorch, and the necessary NVIDIA drivers.

In this guide, I’ll walk you through how to launch a Deep Learning VM on GCP using the Console, and how to verify your software stack so you can start training immediately.

Why use a Deep Learning VM?

  • Pre-installed frameworks: No need to pip install generic libraries manually.
  • GPU-ready: NVIDIA drivers are pre-installed and verified.
  • Jupyter integration: Seamless access to JupyterLab right out of the box.

How to make a Deep Learning VM in GCP

Step 1: Navigate to the GCP Marketplace

First, log in to your Google Cloud Console. Instead of creating a generic Compute Engine instance, we want to use a specialized image from the Marketplace.

  1. Open the Google Cloud Console.
  2. In the search bar at the top, type “Deep Learning VM”.
  3. Select the product named Deep Learning VM published by Google.

Step 2: Configure your instance

Once you are on the Marketplace Deep Learning VM listing page, click Launch. This will take you to the deployment configuration screen. This is where you define the power behind your model.

Here are the key settings you need to pay attention to:

  • Zone: Make sure to select a zone that supports the specific GPU you want to use (in my case, I selected the us-central1-f zone).
  • Machine Type: Choose a CPU/RAM combination that meets your requirements if you don’t need a GPU.
  • GPU Type: You can add your GPU type, such as the NVIDIA T4, A100, or H100. 

Configuring the VM instance in the Google Cloud Console.

Once you have made your selections, click Deploy.

Step 3: Connect and verify

After a minute or two, your VM will be deployed. You can find it listed in your Compute Engine > VM Instances page.

To access the machine, click the SSH button next to your new instance. This opens a terminal window directly in your browser.

Step 4: Check the software stack & drivers

Now, let’s make sure everything is working under the hood.

1. Verify NVIDIA drivers

If you have attached a GPU, the most important check is to ensure the drivers have loaded correctly. Run the following command in your SSH terminal:

nvidia-smi

You should see a table listing your GPU (e.g., A100) and the CUDA version.

2. Check pre-installed software

Google’s Deep Learning VMs usually come with PyTorch pre-configured. You can check the installed packages to ensure your favorite libraries are there:

pip show torch
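Both checks can be combined into one defensive Python snippet (a sketch that only reports what is actually visible on the machine, so it degrades gracefully if a piece of the stack is missing):

```python
import importlib.util
import shutil


def stack_report() -> dict:
    """Report which parts of the deep learning stack this VM exposes."""
    report = {
        # True when the NVIDIA driver utilities are on PATH
        "nvidia_smi": shutil.which("nvidia-smi") is not None,
        # True when PyTorch is importable
        "torch_installed": importlib.util.find_spec("torch") is not None,
    }
    if report["torch_installed"]:
        import torch  # imported lazily so the check also runs off-VM
        report["torch_version"] = torch.__version__
        report["cuda_available"] = torch.cuda.is_available()
    return report


if __name__ == "__main__":
    for key, value in stack_report().items():
        print(f"{key}: {value}")
```

On a correctly provisioned Deep Learning VM with a GPU attached, you would expect every entry to report True (plus the installed PyTorch version).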

Conclusion

And that’s it! In just a few minutes, you have built a fully configured Deep Learning environment. You can now start running training scripts directly from the terminal.

Don’t forget: Deep Learning VMs with GPUs can be expensive. Remember to stop your instance when you aren’t using it to avoid unexpected charges!
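Stopping (rather than deleting) the instance keeps your boot disk and its contents while halting VM and GPU billing. As before, the snippet only prints the command for review, and the instance name and zone are the assumptions used earlier:

```shell
# Print the stop command for review; stopping preserves the boot disk
# but halts per-second VM and GPU charges (disk storage still bills).
STOP_CMD="gcloud compute instances stop my-dl-vm --zone=us-central1-f"
echo "$STOP_CMD"
# Run it with: eval "$STOP_CMD"
```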

Learn more about Canonical’s offerings on GCP


11 December, 2025 02:31PM

hackergotchi for Tails

Tails

Tails 7.3.1

Today, we are releasing 7.3.1 instead of 7.3 because a security vulnerability was fixed in a software library included in Tails while we were preparing 7.3. We started the release process again to include this fix.

Changes and updates

  • Update Tor Browser to 15.0.3.

  • Update the Tor client to 0.4.8.21.

  • Update Thunderbird to 140.5.0.

For more details, read our changelog.

Get Tails 7.3.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.3.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.3.1 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.3.1 directly:

11 December, 2025 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E368 Ensino De TIC, Com Artur Coelho

This time, we kidnapped… er… welcomed a very special guest: Artur Coelho. An ICT teacher, trainer, and enthusiast of education and creativity with robotics, AI, 3D, and software programming, he maintains a critical, up-to-date, and informed view of how Information and Communication Technologies (ICT) are taught in Portuguese schools, where he pioneered the introduction of several technologies and built an enviable track record, positively influencing a great number of students and teachers. The conversation was so long and interesting that we will try to kidna… invite him again in the future.

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. The dreadful-quality jingles were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was created by Luís Louro - the mascot Faruk - winner of third place in the mascot contest for Software Freedom Day.

11 December, 2025 12:00AM

December 10, 2025

Ubuntu Blog: Harnessing the potential of 5G with Kubernetes: a cloud-native telco transformation perspective

Telecommunications networks are undergoing a cloud-native revolution. 5G promises ultra-fast connectivity and real-time services, but achieving those benefits requires an infrastructure that is agile, low-latency, and highly reliable. Kubernetes has emerged as a cornerstone for telecom operators to meet 5G demands. In 2025, Canonical Kubernetes delivers a single, production-grade Kubernetes platform with long-term support (LTS) and telco-specific optimizations, deployable across clouds, data centers, and the far edge.

This blog explores how Canonical Kubernetes empowers 5G and cloud-native telco workloads with high performance, enhanced platform awareness (EPA), and robust security, while offering flexible deployment via snaps, Juju, or Cluster API. We’ll also highlight its integration into industry initiatives like Sylva, support for GPU/DPU acceleration, and synergy with MicroCloud for scalable edge infrastructure.

The rise of the cloud-native telco

Telecom decision-makers face immense pressure to evolve their networks rapidly and cost-effectively. Traditional, hardware-centric architectures struggle to keep pace with 5G’s requirements for low latency, high throughput, and dynamic scaling. This is where Kubernetes – the de facto platform for cloud-native applications – comes in. Kubernetes brings powerful automation, scalability, and resiliency that allow telcos to manage network functions like software across data centers, public clouds, and far-edge deployments. The result is a more agile operational model: services can be rolled out faster, resources automatically optimized to demand, and updates applied continuously without disrupting critical services. In the 5G era, such agility is essential for delivering innovations like network slicing, multi-access edge computing (MEC), and AI-driven services.

At the same time, Kubernetes opens the door for telcos to refactor their network functions into microservices. Instead of relying on monolithic appliances or heavy virtual machines, operators can deploy cloud-native network functions (CNFs) – essentially containerized network services – that are lighter and faster to roll out than traditional virtual network functions (VNFs). By shifting to CNFs, new network features (whether a 5G core component or a firewall) can be introduced or updated in a fraction of the time, using automated CI/CD pipelines instead of lengthy manual upgrades. This approach helps telcos simplify the migration from legacy systems to a more agile, software-driven network model.

However, adopting Kubernetes for telecom workloads also means meeting rigorous performance and reliability standards. Carrier-grade services like voice, video, and core network functions can’t tolerate unpredictable delays or downtime. Telco leaders need a Kubernetes platform that combines cloud-native flexibility with telco-grade performance, security, and support. Canonical Kubernetes answers that call, providing a Kubernetes distribution specifically tuned for telecommunications needs.

Canonical Kubernetes: optimized for cloud-native 5G networks and edge computing

Canonical’s Kubernetes distribution has been engineered from the ground up to address the unique challenges of 5G and cloud-native telco cloud deployments. It is a single, unified Kubernetes offering that blends the ease of use of lightweight deployments with the robustness of an enterprise-grade platform. Importantly, Canonical Kubernetes can be deployed and managed in whatever way best fits a telco’s environment – whether installed as a secure snap package or integrated with full automation tooling like Juju (model-driven operations) or Kubernetes Cluster API (CAPI). This flexibility means operators can start small at the network edge or scale up to carrier-core clusters, all using the same consistent platform. Notably, Canonical Kubernetes brings cloud-native telco-friendly capabilities in the areas of performance, networking, operations, and support:

High performance & low latency

Real-time Linux kernel support ensures that high-priority network workloads execute with predictable, ultra-low latency, a critical requirement for functions like the 5G user plane function (UPF). In parallel, built-in support for advanced networking (including SR-IOV and DPDK) enables fast packet processing by giving containerized network functions direct access to hardware, dramatically reducing network I/O latency for high-bandwidth 5G applications. Together, these features allow cloud-native network functions to meet stringent performance and determinism requirements once achievable only on specialized telecom hardware.

GPU acceleration

Canonical Kubernetes integrates seamlessly with acceleration technologies to support emerging cloud-native telco workloads. It works with NVIDIA’s GPU and networking operators to leverage hardware accelerators (GPUs, SmartNICs, DPUs) for intensive tasks. It supports NVIDIA’s Multi-Instance GPU (MIG), which expands the performance and value of NVIDIA’s data center GPUs, such as the latest GB200 and RTX PRO 6000 Blackwell Server Edition, by partitioning the GPU into up to seven instances, each fully hardware-isolated with its own high-bandwidth memory, cache, and streaming multiprocessors. The partitioned instances are transparent to workloads, which greatly optimizes the use of resources and allows workloads to be served with guaranteed QoS.

This means telecom operators can run AI/ML analytics, media processing, or virtual RAN computations that take advantage of GPUs and DPU offloading within their Kubernetes clusters – all managed under the same platform. By tapping into hardware acceleration, telcos can deliver advanced services (like AI-driven network optimization or AR/VR streaming) with high performance, without needing separate siloed infrastructure.
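As a concrete illustration, once a GPU operator exposes MIG slices as schedulable resources, a workload can request one directly in its pod spec. The following is a sketch, not a verified manifest: the resource name (`nvidia.com/mig-1g.5gb`), the container image tag, and the pod name are assumptions that depend on the GPU model and the operator’s MIG strategy.

```yaml
# Hypothetical pod requesting a single MIG slice; the resource name and
# image are assumptions tied to the GPU operator's configuration.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda-smoke-test
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi", "-L"]
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1
```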

Operational efficiency and automation

Day-0 to Day-2 operations are streamlined through automation in Canonical’s stack. The distribution supports full lifecycle management – clusters can be deployed, scaled, and updated via one-step commands or integrated CI/CD pipelines, reducing manual effort and errors. Using Juju charms, Canonical’s model-driven operations further simplify complex orchestration, enabling teams to configure and update Kubernetes and related services in a repeatable, declarative way. Built-in self-healing and high availability features ensure that the platform can recover from failures automatically, keeping services running without intervention.

This high degree of automation translates into faster rollout of new network functions and updates (with minimal downtime), allowing telco teams to focus on innovation rather than routine ops tasks.

Edge flexibility

Canonical Kubernetes is designed to run from the core to the far edge with equal ease. Its lightweight, efficient design (delivered as a single snap package) results in a low resource footprint, making it viable even on a one- or two-node edge cluster in a remote site. At the same time, it scales up to multi-node deployments for central networks. The platform supports a variety of configurations – from a single node for an ultra-compact edge appliance, to a dual-node high-availability cluster, to large multi-node clusters for data centers – all with the same tooling and consistent experience.

This flexibility allows operators to extend cloud capabilities to edge locations (for ultra-low latency processing) while managing everything in a unified way. In practice, Canonical’s solution can power cloud-native telco IT workloads, 5G core functions, and edge applications under one umbrella, meeting the specific performance and latency needs of each environment.

Long-term support and stability

Canonical backs its Kubernetes with long-term support options far exceeding the typical open-source release cycle. Each Canonical Kubernetes LTS version can receive security patches and maintenance for up to 15 years, ensuring a stable foundation for cloud-native telco services over the entire 5G rollout and beyond. (For comparison, upstream Kubernetes offers roughly 1 year of support per release).

This extended support window means carriers can avoid frequent, disruptive upgrades and rest assured that their infrastructure remains compliant over the long term. Such a commitment to stability is a key reason telecom operators choose Canonical – long-term maintenance provides confidence that critical network workloads will run on a hardened, well-maintained platform for many years.

Cost efficiency and vendor neutrality

As an open-source, upstream-aligned distribution, Canonical Kubernetes has no licensing costs and prevents vendor lock-in. Telcos are free to deploy it on their preferred hardware or cloud, and they benefit from a large ecosystem of Kubernetes-compatible tools and operators. The platform’s efficient resource usage and automation also help drive down operating costs – by improving hardware utilization and simplifying management, it enables operators to serve growing traffic loads without linear cost increases. In short, Canonical’s Kubernetes offers carrier-grade performance and features at a fraction of the cost of proprietary alternatives, all while keeping the operator in control of their technology roadmap.

Enabling a new wave of cloud-native telco services

Using Canonical Kubernetes, cloud-native telcos can position themselves to innovate faster and operate more efficiently in the 5G era. They can readily stand up cloud-native 5G Core functions, scale out Open RAN deployments, and push applications to the network edge – all on a consistent Kubernetes foundation. In fact, Kubernetes makes it feasible for telcos to transition from traditional VNFs on virtual machines to containerized CNFs, reducing resource overhead and speeding up deployment of network features. This means legacy network applications can be modernized step-by-step and run alongside new microservices on the same platform, avoiding risky “big bang” overhauls.

The result is not only technical efficiency but business agility: operators can launch new services (from enhanced mobile broadband to IoT analytics) in weeks instead of months, respond quickly to customer demand spikes, and streamline the integration of new network functions or vendors.

Early adopters in the industry are already seeing the benefits. For example, Canonical’s Kubernetes has been embraced in initiatives like the European Sylva open telco cloud project, in part due to its security, flexibility and long-term support advantages. This momentum underscores that a performant, open Kubernetes platform is becoming a strategic asset for telcos aiming to stay ahead in a competitive landscape. Perhaps most importantly, Canonical Kubernetes lets telcos focus on delivering value to subscribers – ultra-reliable connectivity, rich digital services, tailored enterprise solutions – rather than getting bogged down in infrastructure complexity. It abstracts away much of the heavy lifting of deploying and upgrading distributed systems, while providing the controls needed to meet strict cloud-native telco requirements. The combination of automation, performance tuning, and openness creates a powerful engine for telecom innovation.

Cloud-native at any scale: Canonical Kubernetes meets MicroCloud

At the edge, complexity is the enemy. That’s why Canonical Kubernetes pairs naturally with MicroCloud, our lightweight production-grade cloud infrastructure for distributed environments. MicroCloud fits the edge use case extremely well: it is easy to deploy, fully automated, and optimized for bare-metal and low-power sites. Drop it into a telco cabinet, regional hub, or remote data center, and you get a resilient control plane for running Kubernetes, virtualization, and storage with zero overhead.

In such deployments, MicroCloud and Canonical Kubernetes form a tightly integrated stack that brings cloud-native operations to the far edge. Need to orchestrate CNFs next to VMs? Spin up a single-node cluster with high availability? Scale to dozens of locations without rearchitecting? This combo makes it possible, with snaps for simple updates, Juju for full automation, and long-term support built in.

Conclusion: building the future of cloud-native telco on open source Kubernetes

5G and edge computing are reshaping telecom networks, and Kubernetes has proven to be an essential technology powering this evolution. Industrial IoT, automotive applications, smart cities, robotics, remote health care, and the gaming industry rely on high data transfer rates, close-to-real-time latency, and very high availability and reliability. Canonical Kubernetes brings the best of cloud-native innovation to the telecom domain in a form that aligns with carriers’ operational realities and performance needs. It delivers a rare mix of benefits – agility and efficiency from automation, high performance for demanding workloads, freedom from lock-in, and assured long-term support – making it a compelling choice for any telco modernizing its infrastructure.

Telecommunications leaders looking to become cloud-native telcos should consider how an open-source platform like Canonical Kubernetes can serve as a foundation for growth. Whether the goal is to reduce operating costs in the core network, roll out programmable 5G services at the edge, or simply break free from proprietary constraints, Canonical’s Kubernetes distribution provides a proven path forward.

Explore further

To dive deeper into how Canonical Kubernetes meets telco performance and reliability requirements, we invite you to read our detailed white paper: Addressing telco performance requirements with Canonical Kubernetes. It offers in-depth insights and benchmark results from real-world cloud-native telco scenarios. Additionally, visit our blogs on Ubuntu.com and Canonical.com for more success stories and technical guides – from 5G network modernization strategies to edge computing.

Visiting MWC 2026? Book a meeting with Canonical to find out more.

10 December, 2025 03:47PM

hackergotchi for Purism PureOS

Purism PureOS

Purism Liberty Phone Exists vs. Delayed T1 Phone

NBC News reports that Trump Mobile customers have been waiting months for a promised ‘Made in the USA’ smartphone, originally announced for August delivery. The T1 phone was marketed as domestically produced, but delays and vague updates have raised skepticism. References to ‘Made in the USA’ have been removed from the company’s site, and leaked images suggest the device resembles existing Chinese-made models. This situation underscores the complexity of building smartphones in America without established infrastructure.

The post Purism Liberty Phone Exists vs. Delayed T1 Phone appeared first on Purism.

10 December, 2025 03:36PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The rhythm of reliability: inside Canonical’s operational cadence

In software engineering, we often talk about the “iron triangle” of constraints: time, resources, and features. You can rarely fix all three. At many companies, when scope creeps or resources get tight, the timeline is often the first element of the triangle to slip.

At Canonical, we take a different approach. For us, time is the fixed constraint.

This isn’t just about strict project management. It is a mechanism of trust. Our users, customers, and the open source community need to know exactly when the next Ubuntu release is coming. To deliver that reliability externally, we need a rigorous operational rhythm internally, and for over 20 years, we have honored this commitment.

Here is how we orchestrate the business of building software, from our six-month cycles to the daily pulse of engineering:

Fig. 1 Canonical’s Operating Cycle

The six-month cycle

Our entire engineering organization operates on a six-month cycle that aligns with the Ubuntu release cadence. This cycle is our heartbeat. It drives accountability and ensures we ship features on time.

To make this work, we rely on three critical control points:

  • Sprint Readiness Review (SRR): This is where prioritization happens. Before a cycle begins, we don’t just ask “what fits?”: we ask “what matters?” We go through feedback to find the most valuable engineering opportunities, ensuring we prioritize quality and impact over volume. We don’t start the work until we know the scope is worth the effort.
  • Product Roadmap Sprint: The SRR culminates in this one-week, face-to-face event. This is the formal moment of truth where we close out the previous cycle and leadership signs off on the plan for the next one. It ensures that every team leaves the room with a clear, approved mandate.
  • Midcycle Review: Three months in, we hold a “Virtual Sprint” to check our progress. Crucially, we review any “bad news”, in which we immediately identify items that will not ship or are at risk. By addressing what won’t happen upfront, leadership can make informed decisions to course-correct immediately rather than letting a deadline slip.

This discipline ensures we stay agile, and that we can adjust our trajectory halfway through without derailing the entire delivery.

The two-week pulse

While the six-month cycle sets the destination, the “pulse” gets us there. A pulse is our version of a two-week agile sprint.

Crucially, these pulses are synchronized across the entire company, on a cross-functional basis. Marketing, Sales, and Support all operate on this same frequency. When a team member says, “we will do it next pulse,” everyone, regardless of department, knows exactly what that means. This creates a shared expectation of delivery that keeps the whole organization moving in lockstep.

Sprints are for in-person connection

We distinguish between a “pulse” (our virtual, two-week work iteration) and a “sprint.” For us, a sprint is a physical, one-week event where teams meet face-to-face.

We are a remote-first company, which makes these moments invaluable. Sprints provide the high-bandwidth communication and human connection needed to sustain us through months of remote execution.

We also stagger these sprints to separate context. Our Engineering Sprints happen in May and November (immediately after an Ubuntu release) so teams can focus purely on technical roadmapping. Commercial Sprints happen in January and July, aligning with our fiscal half-years to focus on business value. This “dual-clock” system ensures that commercial goals and technical realities are synchronized without overwhelming the teams.

Managing the exceptions

Of course, market reality doesn’t always adhere to a six-month schedule. Customers have urgent needs, and high-value opportunities appear unexpectedly. To handle this without breaking our rhythm, we use the Commercial Review (CR) process.

The CR process protects our engineering teams from chaos while giving us the agility to say “yes” to the right opportunities.

  • Protection: We don’t let unverified requests disrupt the roadmap. A Review Board assesses every non-standard request before we make a promise.
  • Conscious trade-offs: If a new request is critical, we ask: “What are we removing to make space for this?” It forces a conscious decision. We review the roadmap and agree on what gets deprioritized to satisfy the new request.

This ensures that when we do deviate from the plan, it is a strategic choice, not an accident.

Quality as a natural habit

Underpinning this entire rhythm is a commitment to quality standards. We follow the Plan, Do, Check, Act (PDCA) cycle, a concept rooted in ISO 9001. While we align with these formal frameworks, it has become a natural habit for us at Canonical.

This operational discipline is what enables up to 15 years of LTS commitment on a vast portfolio of open source components, providing Long-Term Support for the entire, integrated collection of application software, libraries, and toolchains. Offering 15 years of security maintenance on our entire stack is only possible because we are operationally consistent. Long-term stability is the direct result of short-term discipline.

By sticking to this rhythm, we ensure that Canonical remains not just a source of great technology, but a reliable partner for the long haul.

10 December, 2025 12:22PM

hackergotchi for Deepin

Deepin

The COSCon'25 Held: deepin Community Showcases Innovative Achievements in Intelligent Operating Systems

On December 6-7, the 10th China Open Source Conference (COSCon'25) was successfully held at the Legendale Hotel in Haidian District, Beijing. As an annual event that has witnessed the growth of open source in China since 2016, this year's conference brought together open source practitioners from around the globe. Among them, the deepin community presented its latest technological achievements and practical experiences in the field of intelligent operating systems. Through keynote speeches on core technologies and interactive exhibits, deepin demonstrated the path toward intelligent development for open source operating systems. UOS AI: Reshaping the Intelligent Experience of Operating Systems "Open ...

10 December, 2025 09:07AM by xiaofei

hackergotchi for Qubes

Qubes

Qubes Canary 045

We have published Qubes Canary 045. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.

Qubes Canary 045


                    ---===[ Qubes Canary 045 ]===---


Statements
-----------

The Qubes security team members who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is December 10, 2025.

2. There have been 109 Qubes security bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

       427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
   Project (e.g. to hand out the private signing keys or to introduce
   backdoors).

5. We plan to publish the next of these canary statements in the first
   fourteen days of March 2026. Special note should be taken if no new
   canary is published by that time or if the list of statements changes
   without plausible explanation.


Special announcements
----------------------

None.


Disclaimers and notes
----------------------

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.

The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.


Proof of freshness
-------------------

Wed, 10 Dec 2025 01:14:56 +0000

Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Confidential Conference on Ukraine Peace: "We Must Not Leave Ukraine and Volodymyr Alone with These Guys"
Project 2025 Author: "We Won't Let Anyone Stop US from Using Our Oil and Gas"
Remnants of the War: Syrians from Germany Helping with Reconstruction - But Remain Wary of Moving Back
Germany's Queen Mum: Nostalgia for the Merkel Era Alive and Well
Director Nadav Lapid on Israel after Gaza: It Was Our Duty to Scream

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Hundreds of Thousands of Thais and Cambodians Flee
Canada’s Northwest Territories Diamond Mines Are Closing
Another Front in the War in Ukraine: Who Gets to Claim a Famed Artist?
With Cheap Tickets and Lax Etiquette, a Theater Builds an Older Fan Base
Between Pakistan and Afghanistan, a Trade War With No End in Sight

Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
Trump criticises 'decaying' European countries and 'weak' leaders
Nobel officials unsure when Peace Prize winner will arrive for ceremony
Congress ups pressure to release boat strike video with threat to Hegseth's travel budget
French feminists outraged by Brigitte Macron's comment about activists
'What's your name?' - Moment police confront Luigi Mangione at McDonald's

Source: Blockchain.info
0000000000000000000028650dc7d328ea9c1b7e2b5376ce14089586c8ca3041


Footnotes
----------

[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: canary-045-2025.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmk4zrgACgkQ1lWk8hgw
4GpYrhAAiV6nfQk7dgKTljSSM2Bf22DFUl9+7eQOAV2ULbr+7G/CgRKMvezaYtgy
X/0s2NZIXsJDTulhh2j9yAujZqlHv3xbSJuoA7lqo91jrFUr2qYpCadL91uBWcxk
xkxBz5z01ApqHT8kgk/galfWqRj7f54A+YkmKHw3hynhMd+KaD10V3t2xBEW8a2X
7lsWRrhXGRVWsahDHgG5uZOA4spUlbRUiSlBkIo+ijeMQYxwu8CXQl3mwBeaI7jB
/D/J9dNz8denRDknD1Fr8NvRFbKchL9S0ntAt3yvZqDLhwGX5J0bnEDpS2fDWi15
mDjLe7RtGfI77P9yjwvv/XXb64Mhdta5v4nXeKD+IdnM3IAmPvlkXDWvrNHA9jiF
31sYC0J0K/8qHniIZ61tjtPbTAMF7uXS5FSMnSt2xlhVsZEBXBN+4wKwVrSTCvwL
7H5zKWY0eWllf1Va3ez5rM9lCSkvWZAC/yDIqDZ2ZclxlKUOOCdHNOqBzjR1+Nex
CyPrZ6wV/nDyVOpkZcMd+a7Q/brdT2zfELL39oeKDVv8w8+APC1wEbrTv56BvYVN
TgCG62gLIdN/RwtwjjDR9IHB0rIH/x7Rimhsu5gHBUBSNtyIwwK/vIS3hKv1qvZI
MxTVBDAZfZbJH1SwhioXChwTRsVXILZOFCTGgr3iltudNSn3uqY=
=JXd7
-----END PGP SIGNATURE-----

Source: canary-045-2025.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmk5NNsACgkQSsGN4REu
FJCT3hAAn0C8K0+3573tTDIcXnZU0SClejmgkmmeY1wgYktMysQKjw/T9FtkOt1f
+e52uo2dwJ93Df+uJyQIvhd2UUA+p8yQtg8rA9svsOqiN6LfUO+hSTGPMUM63BSS
T3YuaFEsO9ll31iOssmm3CaQj5ERUMIiGdHDgHbOx0hAMDPKjBRfshTt1IJ6OOc6
q4DGOgeXNiv6wvlKIgByA/d41K9prkXm/DQ95PfV2cGBPk5fw2DrM0ISij2eyGHk
Z9r4BI15fj36OjtzfM+f1KMUeR/UDKtn4+tmVr/dLbEA9gRMcgy7Uzh1soKJcfyv
L8TK9GMSzypKk1oJTojvoLPjU0CikNnEhr4YzsOpeJ0tYRC+oSM7anh84QR4coyZ
jJxZLQnmIz3KUTOeJDoxuFguk2ItygRxxvYJuwMB3Y34dY1vP0TOOPHkEVhLllgK
HPi+78al3YkjPctZ04UpbqoI2wnRSCpQcd8JH8hBhi57LnPfOkYeKpBQX6Q8gqtG
RtwfB5cdrl7Y7EkZCbp/E+ieOt2MSzhwwyqAMsouQTnyuGcOdmZmu937Q03+kr9H
VMOFrwrmgzYXSw7oUzD3TLIScWxGf2ZLacU4ShWa0HGi0z/tg7a2F4xiVJPzRuQ7
8gBi69St+rZfCUdmywvtpH11htMZZSFHddtFWUl+EwQoO59fGqI=
=a3wa
-----END PGP SIGNATURE-----

Source: canary-045-2025.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.

What is a Qubes canary?

A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong.

The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)

Why should I care about canaries?

Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.

What are some signs of an unhealthy canary?

Here is a non-exhaustive list of examples:

  • Dead canary. In each canary, we state a window of time during which you should expect the next canary to be published. If no canary is published within that window of time and no good explanation is provided for missing the deadline, then the canary has died.
  • Missing statement(s). Canaries include a set of numbered statements at the top. These statements are generally the same across canaries, except for specific numbers and dates that have changed since the previous canary. If an important statement was present in older canaries but suddenly goes missing from new canaries with no correction or explanation, then this may be an indication that the signers can no longer truthfully make that statement.
  • Missing signature(s). Qubes canaries are signed by the members of the Qubes security team (see below). If one of them has been signing all canaries but suddenly and permanently stops signing new canaries without any explanation, then this may indicate that this person is under duress or can no longer truthfully sign the statements contained in the canary.

Does every unexpected or unusual canary-related event mean that something bad has happened?

No, there are many canary-related possibilities that should not worry you. Here is a non-exhaustive list of examples:

  • Unusual reposts. The only canaries that matter are the ones that are validly signed in the Qubes security pack (qubes-secpack). Reposts of canaries (like the one in this announcement) do not have any authority (except insofar as they reproduce validly-signed text from the qubes-secpack). If the actual canary in the qubes-secpack is healthy, but reposts are late, absent, or modified on the website, mailing lists, forum, or social media platforms, you should not be concerned about the canary.
  • Last-minute signature(s). If the canary is signed at the last minute but before the deadline, that’s okay. (People get busy and procrastinate sometimes.)
  • Signatures at different times. If one signature is earlier or later than the other, but both are present within a reasonable period of time, that’s okay. (For example, sometimes one signer is out of town, but we try to plan the deadlines around this.)
  • Permitted changes. If something about a canary changes without violating any of the statements in prior canaries, that’s okay. (For example, canaries are usually scheduled for the first fourteen days of a given month, but there’s no rule that says they have to be.)
  • Unusual but planned changes. If something unusual happens, but it was announced in advance, and the appropriate statements are signed, that’s okay (e.g., when Joanna left the security team and Simon joined it).

In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something unusual happens with a canary, and we say it was a mistake and correct it (with valid signatures), you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.

What are the PGP signatures that accompany canaries?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.

Why should I care whether a canary is authentic?

If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced.

Alternatively, an adversary could fabricate a canary in an attempt to deceive the public. Such a canary would not be validly signed, but users who neglect to check the signatures on the fake canary would not be aware of this, so they may mistakenly believe it to be genuine, especially if it closely mimics the language of authentic canaries. Such falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a canary?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
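    As one way to record the fingerprint (a sketch; the output path is an arbitrary example), you can parse gpg's machine-readable output, where the fingerprint is field 10 of the `fpr` line:

```shell
# Print the primary key fingerprint in machine-readable form and save a copy.
# In 'gpg --with-colons' output, the fingerprint is the 10th colon-separated
# field of the line beginning with 'fpr:'.
gpg --with-colons --fingerprint 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494 \
  | awk -F: '/^fpr:/ {print $10; exit}' > ~/qmsk-fingerprint.txt
cat ~/qmsk-fingerprint.txt
```

    On a later machine or reinstall, you can diff the saved fingerprint against a freshly imported key instead of re-doing the full out-of-band check from scratch.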

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

    The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (Qubes Canary 045), the commands are:

$ gpg --verify canary-045-2025.txt.sig.marmarek canary-045-2025.txt
$ gpg --verify canary-045-2025.txt.sig.simon canary-045-2025.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 045 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

10 December, 2025 12:00AM

December 09, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical to distribute AMD ROCm AI/ML and HPC libraries in Ubuntu

Canonical is pleased to announce an expanded collaboration with AMD to package and maintain AMD ROCm™ software directly in Ubuntu. AMD ROCm is an open software ecosystem to enable hardware-accelerated AI/ML and HPC workloads on AMD Instinct™ and AMD Radeon™ GPUs, simplifying the deployment of AI infrastructure with long term support from Canonical.

Canonical has formed a dedicated team of engineers to package the AMD ROCm software libraries to streamline installation, support, and long-term maintenance on Ubuntu. Canonical will also submit these packages for consideration in Debian.

This work will simplify the delivery of AMD AI solutions in data centers, workstations, laptops, Windows Subsystem for Linux, and edge environments. AMD ROCm software will be available as a dependency for any Debian package, snap, or Docker image (OCI) build.  Performance fixes and security patches will automatically be available to production systems.

This collaboration aims to make AMD ROCm software available in Ubuntu starting with Ubuntu 26.04 LTS, with updates available in every subsequent Ubuntu release.  

AMD ROCm software: a commitment to open source 

Canonical works with silicon industry leaders to incorporate the software libraries and drivers that accelerate applications on their silicon directly into Ubuntu. Comprehensive support for the latest silicon dramatically accelerates developer adoption and production deployments.

For AMD, the software that enables hardware-accelerated AI processing is called ROCm. It is an open software platform that includes runtimes, compilers, libraries, kernel components, and drivers that together accelerate industry-standard frameworks such as PyTorch, TensorFlow, JAX, and more on supported AMD GPUs and APUs.

“AMD ROCm software enables open, high-performance acceleration for AI and HPC on AMD hardware. Working with Canonical to package AMD ROCm for Ubuntu makes it easier for developers and enterprises to deploy AMD solutions on supported systems,” said Andrej Zdravkovic, Senior Vice President, GPU Technologies and Engineering Software and Chief Software Officer at AMD.     

Packaging AMD ROCm in Ubuntu underscores the strong AMD commitment to developer experience and enterprise experience:

  • Simpler installation with ‘apt install rocm’ or as an automatic dependency for other projects, like ollama-amd.
  • Both stable LTS and fresh ROCm versions will be available every six months, ensuring immediate support for the latest hardware and software.
  • Easy security fixes and performance improvements (just “apt upgrade”).
  • Up to 15 years of support for AMD ROCm in Ubuntu LTS versions under Ubuntu Pro. 
  • Personal Ubuntu Pro subscriptions are free.
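Assuming the packaging lands as described, the day-two workflow on Ubuntu 26.04 LTS would look roughly like this (the rocm package name is taken from the announcement above; everything else is an illustrative sketch, not official instructions):

```shell
# Hedged sketch of the announced workflow on Ubuntu 26.04 LTS or later.
sudo apt update
sudo apt install rocm    # ROCm runtimes and libraries, per the announcement

# Later, security fixes and performance improvements arrive the usual way:
sudo apt upgrade
```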

“We are delighted to work alongside AMD and the community to package AMD ROCm libraries directly into Ubuntu,” said Cindy Goldberg, SVP of Silicon and Cloud Alliances at Canonical. “This will simplify the use of AMD hardware and software for AI workloads, and enable organizations to meet security and maintenance requirements for production use at scale.”

Improved hardware support

Canonical works closely with hardware manufacturers to test, optimize, and certify Ubuntu for their devices, and to integrate the required software drivers and kernel patches to support that hardware. Thanks to this extensive hardware program, Ubuntu runs equally well on laptops, workstations, servers, and IoT/edge devices, and developers have a seamless path from development through to deployment.

09 December, 2025 05:38PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

UCS 5.2-4 released

Before the year comes to an end, we are releasing UCS 5.2-4, the final cumulative update for 2025. It includes several noteworthy enhancements, among them a preview of the new delegated administration in UDM.

Updates and Improvements

As always, the new release contains numerous updates and smaller improvements. A selection:

  • Synchronization of the “locked” status of user accounts across Nubus, Active Directory and Samba 4 has been unified. This ensures that, for example, a lock set in Active Directory after too many failed login attempts also applies in Nubus.
  • The App Center has been improved in its handling of filtering proxy servers and will therefore support even more environments in the future.
  • Several security updates for Keycloak were included, most recently a short-notice update to Keycloak 26.4.4 addressing an issue affecting accounts with uppercase characters in their names.
  • Default values for the size of the OpenLDAP database have been adjusted and incorporated into the monitoring checks in Nagios and Prometheus as well as UMC.
  • The OX Connector now supports a new operating mode that stores occurring issues in an error log for later review.

A Farewell: UCS as a PXE Server

With the release of UCS 5.2-4, we are discontinuing support for UCS as a PXE server. Until now, a UCS instance could be used to provide the network installation environment for additional UCS installations. However, customer surveys show that this function is not being used. Instead, alternatives such as hypervisor images or software deployment solutions with PXE support, such as OPSI, are typically employed.

This change does not affect the actual installation of UCS, which can still be automated via profiles.

Preview of the New Delegated Administration

Through errata updates in recent months, Nubus has been prepared for a new form of delegated administration. This will make it even easier to delegate user and group management for specific parts of the directory.

A typical use case is the organization of accounts into Organizational Units (OUs), that is, separate subtrees in the Directory Service. Organizations can store all accounts belonging to a department, division or subsidiary within such an OU. Administrative rights for managing users and groups within this OU can then be assigned to individuals without granting them any additional administrative privileges. These delegated administrators see only the contents of “their” OUs and can edit users and groups there or create new users.

The implementation is already available as a preview in UCS and is being tested in close collaboration with project partners. If you are interested in testing it, please contact us before using the feature in a production environment.

Outlook: Recycle Bin

UCS 5.2-4 also includes preparations for additional new features in Univention Nubus. One of these is a frequently requested enhancement: a recycle bin for the Univention Directory Manager. This will make it possible to easily restore accidentally deleted users and groups. We will share more information about this soon here on the blog and at the Univention Summit at the end of January.

Summary and Outlook

As always, the update includes numerous other security and feature improvements, which are detailed in the release notes and the help article.

We look forward to your feedback, either here on the blog or at help.univention.com.

Der Beitrag UCS 5.2-4 released erschien zuerst auf Univention.

09 December, 2025 02:25PM by Ingo Steuwer

hackergotchi for GreenboneOS

GreenboneOS

React2Shell: A Critical React and Next.js Flaw Is Actively Exploited

Update: Three additional React Server Components (RSC) flaws have been identified, which require further patching:

  • CVE-2025-55184 (CVSS 7.5) and CVE-2025-67779 (CVSS 7.5): Both flaws allow pre-authenticated Denial of Service (DoS). CVE-2025-67779 is considered a bypass of the original React2Shell patch. However, exploitation does not allow remote code execution (RCE).
  • CVE-2025-55183 (CVSS 5.3): […]

09 December, 2025 12:50PM by Joseph Lee

hackergotchi for Deepin

Deepin

December 08, 2025

hackergotchi for VyOS

VyOS

VyOS Project November 2025 Update

Hello, Community!

The update for November is here! There are two big features: TLS support for syslog and IPFIX support in VPP, good progress in replacing the old configuration backend, and multiple bug fixes.

08 December, 2025 01:03PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How telco companies can reduce 5G infrastructure costs with modern open source cloud-native technologies

5G continues to transform the telecommunications landscape, enabling massive device density, edge computing, and new enterprise use cases. However, operators still face significant cost pressures: from accelerating RAN modernization and 5G SA rollouts to energy demands and the shift to cloud-native network functions (CNFs). As telcos redesign their infrastructure strategies, open source has become a key lever to reduce costs, increase flexibility, and accelerate innovation.

This blog outlines today’s primary 5G infrastructure challenges and highlights how modern open source cloud technologies from Canonical help operators address them.

The telco dilemma: 5G infrastructure challenges

With the advancements of 5G and more complex deployments, telcos face several challenges in building and maintaining 5G infrastructure, including:

  1. High investment costs: 5G infrastructure requires significant investment in new hardware and software, especially for hosting the virtualization infrastructure necessary to run 5G software
  2. Rising OPEX and energy costs: Power consumption of distributed 5G sites is now one of the largest operational expenses.
  3. Cloud-native complexity: Moving from virtualized network functions (VNFs) to cloud-native network functions (CNFs) increases the need for Kubernetes-scale automation and observability.
  4. Disaggregated RAN and multi-vendor integrations: Open RAN and virtualised RAN require consistent infrastructure, automation and lifecycle management.
  5. Limited spectrum: The available spectrum for 5G is limited and highly regulated, which can make it difficult for telcos to acquire and use.
  6. Edge footprint explosion: 5G MEC deployments increase the number of sites operators must manage.
  7. Talent and skills gaps: Cloud-native and Kubernetes skills remain scarce in telecom operations teams.
  8. Security: 5G networks are vulnerable to cyber attacks, which can compromise the security and privacy of users’ data. The attack surface is larger with 5G compared to previous generations of mobile networks.
  9. Vendor Lock-in: a telecom operator is heavily dependent on one or a few vendors for all of its 5G network infrastructure and services, making it difficult for the operator to switch to another vendor without incurring significant costs and disruption to its network.

How open source is changing the game

Open source plays a central role in enabling telcos to modernise their networks: from VNF virtualization to full cloud-native CNF deployments. By standardizing on open platforms like Ubuntu, Kubernetes and OpenStack, operators reduce infrastructure licensing costs, improve interoperability, and accelerate innovation. Today, most large operators run the majority of their 5G core workloads on open source infrastructure.

Shared standards

Open source communities, including CNCF, O-RAN Alliance and Project Sylva, provide common frameworks that reduce integration effort. By adopting open standards, operators can more easily mix vendors and ensure long-term ecosystem interoperability.

Avoid vendor lock-in

In line with the development of shared standards, open source solutions can help avoid vendor lock-in by providing access to code that can be modified and adapted to meet specific needs. This means that telcos and ISVs can avoid being tied to a particular vendor or technology stack and choose the best solutions for their specific requirements instead.

Meet specific-telco requirements

Telcos have demanding requirements when it comes to performance, reliability, and security. Long-term support (LTS) is important in the telco industry, as telcos often have long release deployment cycles. Open source solutions that are supported over the long term, with no API breaks or major changes that could disrupt telco operations (support windows of at least 12 months, and a few years on average), are the foremost choice for telcos. This is usually a vendor-driven decision, but choosing the right open source project with the right vendor is key. Building a telco-grade system means solving many interoperability and integration challenges, so it is reasonable for an operator to expect the support cycle to be as long as possible.

Performance, flexibility, and automation are key requirements in the telco industry, as they enable telcos to operate more efficiently and effectively. By leveraging the expertise of the wider community, telcos, and ISVs can build solutions that are optimised for telco environments and that can be easily customised to meet specific requirements.

Cost optimization

Open source software offers cost savings compared to proprietary solutions, which can be especially beneficial for organizations with limited budgets. With open source software, organizations do not need to pay for licenses, and there are no vendor lock-ins. They can leverage the vast community of developers and users to troubleshoot issues and implement new features. In addition to removing licensing fees, open source automation frameworks significantly reduce operational costs by simplifying CNF lifecycle management, improving energy optimization, and enabling consistent operations across core, edge and RAN deployments.

Security

The telecom sector handles a vast amount of sensitive information, including personal and financial data, making it a prime target for cyber-attacks. There are several data privacy and security concerns that the telecom sector faces, including data breaches, malware attacks, insider threats, lack of compliance, etc. In this regard, open source software vulnerabilities are often patched more quickly than with proprietary software. In addition, open source software is transparent and customisable, making it easier to meet the operator’s unique needs and implement security features that align with their security requirements.

In the sections that follow we provide example applications for open source solutions across the telco stack, with a focus on tooling supported by Canonical.

Open source solutions for telcos

Canonical’s telco portfolio spans the entire network – from RAN compute nodes to edge clouds, MEC platforms, private clouds and public-cloud deployments. Ubuntu and our cloud-native infrastructure stack (MAAS, MicroCloud, OpenStack, Kubernetes and Juju) provide a consistent operational model across all layers of the 5G architecture. This enables telcos to meet any current or future use cases – from Open RAN to next-generation core (5G and beyond) and AI at the edge. Ubuntu Pro is Canonical’s comprehensive subscription for enterprise security, compliance and support.


Open source solutions for telcos

Open source for RAN

vRAN and Open RAN deployments require high performance, low latency and hardware acceleration. Canonical works closely with Intel FlexRAN, NVIDIA Aerial and ARM ecosystem partners to optimize Ubuntu for RAN workloads.

Another major Canonical contribution for RAN edge use cases is MicroCloud, which reproduces the APIs and primitives of the big clouds at the scale of the edge. MicroCloud is targeted at easily deploying and lifecycle-managing distributed micro clouds: bare-metal compute clusters of between 3 and 100 nodes. A Canonical MicroCloud stack consists of several building blocks; the details for each component are covered in our Telco 5G infrastructure whitepaper.

Open source for core networks

Most operators deploy their 5G Core on private clouds to maintain strict performance and security control. Canonical’s reference architecture – MAAS for bare metal, OpenStack for virtualised infrastructure, Kubernetes for CNFs, and Juju for automation – provides a proven, carrier-grade cloud foundation adopted by major network equipment providers (NEPs) and operators globally.


Canonical stack for private clouds

Open source for public and hybrid clouds

Ubuntu is known for its reliability, security, and versatility, making it a popular choice for telecom companies that require a stable and secure operating system to run telco applications in the public cloud. A hybrid cloud architecture combines the usage of a private cloud and one or more public cloud services, with a workload orchestration engine between the platforms. Using Juju, operators can orchestrate and lifecycle-manage the same CNF or VNF stack across private OpenStack, MicroCloud edge clusters, and hyperscalers. Juju automation provides a consistent approach that natively supports all major hyperscaler APIs and is a de facto standard tool for MicroClouds in edge use cases. Additionally, Ubuntu Pro for Public Clouds provides telcos with capabilities based on their unique requirements. Details of these requirements and features from Ubuntu are given in this blog series: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
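As a sketch of this consistency, the same Juju workflow applies whether the substrate is a hyperscaler or a local edge cluster (the controller, model, and bundle names below are illustrative examples, not from the article):

```shell
# Illustrative Juju workflow: identical commands against different substrates.
# 'aws' and 'localhost' are Juju's built-in cloud names; the controller,
# model, and bundle names are hypothetical.
juju bootstrap aws public-ctrl        # controller on a public cloud
juju bootstrap localhost edge-ctrl    # controller on a local LXD-backed substrate

juju add-model core-network           # same model workflow on either controller
juju deploy my-cnf-bundle             # hypothetical CNF bundle from Charmhub
```

The design point is that the operator's runbook does not change per cloud; only the bootstrap target does.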

Wrapping up

5G infrastructure modernization continues to introduce new operational and cost challenges. Open source cloud technologies, combined with Canonical’s automation and long-term support, help operators simplify their architectures, reduce OPEX, and accelerate the shift to cloud-native 5G.

To explore the latest best practices, speak with our telco specialists.

08 December, 2025 12:38PM

hackergotchi for GreenboneOS

GreenboneOS

Greenbone Is Preparing For The Post Quantum Age

Q-Day marks the moment when quantum computers will render classical cryptography standards obsolete. The risks posed by quantum computers demand a migration to Post Quantum Cryptography (PQC). Greenbone is proactively preparing for this future—upgrading our internal infrastructure, auditing partners, and enhancing the OPENVAS SECURITY INTELLIGENCE platform with upgraded detection and new auditing features. The goal […]

08 December, 2025 11:04AM by Greenbone AG

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

December 06, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Salih Emin: Fix Broken Updates: uCareSystem New Release uCareSystem v25.12

Let’s be honest. Dealing with broken updates is a nightmare. We’ve all experienced that moment of panic when you run an update, step away for coffee, and return to a terminal screen full of angry red error messages. That is exactly why uCareSystem exists, and today, it gets even better at preventing these issues.

Fix Broken Updates: uCareSystem New Release

I’m thrilled to announce the latest release of uCareSystem. As the sole developer, I feel the pain of broken updates personally. For this version, I spent a lot of time under the hood, focusing on making sure the tool doesn’t just work when everything is perfect, but proactively fixes issues before they break your system.

Here is why you should upgrade to the new version and say goodbye to broken updates for good:

Prevent Broken Updates with Pre-flight Checks

The most frustrating broken updates are the ones that fail because of something that happened last week.

The new uCareSystem introduces automated pre-flight checks. Think of it as a bouncer for your update process. Before it lets any new packages in, it checks the ID of your system to prevent broken updates. It now automatically detects and attempts to fix:

  • Those annoying stale dpkg locks that require a reboot.
  • Installations that were interrupted (ghosts of updates past).
  • Broken dependencies that threaten to ruin your day.

The goal is simple: You press the button, and it actually works.
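A minimal sketch of what such pre-flight checks can look like is shown below. This is an illustrative approximation, not uCareSystem's actual code; the DPKG_DIR parameter is invented here so the logic can be exercised outside the real /var/lib/dpkg.

```shell
#!/bin/sh
# Illustrative pre-flight checks before an update run (not the real
# uCareSystem code). DPKG_DIR defaults to the real dpkg state directory
# but can be overridden for testing.
DPKG_DIR="${DPKG_DIR:-/var/lib/dpkg}"

preflight() {
    # A leftover lock file suggests a crashed or still-running dpkg.
    if [ -e "$DPKG_DIR/lock-frontend" ]; then
        echo "WARN: stale dpkg lock found"
    fi
    # Files under updates/ mean an installation was interrupted;
    # on a real system, 'dpkg --configure -a' would finish it.
    if [ -n "$(ls -A "$DPKG_DIR/updates" 2>/dev/null)" ]; then
        echo "WARN: interrupted installation detected"
    fi
    echo "preflight done"
}

# Demo against a temporary directory simulating a stale lock.
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/updates"
touch "$demo_dir/lock-frontend"
DPKG_DIR="$demo_dir" preflight
rm -rf "$demo_dir"
```

On a real system the warnings would trigger repair steps (such as `dpkg --configure -a` and `apt-get --fix-broken install`) instead of just messages.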

A UI Upgrade for Better Monitoring

While staring at scrolling walls of monochrome text makes us feel like hackers in a 90s movie, it’s not great for spotting errors.

I’ve given the uCareSystem terminal interface a significant makeover. With improved color coding, better progress indicators, and real-time output logging, you’ll now have a much clearer idea of the process, helping you catch potential issues that could lead to broken updates.

Robustness: Avoiding Broken Updates in Containers

Linux runs everywhere now. To keep up, uCareSystem needs to be flexible.

  • Containers & WSL: I’ve improved systemd detection so the tool plays nicely even in environments like Docker containers and Windows Subsystem for Linux (WSL).
  • Internet Reality Check: I added better connectivity checks. Trying to download updates without a stable connection is a common cause of broken updates, and we now handle that gracefully.
  • Auto-Recovery: If a dpkg process trips and falls midway, the software now has mechanisms to help pick it back up automatically.

Spring Cleaning the Code with ShellCheck

For the code-peepers out there who like to look under the hood: I did some massive spring cleaning. Extensive refactoring and complying with ShellCheck standards mean the codebase is now cleaner and safer. This ensures better maintainability and fewer bugs in the future.


Give the new version a spin. Hopefully, it makes your system maintenance totally boring—and completely free of broken updates.

By the Numbers

This was a significant undertaking, reflected in the statistics for this release:

  • 38 files changed
  • 2,030 additions
  • 718 deletions

Please take a look at the comprehensive release notes in the repository: https://github.com/Utappia/uCareSystem/releases/tag/v25.12.04

This release is a testament to my commitment to quality and my vision for the future of uCareSystem as a one-stop system maintenance tool for Debian and Ubuntu. I am confident that I have laid a stronger foundation that will allow for even more exciting features and faster development in the future.

I am deeply grateful to the community members who supported the previous development cycle through donations or code contributions:

  • P. Laoughman (Thanks for your continued support)
  • W. Schreinemachers (Thanks for your continued support)
  • D. Luchini (Thanks for your continued support)
  • M. Van Hoof
  • Frankie P.
  • M. Ryser
  • Th. Ploumis
  • M. Stade
  • K. J. Rasmussen

Every version also has a code name, dedicated to one of the contributors as a way of honoring them. For historical reference, you can check all previous honored releases.

Where can I download uCareSystem ?

As always, I want to express my gratitude for your support over the past 15 years. I have received countless messages from inside and outside Greece about how useful people have found the application. I hope you find the new version useful as well.

If you’ve found uCareSystem to be valuable and it has saved you time, consider showing your appreciation with a donation. You can contribute via PayPal or Debit/Credit Card by clicking on the banner.

  • Pay what you want: Click the donate button and enter the amount you want to donate. You will then be taken to the page where you can download the installer for the latest version.
  • Maybe next time: If you don’t want to donate this time, just click the download icon to go to the page where you can download the installer for the latest version.

Once installed, the updates for new versions will be installed along with your regular system updates.

The post Fix Broken Updates: uCareSystem New Release uCareSystem v25.12 appeared first on Utappia.

06 December, 2025 03:59PM

hackergotchi for Qubes

Qubes

Qubes OS 4.3.0-rc4 is available for testing

We’re pleased to announce that the fourth release candidate (RC) for Qubes OS 4.3.0 is now available for testing. This minor release includes many new features and improvements over Qubes OS 4.2.

What’s new in Qubes 4.3?

  • Dom0 upgraded to Fedora 41 (#9402).
  • Xen upgraded to version 4.19 (#9420).
  • Default Fedora template upgraded to Fedora 42 (older versions not supported).
  • Default Debian template upgraded to Debian 13 (versions older than 12 not supported).
  • Default Whonix templates upgraded to Whonix 18 (upgraded from 17.4.3 in RC2; versions older than 18 no longer supported).
  • Preloaded disposables (#1512)
  • Device “self-identity oriented” assignment (a.k.a. New Devices API) (#9325)
  • Qubes Windows Tools reintroduced with improved features (#1861).

These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.

When is the stable release?

That depends on the number of bugs discovered in this RC and their severity. As explained in our release schedule documentation, our usual process after issuing a new RC is to collect bug reports, triage the bugs, and fix them. If warranted, we then issue a new RC that includes the fixes and repeat the process. We continue this iterative procedure until we’re left with an RC that’s good enough to be declared the stable release. No one can predict, at the outset, how many iterations will be required (and hence how many RCs will be needed before a stable release), but we tend to get a clearer picture of this as testing progresses.

Barring any surprises uncovered by testing, we expect this fourth RC to be the final one, which means that we hope to declare this RC to be the stable 4.3.0 release at the conclusion of its testing period.

How to test Qubes 4.3.0-rc4

Thanks to those who tested earlier 4.3 RCs and reported bugs they encountered, 4.3.0-rc4 now includes fixes for several bugs that were present in those prior RCs!

If you’d like to help us test this RC, you can upgrade to Qubes 4.3.0-rc4 with either a clean installation or an in-place upgrade from Qubes 4.2. (Note for in-place upgrade testers: qubes-dist-upgrade now requires --releasever=4.3 and may require --enable-current-testing for testing releases like this RC.) As always, we strongly recommend making a full backup beforehand and updating Qubes OS immediately afterward in order to apply all available bug fixes.
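Put together, the in-place upgrade invocation described above might look like the following sketch. It only assembles and prints the command rather than executing it, since the real invocation must be run in dom0 on an actual Qubes system.

```shell
#!/bin/sh
# Build the in-place upgrade command described above. Printed rather
# than executed, since it must run in dom0 on a real Qubes system.
releasever="4.3"
testing=1    # 1 for testing releases such as this RC

cmd="sudo qubes-dist-upgrade --releasever=$releasever"
if [ "$testing" -eq 1 ]; then
    cmd="$cmd --enable-current-testing"
fi
echo "$cmd"
```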

If you’re currently using an earlier 4.3 RC and wish to update to 4.3.0-rc4, please update normally with current-testing enabled. If you use Whonix, please also upgrade from Whonix 17 to 18, if you have not already done so.

Please help us improve the eventual stable release by reporting any bugs you encounter. If you’re an experienced user, we encourage you to join the testing team.

Known issues in Qubes OS 4.3.0-rc4

Templates restored in 4.3.0-rc4 from a pre-4.3 backup may continue to target their original Qubes OS release repos. This does not affect fresh templates on a clean 4.3.0-rc4 installation. For more information, see issue #8701.

View the full list of known bugs affecting Qubes 4.3 in our issue tracker.

What’s a release candidate?

A release candidate (RC) is a software build that has the potential to become a stable release, unless significant bugs are discovered in testing. RCs are intended for more advanced (or adventurous!) users who are comfortable testing early versions of software that are potentially buggier than stable releases. You can read more about Qubes OS supported releases and the version scheme in our documentation.

What’s a minor release?

The Qubes OS Project uses the semantic versioning standard. Version numbers are written as [major].[minor].[patch]. Hence, releases that increment the second value are known as “minor releases.” Minor releases generally include new features, improvements, and bug fixes that are backward-compatible with earlier versions of the same major release. See our supported releases for a comprehensive list of major and minor releases and our version scheme documentation for more information about how Qubes OS releases are versioned.
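As a quick illustration of the scheme, a version string splits into its components like so (a trivial sketch):

```shell
#!/bin/sh
# Split a [major].[minor].[patch] version string into its components.
# Under this scheme, 4.2 -> 4.3 is a minor release, while
# 4.3.0 -> 4.3.1 would be a patch release.
version="4.3.0"
IFS=. read -r major minor patch <<EOF
$version
EOF
echo "major=$major minor=$minor patch=$patch"
```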

06 December, 2025 12:00AM

December 05, 2025

hackergotchi for Maemo developers

Maemo developers

Meow: Process log text files as if you could make cat speak

Some years ago I had mentioned some command line tools I used to analyze and find useful information on GStreamer logs. I’ve been using them consistently along all these years, but some weeks ago I thought about unifying them in a single tool that could provide more flexibility in the mid term, and also as an excuse to unrust my Rust knowledge a bit. That’s how I wrote Meow, a tool to make cat speak (that is, to provide meaningful information).

The idea is that you can cat a file through meow and apply the filters, like this:

cat /tmp/log.txt | meow appsinknewsample n:V0 n:video ht: \
ft:-0:00:21.466607596 's:#([A-za-z][A-Za-z]*/)*#'

which means “select those lines that contain appsinknewsample (with case insensitive matching), but don’t contain V0 nor video (that is, by exclusion, only that contain audio, probably because we’ve analyzed both and realized that we should focus on audio for our specific problem), highlight the different thread ids, only show those lines with timestamp lower than 21.46 sec, and change strings like Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp to become just AppendPipeline.cpp“, to get an output as shown in this terminal screenshot:

Screenshot of a terminal output showing multiple log lines. Some of them have the word "appsinkNewSample" highlighted in red. Some lines have the hexadecimal id of the thread that printed them highlighted (purple for one thread, brown for the other)
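For comparison, the same line selection (without Meow's highlighting) can be roughly approximated with standard tools. The sample log lines below are invented for illustration:

```shell
#!/bin/sh
# Rough stand-in for the Meow invocation above using standard tools:
# keep lines matching "appsinknewsample" (case-insensitive), drop lines
# containing "V0" or "video", keep timestamps below 0:00:21.466607596,
# and collapse source paths down to the file name. Highlighting is not
# reproduced here.
filter_log() {
    grep -i 'appsinknewsample' \
        | grep -v -e 'V0' -e 'video' \
        | awk '$1 < "0:00:21.466607596"' \
        | sed -E 's#([A-Za-z]+/)+##g'
}

# Demo on invented sample lines: the second is past the cut-off.
printf '%s\n' \
    '0:00:10.000000000 0x1 appsinkNewSample Source/WebCore/AppendPipeline.cpp audio' \
    '0:00:30.000000000 0x1 appsinkNewSample audio' \
    | filter_log
```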

Cool, isn’t it? After all, I’m convinced that the answer to any GStreamer bug is always hidden in the logs (or will be, as soon as I add “just a couple of log lines more, bro”).

05 December, 2025 11:16AM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: From cloud to dashboard: experience the future of infotainment development at CES 2026

Every year at CES, we try to go beyond showing technology; we want to give you an experience. This time, it’s the story of how in-vehicle infotainment development is transforming, and how developers can now build, test, and deploy immersive experiences faster than ever.

This year, we’re excited to show a demo that combines the strengths of both Anbox Cloud and Rightware’s Kanzi, the industry-leading software for creating rich, visually stunning infotainment interfaces. It demonstrates cloud-native development, automation, and how virtualization can open up completely new ways to design and test next-generation in-vehicle experiences.

Bridging design, development, and validation

Automotive software development has become incredibly complex. Teams are often focused on their own discipline: UI designers on immersive experiences, Android developers on building integrations, and validation engineers on reliability across hardware variants. These teams can’t always collaborate seamlessly.

Testing an infotainment system means needing access to specific hardware or prototypes, which makes iteration slow and collaboration difficult. Small design updates can take days to validate, and testing across different screen configurations or performance conditions is often limited by the availability of physical setups.

We wanted to change that by bringing agility and scalability to infotainment development.

Infotainment comes to life in the cloud

In our demo, we’ll show how Anbox Cloud turns this traditionally hardware-bound process into a fully virtualized, cloud-native experience. By running Android in the cloud, developers can instantly deploy and test infotainment environments built using Kanzi, on demand, at any scale, from anywhere.

Widescreen 8K infotainment CES demo

Our setup fits perfectly with Rightware’s widescreen 8K infotainment and cluster bench, powered by Kanzi. Developers can stream the exact same 8K rendering using Anbox Cloud. The result is an impressive, interactive experience, generated and streamed entirely from the cloud.

8K virtual Android device running on Anbox Cloud

Thanks to Anbox Cloud, Android can be virtualized to any resolution, with pixel-perfect rendering and responsiveness. It can scale to dozens of Android instances running simultaneously, so teams can run automated testing, validate UI performance, and work on the system updates in parallel. Your development becomes faster, collaborative, and independent of physical limitations.

Why choose cloud-native Android development?

When development and testing move to the cloud, designers and developers can collaborate in real time and see their changes without waiting for hardware to become available. Validation teams can run automated tests on multiple Android instances, across different configurations. For OEMs and Tier 1 suppliers, this means shorter development cycles, more efficient resource use, and faster results.

“Kanzi has always been about empowering designers and developers to bring exceptional in-vehicle experiences to life,” says Tero Koivu, Co-CEO at Rightware. “Seeing a Kanzi made UI streamed at 8K through Anbox Cloud shows how cloud-native workflows can dramatically accelerate iteration and collaboration. It opens a powerful new path for teams building the next generation of connected, visually stunning automotive user interfaces.”

See it at CES 2026

Join us at LVCC, North Hall, Booth #10562, and check out the workflow for yourself. You’ll see how Kanzi and Anbox Cloud come together to deliver high-fidelity, scalable, cloud-native infotainment experiences, and how this is redefining the way developers can use Android in the cloud.

Book a meeting with our team

Come see the future of automotive software development, from cloud to dashboard.

In the meantime, learn more about Anbox Cloud, and Rightware.

Further reading

Official documentation
Anbox Cloud Appliance
Learn more about Anbox Cloud 


Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

05 December, 2025 08:00AM