November 09, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Colin Watson: Free software activity in October 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

OpenSSH upstream released 10.1p1 this month, so I upgraded to that. In the process, I reverted a Debian patch that changed IP quality-of-service defaults; it was justified when it was introduced, but upstream has since reworked that area anyway, so it seemed worth finding out whether we still have similar problems. So far I haven’t heard of any problems in this area.

10.1p1 caused a regression in the ssh-agent-filter package’s tests, which I bisected and chased up with upstream.

10.1p1 also had a few other user-visible regressions (#1117574, #1117594, #1117638, #1117720); I upgraded to 10.2p1, which fixed some of these, and contributed some upstream debugging help to clear up the rest. While I was there, I also fixed “ssh-session-cleanup: fails due to wrong $ssh_session_pattern” in our packaging.

Finally, I got all this into trixie-backports, which I intend to keep up to date throughout the forky development cycle.

Python packaging

For some time, ansible-core has had occasional autopkgtest failures that usually go away before anyone has a chance to look into them properly. I ran into these via openssh recently and decided to track them down. It turns out that they only happened when the libpython3.13-stdlib package had different versions in testing and unstable, because an integration test setup script made a change that would be reverted if that package was ever upgraded in the testbed, and one of the integration tests accidentally failed to disable system apt sources comprehensively enough while testing the behaviour of the ansible.builtin.apt module. I fixed this in Debian and contributed the relevant part upstream.

We’ve started working on enabling Python 3.14 as a supported version in Debian. I fixed or helped to fix a number of packages for this:

I upgraded these packages to new upstream versions:

I packaged python-blockbuster and python-pytokens, needed as new dependencies of various other packages.

Santiago Vila filed a batch of bugs about packages that fail to build when using the nocheck build profile, and I fixed several of these (generally just a matter of adjusting build-dependencies):

I helped out with the scikit-learn 1.7 transition:

I fixed or helped to fix several other build/test failures:

I fixed some other bugs:

I investigated a python-py build failure, which turned out to have been fixed in Python 3.13.9.

I adopted zope.hookable and zope.location for the Python team.

Following an IRC question, I ported linux-gpib-user to pybuild-plugin-pyproject, and added tests to make sure the resulting binary package layout is correct.

Rust packaging

Another Pydantic upgrade meant I had to upgrade a corresponding stack of Rust packages to new upstream versions:

  • rust-idna
  • rust-jiter
  • rust-pyo3
  • rust-regex
  • rust-regex-automata
  • rust-speedate
  • rust-uuid

I also upgraded rust-archery and rust-rpds.

Other bits and pieces

I fixed a few bugs in other packages I maintain:

I investigated a malware report against tini, which I think we can prove to be a false positive (at least under the reasonable assumption that there isn’t malware hiding in libgcc or glibc). Yay for reproducible builds!

I noticed and fixed a small UI deficiency in debbugs, making the checkboxes under “Misc options” on package pages easier to hit. This is merged but we haven’t yet deployed it.

I noticed and fixed a typo in the Being kind to porters section of the Debian Developer’s Reference.

Code reviews

09 November, 2025 03:33PM

November 07, 2025

Stéphane Graber: Introducing IncusOS!

After over a year of work, I’m very excited to announce the general availability of IncusOS, our own immutable OS image designed from the ground up to run Incus!

IncusOS is designed for the modern world, actively relying on both UEFI Secure Boot and TPM 2.0 for boot security and for full disk encryption. It’s a very locked down environment, both for security and for general reliability. There is no local or remote shell, everything must be done through the (authenticated) Incus API.

Under the hood, it’s built on a minimal Debian 13 base, using the Zabbly builds of the Linux kernel, ZFS, and Incus, providing the latest stable versions of all of those. We rely a lot on the systemd tooling to handle image builds (mkosi), application installation (sysext), system updates (sysupdate) and a variety of other things, from network configuration to partitioning.

I recorded a demo video of its installation and basic usage both in a virtual machine and on physical hardware:


Full release announcement: https://discuss.linuxcontainers.org/t/announcing-incusos/25139

07 November, 2025 09:33AM

November 06, 2025

Ubuntu Blog: Web Engineering: Celebrating Our Third Annual Hack Week

The Web Engineering team is thrilled to announce the successful conclusion of our third annual Hack Week! Over the past three years, this initiative has become a cornerstone of our collaborative spirit and commitment to innovation. With 126 significant contributions to date, Hack Week provides a dedicated space for our engineers to tackle challenging problems, refine existing systems, and push the boundaries of what’s possible.

The key goal of these events is to let us talk with confidence about the truly open source nature of our work. We get the opportunity to address issues we’ve identified upstream in projects that we use, to the benefit of ourselves and others. By dedicating time to these fixes, we not only improve the stability and performance of our foundational technologies but also empower our team to gain a deeper understanding of complex systems and our dependency tree. Engaging directly with these challenges lets us experience the difficulties firsthand, fostering a unique learning environment. These invaluable learnings are then taken back to our daily projects, where we reflect on the insights gained and implement improvements that benefit all our ongoing work. We are proud of the dedication displayed by everyone involved, and we look forward to continuing this initiative with more impactful contributions in the future.

This year we focused on providing accessibility contributions to our internal corporate message application called Mattermost. All contributions are listed below:

Murtaza-Ax/Color-Converter
Mattermost #34132
Mattermost #34128
Biome #7749
Mattermost #26961
Mattermost #34141
Mattermost #34142
Psycopg #1184
Upptime #269
HeroUI #5810
HeroUI #5811
Mattermost #34153
Mattermost #34154
TheAlgorithms/JavaScript #1842
adk-web #161
HeroUI #5814
HeroUI #5813
Ghost #25183
HeroUI #5818
HeroUI #5819
Ghost #25195
Upptime #271
Scrabble #3
Mattermost #34196
Countdown #155
Ghost #25197

Open source encourages compatibility with standards, making locking users in more difficult. This is why we love the freedom open source offers. Open source software allows for sharing knowledge, gaining knowledge, and practising. It promotes transparency in data collection and software systems. Freedom, therefore, is the gift that keeps on giving.

Please have a look at our open-source projects and reach out to us via the issues if anything is unclear.

06 November, 2025 11:22AM

hackergotchi for ZEVENET

ZEVENET

Mitigating DDoS and L7 Exhaustion: why one layer is not enough

In security discussions, the term DDoS is often used as if it referred to a single type of threat. In reality, today it covers two very different strategies that share the same goal but not the same execution: volumetric attacks at layers L3/L4 and application exhaustion attacks at layer 7.

Both aim to take a service offline, but they exploit different parts of the infrastructure — and therefore require different mitigation layers.

Two attack families, two impact surfaces

When some vendors claim that “modern DDoS attacks are stealthy and bypass traditional defences”, what they are actually describing is not classic volumetric DDoS, but L7 exhaustion: low-rate traffic, fully valid requests, almost indistinguishable from legitimate clients.

These attacks don’t flood the network — they drain the application from inside.

That doesn’t mean volumetric DDoS has disappeared. It remains cheap to launch, common in the wild, and extremely effective unless it is filtered before the kernel, firewall, or load balancer accepts the connections.

The threat has not changed — the point of mitigation has.

Type                  | Layer   | Objective                                                | How the service breaks
Volumetric DDoS       | L3 / L4 | Saturate bandwidth, connection tables, kernel resources  | The infrastructure collapses before the application can respond
Application-layer DoS | L7      | Exhaust CPU, memory, threads, or DB calls                | The service is “up”, but unusable for real users

Or, even more directly:

  • L3/L4 volumetric attacks → try to take down the network before the service responds
  • L7 exhaustion attacks → mimic valid traffic to drain the app’s internal resources

Layered defence: why L3/L4 and L7 do not compete — they complement each other

One of the most common misconceptions is assuming that a single protection layer is enough to stop any kind of attack. In practice, filtering only at layer 4 leaves the application exposed, while filtering only at layer 7 allows the kernel or load balancer to be overwhelmed before the WAF ever sees the request.

An L4 firewall can drop malformed packets or abnormal connection patterns before they consume resources, but it has no context to detect that a perfectly valid HTTP request is trying to exploit an SQLi pattern.

A WAF can detect that behaviour — but only after the connection has already been accepted, a socket has been created, and memory has been allocated.

Attack type                 | Where it must be stopped                             | What is inspected                   | Typical tooling
Volumetric (L3/L4)          | Before accepting the connection (edge / kernel / LB) | Packets, TCP flags, connection rate | SYN flood protection, rate limiting, conntrack offload
Application exhaustion (L7) | Once the TCP session is established                  | HTTP headers, URL patterns, payload | WAF, OWASP rulesets, bot filtering

Effective protection is not about choosing the right layer — it is about dropping as much as possible before the app, and reserving deep inspection only for what deserves to reach it.
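As a concrete illustration of the first kind of filtering, per-source connection-rate limiting and malformed-packet drops can be expressed with stock Linux nftables, before the application ever sees the traffic. This is a generic sketch, not vendor configuration, and the thresholds are examples that would need tuning against real traffic:

```shell
# Illustrative early L4 filtering with plain nftables; thresholds are
# examples only. This is generic Linux tooling, not SkudoCloud config.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; }'
# Drop obviously malformed TCP (SYN and FIN set together)
nft add rule inet filter input 'tcp flags & (syn|fin) == (syn|fin) drop'
# Rate-limit new connections to port 443; excess SYNs are dropped
# before a socket is ever allocated for them
nft add rule inet filter input 'tcp dport 443 ct state new limit rate over 200/second drop'
```

Because these rules run in the kernel's netfilter hooks, rejected traffic never consumes a socket, a thread, or application memory, which is exactly the point the table above makes about where volumetric attacks must be stopped.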

What happens when mitigation only works at L7 (and why it fails)

When protection is applied solely at the application layer, the TCP connection has already been accepted before any evaluation occurs. In other words, the system has completed the handshake, allocated a socket, reserved memory and promoted the session to HTTP/S before deciding whether the request should be blocked.

That removes the attacker’s need to generate massive traffic: a few thousand seemingly valid, slow, or incomplete connections are enough to consume server resources without ever saturating the network.

The result is not an immediate outage, but a progressive exhaustion:

    • Load balancer or backend CPU spikes
    • Response times increase exponentially
    • The service is still “up”, but unusable for legitimate users

This is the usual pattern of L7 exhaustion attacks: they don’t bring the network down; they wear the application out from the inside. And it happens for a simple reason: the blocking decision is made too late. First the connection is accepted, then the request is inspected, and only at the end is it decided whether to discard it. By then, the damage is already done.
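On the application side, the window that slow or incomplete requests exploit can at least be narrowed with ordinary web-server limits. A minimal sketch using stock nginx directives follows; the values are illustrative, and this is generic tooling rather than the vendor's WAF:

```shell
# Illustrative nginx snippet tightening the limits that slow/incomplete
# request attacks exploit; values are examples only. Files in
# /etc/nginx/conf.d/ are included in the http {} context by default.
cat > /etc/nginx/conf.d/slow-clients.conf <<'EOF'
# Cap concurrent connections per source address
limit_conn_zone $binary_remote_addr zone=peraddr:10m;
server {
    listen 8080;
    # Give up on clients that trickle a request in byte by byte
    client_header_timeout 10s;
    client_body_timeout   10s;
    keepalive_timeout     15s;
    limit_conn peraddr 20;
    location / { return 204; }
}
EOF
```

These limits only shrink the exposure; the connection has still been accepted before they apply, which is why the article argues for dropping abusive traffic earlier as well.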

How SkudoCloud applies two-phase mitigation

Effective protection against DDoS and exhaustion attacks is not about choosing between filtering at L4 or L7, but about enforcing both defenses in the right order. SkudoCloud implements this model natively inside the load-balancing engine itself, without relying on external scrubbing services or additional appliances.

Phase                   | What is mitigated                                    | How SkudoCloud acts                                                                               | Where it happens
1. Early filtering (L4) | TCP floods, anomalous connections, malformed packets | Session rejected before allocation, per-IP/VIP limits, SYN protection, IP reputation / blocklists | Load balancer kernel
2. Deep inspection (L7) | SQLi, XSS, bots, valid-but-abusive requests          | Advanced WAF + behavioural rules                                                                  | HTTP/S module of the engine

This model ensures that high-volume traffic cannot saturate the system before being analysed, and that low-volume abusive requests cannot hide inside seemingly legitimate sessions. The result is an environment where the network does not collapse under load and the application does not degrade due to resource exhaustion.

Everything is managed from a single interface, with unified policies, metrics and event logging — without depending on multiple vendors, external mitigation layers or duplicated configurations.

👉 To see how this model works in a real deployment, follow the step-by-step guide: Configure the First SkudoCloud Service

06 November, 2025 11:12AM by Nieves Álvarez

November 05, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Azure VM utils now included in Ubuntu: boosting cloud workloads

Ubuntu images on Microsoft Azure have recently started shipping with the open source package azure-vm-utils included by default. Azure VM utils is a package that provides essential utilities and udev rules to optimize the Linux experience on Azure. This change results in more reliable disks, smoother networking on accelerated setups, and fewer tweaks to get things running. Here’s what you need to know:

What’s changing

  • Smoother storage on modern Azure VMs: Ubuntu now provides consistent device naming across SCSI and NVMe, reducing post-reboot surprises and easing automation.
  • Better handling of accelerated networking: environments using MANA or Mellanox benefit from safer defaults that avoid double-managing passthrough interfaces.
  • Less image customization: the utility and rules that many platform teams previously added now ship in the image, removing one more custom step from your pipelines.

Why it matters

  • Fewer post-boot surprises: predictable device names keep fstab, cloud-init and provisioning scripts stable across VM families and reboots.
  • Smoother NVMe adoption: newer VM families lean NVMe-first for performance; built-in rules make that transition painless while keeping SCSI setups working.
  • Less to maintain: the stock image now handles Azure disk naming and accelerated NICs (MANA/Mellanox), so teams can drop custom udev/Netplan snippets and avoid fstab surprises after reboots.

How to Get It

  • For New VMs: No action is needed. The package is included by default in new Ubuntu images.
  • For Existing VMs: You can install the package directly from the Ubuntu archive, where it’s available for all current LTS and interim releases: sudo apt update && sudo apt install azure-vm-utils

Quick ways to verify

azure-nvme-id --version           # tool present
find /dev/disk/azure -type l      # predictable Azure disk links
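Going one step further than the checks above, the stable links can be resolved to the kernel device names they currently point at, which is a handy sanity check before pinning them in fstab or automation. The output naturally varies with the disks attached to the VM:

```shell
# Resolve each stable Azure disk link to its current kernel device.
# On a non-Azure machine /dev/disk/azure simply doesn't exist and
# nothing is printed.
for link in /dev/disk/azure/*; do
    [ -e "$link" ] || continue   # glob matched nothing
    printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```

Referencing the /dev/disk/azure links rather than raw /dev/sd* or /dev/nvme* names is what keeps mounts stable across reboots and VM families.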

05 November, 2025 03:02PM

hackergotchi for Deepin

Deepin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Edge Networking gets smarter: AI and 5G in action

Organizations everywhere are pushing AI and networks closer to the edge. With that expansion comes a challenge: how do you ensure reliable performance, efficiency, and security outside of the data center? Worker safety, healthcare automation, and the success of mobile private networks depend on a robust technology stack that can withstand real-world challenges and still deliver results. Canonical has partnered with Dell Technologies, Intel, Druid, Airspan and Ecrio to publish a new solution brief addressing this question. The brief highlights how a fully integrated, edge-ready platform can meet the growing demand for intelligent, secure, and real-time computing at the edge. 

The brief showcases how to build a strong foundation for edge AI and networking by using a Dell PowerEdge XR8000 ruggedized edge network+compute platform consisting of two server sleds powered by Intel Xeon Scalable processors. Both sleds are running Canonical’s software infrastructure stack, which combines Ubuntu, MicroCloud, and Canonical Kubernetes. On the first sled, MicroCloud hosts two VMs: Airspan Control Platform (ACP) manages the 5G radio units, and Druid Raemis provides the cloud-native 5G core orchestrated by Canonical Kubernetes. The second sled hosts Ecrio’s iota-e platform, also managed by Canonical Kubernetes, which enables AI-powered real-time image-recognition, voice, video, and messaging services. These capabilities support critical business processes such as worker coordination in industrial settings, emergency response in healthcare, and secure team communications in remote or hazardous environments.

Download the solution brief to learn how this integrated platform supports advanced use cases, including AI-driven safety monitoring, smart factory operations, and 5G connectivity at the edge.

In the solution brief, you’ll discover how to:

  • Deploy AI and event detection workloads on optimized, securely designed infrastructure
  • Operate private 5G and RAN control software on edge-virtualized environments
  • Streamline orchestration and lifecycle management with Canonical Kubernetes and MicroCloud
  • Detect safety and operational risks in real time using integrated AI inference

Download the full solution brief

For more information on how Canonical supports your edge and AI journey, visit our related content:

  • Open source AI for the enterprise
    Discover how Canonical enables AI workloads from cloud to edge with tools for model training, trusted deployment, and lifecycle management. This webpage outlines Canonical’s full AI stack, from Ubuntu-optimized hardware acceleration to MLOps best practices, with links to blogs, whitepapers, and deployment guides.
  • Canonical Telco solutions
    Learn how Canonical helps telecom operators modernize their infrastructure using open source technologies. This hub covers solutions for 5G core networks and Radio Access Networks (RAN) built on Ubuntu, Canonical Kubernetes, OpenStack, MAAS and Juju. You’ll find case studies and insights into telco-grade performance and security.

05 November, 2025 07:11AM

hackergotchi for Qubes

Qubes

Fedora 41 approaching end of life

Fedora 41 is currently scheduled to reach end of life (EOL) on 2025-11-19 (approximately two weeks from the date of this announcement). Please upgrade all of your Fedora templates and standalones by that date. For more information, see Upgrading to avoid EOL.

There are two ways to upgrade a template to a new Fedora release: upgrade your existing template in place, or install a fresh template and migrate your settings and data to it.
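For reference, the two routes described in the Qubes documentation look roughly like this; the version number is illustrative, so check the current docs before running anything:

```shell
# Option 1: install a fresh template alongside the old one (run in dom0;
# the template name is illustrative).
qvm-template install fedora-42

# Option 2: upgrade an existing template in place (run inside the
# template's terminal), then shut the template down and restart the
# qubes based on it.
sudo dnf --releasever=42 distro-sync --best --allowerasing
```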

Please note that no user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).

05 November, 2025 12:00AM

November 04, 2025

hackergotchi for VyOS

VyOS

VyOS Project October 2025 Update

Hello, Community! The October update is here, and it’s dominated by bug fixes as we prepare to release the next VyOS Stream image on the way to the future VyOS 1.5 and work on the new 1.4.4 maintenance release. There are also a few useful features, including support for DHCP option 82 (relay agent information) and option 26 (interface MTU), container health checks, and more.

04 November, 2025 01:04PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for October 2025

I. October Community Data Overview
II. Community Products
1. deepin 25 Official Release Update: File Management and System Experience Upgraded Again
In October, the deepin 25 official release received the 25.0.9 version update, bringing multiple optimizations focused on file management efficiency and system interaction details. File Manager efficiency innovations: supports grouping display by time, size, type, and name, making file finding clearer; adds a pinned-tab feature for one-click access to frequently used directories; dragging files to the window edge triggers automatic scrolling, making long-distance operations more convenient; automatically creates a new tab in the current window when opening a ...Read more

04 November, 2025 09:41AM by xiaofei

November 03, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 916

Welcome to the Ubuntu Weekly Newsletter, Issue 916 for the week of October 26 – November 1, 2025. The full version of this issue is available here.

In this issue we cover:

  • Upgrades to 25.10 (Questing Quokka) are now live!
  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Ubuntu Project docs: That’s a wrap!
  • Introducing architecture variants: amd64v3 now available in Ubuntu 25.10
  • [Ubuntu Studio] Upgrading from 25.04 to 25.10
  • Other Community News
  • What Say You
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 25.04 and 25.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • irihapeti
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

03 November, 2025 11:53PM

Stéphane Graber: Announcing Incus 6.18

The Incus team is pleased to announce the release of Incus 6.18!

This is a reasonably busy release with quite a few smaller releases in every corner of Incus so there should be something for everyone!

The highlights for this release are:

  • Systemd credentials support
  • File operations on storage volumes
  • Exporting of ISO volumes
  • BPF token delegation
  • macOS support in the Incus VM agent
  • VirtIO sound cards for VMs
  • Support for temporarily detaching USB devices
  • Configurable DNS mode for OVN networks
  • Configurable MAC address patterns for networks and instances
  • Extended IncusOS management CLI

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

03 November, 2025 07:21PM

November 02, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/10

The 10th monthly Sparky project and donation report of 2025: – Linux kernel updated up to 6.17.6, 6.12.56-LTS, 6.6.115-LTS – Sparky 8.1-RC1 ARM64 for Raspberry Pi released – added to repos: Mousam. Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in November too, please.

Source

02 November, 2025 10:26AM by pavroo

October 31, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: A New Chapter: Career Transition Update

I’m pleased to share that my career transition has been successful! I’ve joined our local county assessor’s office, beginning a new path in property assessment for taxation and valuation. While the compensation is modest, it offers the stability I was looking for.

My new schedule consists of four 10-hour days with an hour commute each way, which means Monday through Thursday will be largely devoted to work and travel. However, I’ll have Fridays available for open source contributions once I’ve completed my existing website maintenance commitments.

Open Source Priorities

Going forward, my contribution focus will be:

  1. Ubuntu Community Council
  2. Kubuntu/Debian
  3. Snap packages (as time permits)

Regarding the snap packages: my earlier hope of transitioning them to Carl hasn’t worked out as planned. He’s taken on maintaining KDE Neon single-handedly, and understandably, adding snap maintenance on top of that proved unfeasible. I’ll do what I can to help when time allows.

Looking for Contributors

If you’re interested in contributing to Kubuntu or helping with snap packages, I’d love to hear from you! Feel free to reach out—community involvement is what makes these projects thrive.

Thanks for your patience and understanding as I navigate this transition.

31 October, 2025 03:38PM

Podcast Ubuntu Portugal: E365 Encontrões Cimeiros

There are too many things happening, all at the same time; it’s chaos! Back from the Ubuntu Summit, in London, Lisbon and Porto, Diogo highlights the most important moments and clogs up the Internet Archive; Miguel continues his mission to free people from Windows; technical problems land on us mid-broadcast; Canonical has a new Academy for certifications; we review what’s new in the latest version of Ubuntu Touch; we call for volunteers to give mouth-to-mouth resuscitation to Unity; someone invented a Recall for Linux; and, like Oppenheimer, we watch from afar as Framework sets the circus tent on fire with Omarchy and Hyprland. And most importantly: there’s a new Super Tux Kart.

You know the drill: listen, subscribe and share!

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT Licence. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality jingles were played live and without a safety net by Miguel, and we apologise for any inconvenience caused. Sound effect credits: [Short Elevator Music Loop by BlondPanda] (https://freesound.org/s/659889/). License: Creative Commons 0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation. The episode art was commissioned from Shizamura, an artist, illustrator and comics author. You can get to know Shizamura better on Ciberlândia and on her website.

31 October, 2025 12:00AM

October 30, 2025

hackergotchi for Pardus

Pardus

Pardus 25.0 Beta Released!

The Pardus 25.0 Beta release, developed by TÜBİTAK BİLGEM, has been published for our users to try out and test, shaped by the feedback we received on the Pardus 25.0 Alpha release and by our own planning.

30 October, 2025 12:02PM by Hace İbrahim Özbal

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Why we brought hardware-optimized GenAI inference to Ubuntu

On October 23rd, we announced the beta availability of silicon-optimized AI models in Ubuntu. Developers can locally install DeepSeek R1 and Qwen 2.5 VL with a single command, benefiting from maximized hardware performance and automated dependency management.

Application developers can access the local API of a quantized generative AI (GenAI) model with runtime optimizations for efficient performance on their CPU, GPU, or NPU.

Architecture of the new open-source tool enabling developers to bundle different combinations of runtimes and weights into a single snap, deploying the most optimal stack on the host machine

By meeting developers at the intersection of silicon and GenAI models, we package, distribute and manage all the necessary components to run AI apps on any machine that runs Ubuntu. Developers can now install pre-trained and fine-tuned AI models that automatically detect the underlying silicon requirements, from how much memory and what GPU or NPU they need, to which software components and default configurations must be included.
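As a purely hypothetical sketch of what consuming such a local model API can look like from an application, consider an OpenAI-compatible chat endpoint served on the machine. The endpoint, port, and payload below are invented for illustration and are not the documented interface of these snaps:

```shell
# Hypothetical sketch: query a locally served GenAI model over an
# OpenAI-compatible HTTP API. The URL, port and model name are
# assumptions for illustration, not the snaps' documented interface.
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "deepseek-r1",
       "messages": [{"role": "user", "content": "Summarize this log line."}]}'
```

The point of silicon-optimized packaging is that an application making a call like this does not need to know whether the tokens are being generated on a CPU, GPU, or NPU underneath.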

What’s the vision behind the announcement, and how did we pull it off?

Ubuntu: the standard distribution platform for AI models

We aim to make Ubuntu the standard distribution platform for generative AI models. Doing so will enable developers to integrate AI seamlessly into their applications and run them optimally across desktops, servers, and edge devices. We believe machine learning (ML) workloads will soon be as fundamental to compute platforms as traditional software dependencies are today, and generative AI models will be a basic part of the compute experience. 

But wait: isn’t that already true? Aren’t AI models already everywhere, and don’t we all play with LLMs around 25 times per day?

Yes, but there’s a key distinction. Let me use an analogy to illustrate it.

From fragmentation to curated archives of software

In the early days of Linux, software distribution was fragmented and cumbersome. Developers had to manually download, compile, and configure source code from individual projects, often tracking down missing libraries, resolving version conflicts, and maintaining complex build environments by hand. 

While software in the early 90s was distributed via floppy disks, Slackware and Debian Linux soon ushered in a system of curated archives of software, usually pre-compiled to save time. Source: https://www.debian.org/

As each distribution had its own conventions, packaging tools, and repositories, installing software was an error-prone and time-consuming process. The lack of a unified delivery mechanism slowed down open-source development and created barriers to adoption.

In October 2004, the first release of Ubuntu was out. It shipped with a fairly fixed set of packages in the Ubuntu archive, for which users received security updates and bug fixes over the internet. To get new software, developers still had to hunt down source code and compile it themselves.

What changed?

Fast-forward to a few years later, and in 2007, Canonical introduced Personal Package Archives (PPA), giving developers a hosted build service to publish and share their own software. Discovering new software on Linux was still hard, from living in unknown PPAs to GitHub repositories with daily builds of all kinds of new software. To fix this, Canonical later introduced snaps, containerized software packages that simplified cross-distribution delivery, updates and security.

Standing on the shoulders of giants and building on Debian, Ubuntu helped transform that experience, becoming the aggregation point for open-source software (OSS). Ubuntu consolidated thousands of upstream projects into a coherent, trusted ecosystem that developers could rely on, without needing to understand every dependency or build chain behind it. Ubuntu helped unify and streamline the open-source ecosystem.

A strong packaging foundation, combined with a steady release cadence and curated repositories, lowered the barrier for both developers and enterprises. Ubuntu became the default, trusted layer for distributing and maintaining open-source software. 

What if we could do that with AI models?

GenAI models as basic building blocks of future compute 

Today, software packages are the basic building blocks of compute. Developers routinely install packages, add PPAs, and pull from various vendors, third parties, or the community, without giving it much thought. 

We believe that AI models will soon occupy the same space as first-class citizens of a compute stack. They’ll be treated as standard system components, installed, updated and optimized just like any other dependency. We’ll no longer worry about the details of how to juggle the dependencies of various AI models, just as we don’t think about which repositories the packages your projects depend on come from. Developing software will naturally include integrating ML workloads, and models will be as ubiquitous and invisible in the developer experience as traditional packages are today. LLMs will become part of the commodity layer of compute, evolving into dependencies that containerized workloads rely on: composable, versioned, and hardware-optimized. 

In making Ubuntu, we mastered the art of distributing open-source software to millions of users. Now we are applying that expertise to AI model distribution. Ubuntu is moving toward making AI models a native element of the compute environment. We’re shifting AI models from external tools to an integral part of the stack. Bringing silicon-optimized AI models natively to Ubuntu is the first step in making them a built-in component of the compute experience itself. 

What are silicon-optimized models?

In the announcement, we introduced Intel- and Ampere-optimized DeepSeek R1 and Qwen VL, two leading examples of generative AI models.

DeepSeek R1 is a reasoning Large Language Model (LLM) designed to decompose prompts into structured chains of thought, enabling complex reasoning and problem solving. Qwen VL, on the other hand, is a multimodal LLM that accepts both text and images as inputs, representing the latest generation of vision-language models. Both are transformer-based but tuned and packaged to exploit different runtime patterns and hardware characteristics.

Let’s be more specific. The term model is often used loosely, but in practice, it refers to an entire model family built around a base model. Base models are massive neural networks trained on broad datasets to capture general knowledge. These base models are frequently fine-tuned, retrained, or adapted to specialize in specific tasks, such as instruction following or domain-specific reasoning. For instance, transformer-based LLMs share a common architecture built on components such as self-attention, multi-head attention, and large embedding matrices. From this base, families of foundational models and fine-tuned derivatives, such as instruction-tuned models, adapter-based variants, or task-specific fine-tunes, can be developed.

Inference with fine-tuned models

Let’s look at some of the Mistral models from Mistral AI.

On the left-hand side, we have the model vendor, in this case, Mistral AI, which trains and distributes the foundational base models. Fine-tuned derivatives, such as mistral-7b-instruct, are then adapted for instruction-based use cases, responding to prompts in a structured, context-aware manner.

Another model family might look similar but target different objectives or architectures:

However, a “model” – whether base or fine-tuned – is not particularly useful on its own. It’s essentially a collection of learned weights: billions of numerical parameters, with no runtime context. What matters to developers is the inference stack, the combination of a trained model and an optimized runtime that makes inference possible. In the literature, the term “model” often refers to the complete model artifact, including the tokenizer, pre- and post-processing components, and runtime format, e.g., PyTorch checkpoint, ONNX, and GGML.

Inference stacks include inference engines, the software responsible for executing the model efficiently on specific hardware. Besides the weights of the pre-trained model, e.g. the weights of Qwen2.5 VL quantized at Q4_K, an engine will typically include the execution logic, optimizations to efficiently perform matrix multiplications and supporting subsystems. Examples include Nvidia’s TensorRT, Intel’s OpenVINO, Apache’s TVM, and vendor-specific runtimes for NPUs. Multiple stacks can exist for the same model, each tailored to different architectures or performance targets. These engines differ in supported features, kernel implementations, and hardware integration layers – reflecting differences in CPU, GPU, NPU, or accelerator capabilities.

For instance, for a fine-tuned model such as mistral-7b-instruct, one might find multiple inference stacks:

Optimizing hardware for inference

Running LLMs efficiently depends critically on the underlying hardware and available system resources. This is why optimized versions of fine-tuned models are often required. A common form of optimization is quantization, reducing the numerical precision of a model’s parameters (for example, converting 16-bit floating-point weights to 8-bit or 4-bit integers). Quantization reduces the model’s memory footprint and computational load, enabling faster inference or deployment on smaller devices, often with minimal degradation in accuracy.
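As a rough illustration of the idea – not Canonical’s or any vendor’s actual tooling, and with invented function names – symmetric int8 quantization of a weight tensor can be sketched in a few lines:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights to int8.

    Returns the int8 weights plus the scale factor needed to reconstruct
    approximate float values (dequantization).
    """
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for a weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at the cost of a small,
# bounded rounding error (at most half a quantization step per weight).
print(q.dtype, w.nbytes // q.nbytes)
```

The same trade-off drives real quantization schemes such as the 4-bit and 8-bit formats mentioned above, which additionally use per-block scales and smarter rounding to limit accuracy loss.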

Beyond quantization, optimization can be silicon-specific. Different hardware architectures, e.g. GPUs, TPUs, NPUs, or specialized AI accelerators, exhibit unique performance characteristics: compute density, memory bandwidth, and energy efficiency. Vendors exploit these characteristics through hardware-aware model variants, which are fine-tuned or compiled to maximize performance on their specific silicon.

For silicon vendors, demonstrating superior inference performance directly translates into market differentiation. Even marginal improvements – a 2% gain in throughput or latency on a leading LLM – can have significant implications when scaled across data centers or deployed at the edge.

This performance race fuels intense investment in AI model optimization across the hardware ecosystem. Each vendor aims to maximize effective TFLOPS and real-world inference efficiency. The result is an expanding landscape of hardware-optimized model variants, from aggressively quantized models that fit within strict memory limits to GPU-tuned builds exploiting tensor cores and parallel compute pipelines.

Furthermore, model packaging and runtime format affect deployability, as one needs optimized artefacts per target, e.g. TorchScript/ONNX for GPUs, vendor-compiled binaries for NPUs, GGML or int8 CPU builds for constrained devices.

As a consequence, developers building embedded AI apps are stuck dealing with API keys, per-token subscriptions, and apps that only work when connected to fast internet. Packaging and distributing AI-powered software is hard. Developers must contend with dozens of silicon types, hundreds of hardware configurations, and an ever-growing number of models and variants – while also managing dependencies, model updates, runtime engines, API servers, optimizations, and more.

Simplifying AI development: abstracting complexity away

Is it possible to abstract that complexity away? Today, developers build on Ubuntu without needing to think about the underlying hardware. The same principle should apply to AI: what if, in the future, a developer could simply code for DeepSeek, without worrying about selecting the optimal fine-tuned variant, choosing the right inference engine, or targeting a specific silicon architecture?

This is the challenge we set out for ourselves, bridging the gap between the potential of AI and its practical adoption. Our goal is to bring the right models directly into developers’ hands and make LLMs part of everyday software development.

We envision a world where application developers can target an AI model, not a stack, and seamlessly use hardware-specific optimizations under the hood. To truly harness the potential of AI, developers shouldn’t have to worry about quantization levels, inference runtimes, or attaching API keys. They should simply develop against a consistent model interface.

Unfortunately, today’s AI ecosystem is still fragmented. Developer environments lack a standard packaging and distribution model, making AI deployment costly, inconsistent, and complex. Teams often spend significant time configuring, benchmarking, and tuning inference stacks for different accelerators, work that demands deep expertise and still leaves hardware underutilized.

This is why, through Canonical’s strong ecosystem partnerships, we introduced an abstraction layer that lets users develop against a known model while integrating the hardware-specific stacks. Last week, we announced the public beta release of AI models on Ubuntu 24.04 LTS, with DeepSeek R1 and Qwen 2.5 VL builds optimized for Intel and Ampere hardware. Developers can locally install those snaps pre-tuned for their silicon, without wrestling with dependencies or manual setup. Our snap approach enables development against a model’s standard API on Ubuntu, while relying on optimized builds engineered by Canonical, Intel, and Ampere.

Silicon-vendor optimizations are now applied automatically based on the detected hardware. For example, when installing the Qwen VL snap on an amd64 workstation, the system will automatically select the most suitable version – whether optimized for Intel integrated or discrete GPUs, Intel NPUs, Intel CPUs, or NVIDIA GPUs (with CUDA acceleration). Similarly, on arm64 systems using Ampere Altra/One processors, the version optimized for those CPUs will be used. If none of these optimizations match the hardware, Qwen VL will automatically fall back to a generic CPU engine to ensure compatibility.
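The selection logic described above amounts to a priority lookup with a guaranteed fallback. A minimal sketch (hypothetical, not the actual snap implementation; the engine names are illustrative labels, not real package names):

```python
# Hypothetical sketch of "pick the best optimized build, else fall back to a
# generic CPU engine" -- not the real snap logic; names are invented.

# Preference order per architecture, mirroring the order described in the text.
PREFERENCE = {
    "amd64": ["intel-gpu", "intel-npu", "intel-cpu", "nvidia-cuda"],
    "arm64": ["ampere-cpu"],
}
GENERIC_FALLBACK = "generic-cpu"

def select_engine(arch: str, detected_hardware: set) -> str:
    """Return the first optimized engine matching the detected hardware,
    falling back to a generic CPU build so the model always runs."""
    for engine in PREFERENCE.get(arch, []):
        if engine in detected_hardware:
            return engine
    return GENERIC_FALLBACK

print(select_engine("amd64", {"intel-npu", "intel-cpu"}))  # intel-npu
print(select_engine("arm64", set()))                       # generic-cpu
```

The important property is the unconditional fallback: an unknown architecture or unrecognized hardware still yields a working, if unoptimized, engine.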

Canonical’s silicon partnerships: planning for the future

As we saw, the performance of AI models is tightly bound to the silicon layer. Optimizing for silicon covers multiple layers, from reduced numeric precision to operator and kernel fusion, memory layout and tiling changes, and vendor-specific kernel implementations. The inference stack itself – from TensorRT and ONNX Runtime to OpenVINO/oneAPI and vendor NPU runtimes – materially affects latency, throughput and resource utilization. By working with silicon leaders, Canonical can now deliver robust, stable, locally optimized models that run efficiently on desktops, servers, and edge devices, reducing reliance on massive cloud GPU deployments, lowering costs and energy use, improving latency, and keeping sensitive data on-device. Each model can be installed with a single command, without manual setup or dependency management. Once installed, the snap automatically detects the underlying silicon, currently optimized for Intel CPUs, GPUs and NPUs, and Ampere CPUs, applying the most effective combination of inference engine and model variant.

With Ubuntu Core, Desktop and Server, we already provide a consistent OS experience to millions of developers across verticals and form factors. We are now eager to extend our collaborations with the silicon community and broader ecosystem, and are perfectly placed to enable AI models across the compute spectrum.

30 October, 2025 08:42AM

Qubes

Debian 13 templates available

The following new Debian 13 templates are now available for both Qubes OS 4.2 (stable) and Qubes OS 4.3 (release candidates):

  • debian-13-xfce (default Debian template with the Xfce desktop environment)
  • debian-13 (alternative Debian template with the GNOME desktop environment)
  • debian-13-minimal (minimal template for advanced users)

There are two ways to upgrade a template to a new Debian release:

  1. Recommended: Install a fresh template to replace the existing one. This option is simpler for less experienced users, but it won’t preserve any modifications you’ve made to your template. After you install the new template, you’ll have to redo your desired template modifications (if any) and switch everything that was set to the old template to the new template. If you choose to modify your template, you may wish to write those modifications down so that you remember what to redo on each fresh install. In the old Debian template, see /var/log/dpkg.log and /var/log/apt/history.log for logs of package manager actions.

  2. Advanced: Perform an in-place upgrade of an existing Debian template. This option will preserve any modifications you’ve made to the template, but it may be more complicated for less experienced users.

Note: No user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).

30 October, 2025 12:00AM

BunsenLabs Linux

New image uploader script default in bunsen-utilities

The BunsenLabs image uploader scripts in bunsen-utilities have been modified. Because imgur no longer provides service to the UK (see: https://forums.bunsenlabs.org/viewtopic.php?id=9568), the image hosting service used by default is now imgbb.

The menu items for taking screenshots, and Thunar right-click menu item for image files, now use a BunsenLabs script bl-imgbb-upload instead of bl-imgur-upload.

To take advantage of these changes please upgrade bunsen-utilities to the latest version.

NOTES:

1) imgbb requires that users set up an account and use the access key that they receive. The script bl-imgbb-upload helps with that process, so it's very easy.

2) Users who want to continue using imgur can open /usr/bin/bl-image-upload as root, uncomment the line uploader=bl-imgur-upload and comment out the equivalent line with bl-imgbb-upload. (The script bl-imgur-upload is still available in bunsen-utilities.)

3) The behaviour of bl-imgbb-upload is slightly different from bl-imgur-upload but images and screenshots can be easily uploaded.

4) Future releases of bunsen-utilities might also offer a script for postimage, and allow users to configure their choice of uploader script with a user config setting, but any improvements will come after the Carbon release.

30 October, 2025 12:00AM

October 29, 2025

Ubuntu developers

Ubuntu Studio: Upgrading from 25.04 to 25.10

An issue has been identified

The Ubuntu Release team has now enabled upgrades from 25.04 to 25.10! This is great news! In fact, you may have noticed this icon on your toolbar and a notification to upgrade.

However, upon doing so, you may have noticed something a little more unfortunate:

Yep, we know. This tells you nothing about what is wrong. What is wrong is slightly more technical. As it turns out, the backend application that actually performs the upgrade removed an argument from its command line unannounced during the Plucky Puffin release cycle, approximately a year ago.

As our project leader, Erich Eickmeyer, maintains the upgrade notifier widget for both Ubuntu Studio and Kubuntu, he woke up and immediately got to work identifying what’s wrong and how to patch the Plasma widget in question to correctly execute the upgrade process. He has uploaded the fix, and it was accepted by a member of the Ubuntu Stable Release Updates team.

At the moment, the fix needs to be tested and verified. In order to test it, one must install the fix from the plucky-proposed repository. In order for it to be available, it must build for all architectures and, as of this writing, is awaiting building on riscv64 which has a 40-hour backlog.

The Workaround

If you wish to begin the upgrade process manually rather than waiting for the upgrade notifier fix, make sure you are fully updated, press Alt+Space to open KRunner, and paste this:

do-release-upgrade -f DistUpgradeViewKDE

This is the exact command that will be executed by the notifier widget as soon as it is updated.

Of course, if you’re in no hurry, feel free to wait until the notifier is updated and use that method. Do bear in mind, though, that as of this writing, you have exactly 90 days to perform the upgrade to 25.10 before your system will no longer be supported. At that time, you’ll risk being unable to upgrade at all unless certain procedures for End-Of-Life Upgrades are done, which can be tedious for those uncomfortable in a command line as it will require modifying system files.

Mea Culpa

We do apologize for the inconvenience. Testing upgrade paths like this is hard, and things get missed, especially when teams don’t communicate with each other. We try to identify problems before they happen but, unfortunately, certain items cannot be foreseen.

This issue has now been added to the Ubuntu Studio 25.10 Release Notes.

29 October, 2025 08:08PM

Deepin

deepin 25.0.9 Release Note

Dear deepin community members, we are pleased to announce the official release of the deepin 25.0.9 update! This release includes a number of new features and optimizations, addresses several issues reported by the community, and further refines the user experience across various applications. After updating, your system version will be 25.0.9. We encourage all users to update at your earliest convenience, and feel free to share your thoughts and feedback in the comments! New features and improvements include significant optimizations to the File Manager’s file preview and management for a smoother and more efficient experience. Files can now be … Read more

29 October, 2025 10:11AM by xiaofei

October 28, 2025

Ubuntu developers

Ubuntu Blog: Canonical and NVIDIA BlueField-4: a foundation for zero-trust high performance infrastructure

At NVIDIA GTC Washington D.C., Canonical is pleased to support the arrival of the NVIDIA BlueField-4 – the newest generation of the data processing unit (DPU) family. NVIDIA BlueField-4 is an accelerated infrastructure platform for gigascale AI factories. By combining the NVIDIA Grace CPU and NVIDIA ConnectX-9 networking, it delivers 6x the compute power of BlueField-3 and 800 Gb/s throughput to accelerate these systems. BlueField-4 features multi-tenant networking, rapid data access, AI runtime security, and high-performance inference processing. Running natively on BlueField-4, NVIDIA DOCA microservices deliver containerized services to simplify and scale AI infrastructure.

As with previous generations, BlueField-4 supports the Ubuntu OS, which comes with Canonical’s security maintenance and support. This development is the latest from Canonical’s longstanding collaboration with NVIDIA to advance the state of DPU-driven infrastructure.

A securely-designed foundation 

Zero-trust architecture places emphasis on the integrity of infrastructure, which is isolated from untrusted workloads. No component, workload, or user is implicitly trusted, and every interaction within the system is continuously verified and enforced by NVIDIA BlueField at the infrastructure level. In this model, the DPU acts as a hardware-based control and enforcement plane, isolating workloads, validating software integrity, and handling encryption and network policy enforcement independently from the host CPU.

NVIDIA BlueField-4 supports multi-service architectures with native service function chaining, zero-trust tenant isolation, and software-defined infrastructure control. Running natively on BlueField-4, NVIDIA DOCA microservices deliver prebuilt, containerized services for AI networking, orchestration, real-time threat detection, and data acceleration – simplifying operations and enabling enterprises and service providers to scale AI securely and efficiently. Enterprises can also deploy validated, BlueField-accelerated applications from leading software providers, enabling advanced infrastructure acceleration and cybersecurity capabilities that enhance the platform’s value.

Ubuntu 24.04 LTS on BlueField-4

Ubuntu plays a key role in supporting the overall security posture of zero-trust BlueField-4 infrastructure. BlueField-4 effectively introduces a dedicated control and enforcement domain alongside the host system, meaning it meets the same security and compliance expectations as any other infrastructure component in the data center. In highly regulated environments, where every element is expected to be hardened and certifiable, the software foundation of BlueField becomes just as important, if not more, as that of the host.

Because the BlueField software stack is based on Ubuntu 24.04 LTS, it benefits from Canonical’s signed packages and reproducible build processes. Expanded Security Maintenance (ESM) provides long-term maintenance guarantees. Ubuntu Pro extends this foundation with continuous CVE monitoring, patch delivery, and compliance tooling, giving operators a clear view of security status and patch levels. When DPUs are deployed in environments that require FIPS, DISA-STIG, or similar compliance frameworks, this is essential. These features, supported in the NVIDIA AI Factory for Government reference design, ensure organizations can integrate BlueField-4 into sensitive infrastructure with confidence, knowing that the underlying operating system aligns with their existing security and compliance processes.

In terms of performance, Canonical publishes optimized Ubuntu images, designed to get the most out of BlueField-4, which combines NVIDIA Grace CPU and NVIDIA ConnectX-9 networking. With NVIDIA Grace, a CPU already certified on Ubuntu 24.04 LTS, operators can deploy with confidence, knowing their platforms have undergone comprehensive validation across performance, reliability, and interoperability. In practical terms, this includes an optimized Ubuntu kernel which combines with NVIDIA drivers on Grace CPU architecture to provide efficient scheduling and accelerated I/O performance on its Arm-based cores. 

Advanced networking with Ubuntu 24.04 LTS

Ubuntu 24.04 LTS provides a robust foundation for service function chaining and software-defined networking (SDN) in BlueField-4 deployments. Ubuntu’s networking stack is optimized for deterministic performance, low latency, and full hardware acceleration.

In environments where complex network services, such as firewalls, load balancers, and intrusion detection, must operate in sequence at line rate, Ubuntu’s Linux kernel is optimized for BlueField and enables high performance service function chaining. Developers can opt to use Canonical’s open virtual network (OVN), which integrates tightly with NVIDIA OVS-DOCA (Open vSwitch) to offload data plane operations directly onto the BlueField-4 programmable platform. This allows for traffic steering, encapsulation, and flow processing to occur entirely within the DPU, freeing host resources and ensuring wirespeed throughput even in multi-tenant or multi-domain deployments.

Use cases for telco and public sector

5G Core and edge networking

Service providers can offload user plane function (UPF) and service chaining to BlueField-4, accelerating 5G core workloads running on Ubuntu OpenStack and Kubernetes. With secure tenant isolation via BlueField Advanced Secure Trusted Resource Architecture, operators can enforce zero-trust policies across multi-tenant, high-throughput environments.

Cybersecurity and mission-critical systems

In mission-critical settings, BlueField-4 with Ubuntu enables line-rate intrusion detection, data encryption, and air-gapped control planes, executing directly in the DPU for minimal latency and maximum assurance. With Ubuntu’s FIPS validation and DISA-STIG compliance, organizations can deploy infrastructure that meets stringent operational and regulatory standards.

A shared vision for the future

Canonical and NVIDIA have already demonstrated the power of combining Ubuntu, Kubernetes, and DOCA for networking acceleration, as described in our earlier post: Canonical Kubernetes Meets NVIDIA DOCA Platform Framework (DPF).

Canonical and NVIDIA share a commitment to advancing open, programmable, and securely-designed infrastructure. With BlueField-4 on Ubuntu 24.04 LTS, organizations gain a validated, compliant, and high-performance platform to power the next era of AI, telco, and government infrastructure.

Together, we’re enabling governments, operators, and enterprises to deploy scalable, securely maintained, and future-proof infrastructure at gigascale.

28 October, 2025 07:24PM

Launchpad News: Make fetch service opt-in

Launchpad Builders do not have direct access to the Internet. To reach external resources, they must acquire an authentication token that allows access to a restricted set of URLs via a proxy. This can either be a custom authenticated builder proxy or the fetch service.

The fetch service is a custom, sophisticated, context-aware forward proxy. Whereas the builder proxy allows requests to allowlisted URLs, the fetch service also keeps track of requests and dependencies for a build.

Users can now opt-in to use the fetch service while building snaps, charms, rocks and sourcecraft packages. You can read more about the fetch service here.

Why is the fetch service important?

To achieve traceability and reproducibility, artifact dependencies retrieved during a build must be identified. The fetch service mediates network access between the build host and the outside world, examining the request protocol, creating a manifest of the downloaded artifacts, and keeping a copy of the artifacts for archival and metadata extraction for each package build.

How to use the fetch service?

To be able to use the fetch service, users must opt-in. For snaps, charms, rocks and sourcecraft packages, the use_fetch_service flag should be set to true in the API. For snaps and charms, this setting is also available in the Edit Recipe UI page. 

The fetch service can be run in two modes, “strict” and “permissive”, where it defaults to the former. Both modes only allow certain resources and formats, as defined by inspectors which are responsible for inspecting the requests and the various downloads that are made during the build, ensuring that the requests are permitted. 

The “strict” mode errors out if any restrictions are violated. The “permissive” mode works similarly, but only logs a warning when encountering any violations. The mode can be configured using the fetch_service_policy option via the API. For snaps and charms, the mode can also be selected from a dropdown on the Edit Recipe UI page.

When to use the fetch service?

Use the fetch service when you need to keep track of requests and dependencies for a build, e.g., when you need to verify that the artifacts belong to secure, trusted sources.

28 October, 2025 11:31AM

October 27, 2025

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 915

Welcome to the Ubuntu Weekly Newsletter, Issue 915 for the week of October 19 – 25, 2025. The full version of this issue is available here.

In this issue we cover:

  • Enabling updates on Ubuntu 25.10 systems
  • [Updated] Questing Quokka Release Notes
  • Resolute Raccoon is now open for development
  • Ubuntu Stats
  • Hot in Support
  • Rocks Public Journal; 2025-10-21
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • Vote result: LoCo rebranding and rescoping resolution
  • Discover your fully open source robotics observability at ROSCon 2025
  • Ubuntu 25.10 Release Party @ Taipei
  • LoCo Events
  • TPM-backed FDE: Take 2 minutes to help widen Ubuntu compatibility with your TPM configuration!
  • Canonical’s new design system: towards a design system ontology
  • Other Community News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 25.04 and 25.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Cristovao Cordeiro (cjdc) – Rocks
  • irihapeti
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


27 October, 2025 09:51PM

Paul Tagliamonte: It's NOT always DNS.

I’ve written down a new rule (no name, sorry) that I’ll be repeating to myself and those around me. “If you can replace ‘DNS’ with ‘key value store mapping a name to an ip’ and it still makes sense, it was not, in fact, DNS.” Feel free to repeat it along with me.

Sure, the “It’s always DNS” meme is funny the first few hundred times you see it – but what’s less funny is when critical thinking ends because a DNS query is involved. DNS failures are often the first observable problem because it’s one of the first things that needs to be done. DNS is fairly complicated, implementation-dependent, and at times – frustrating to debug – but it is not the operational hazard it’s made out to be. It’s at best a shallow take, and at worst actively holding teams back from understanding their true operational risks.

IP connectivity failures between a host and the rest of the network are not a reason to blame DNS. They would happen no matter how you distribute the updated name to IP mappings. Wiping out all the records during the course of operations due to an automation bug is not a reason to blame DNS. This, too, would happen no matter how you distribute the name to IP mappings. Something made the choice to delete all the mappings, and it did what you asked it to do.

There’s plenty of annoying DNS specific sharp edges to blame when things do go wrong (like 8.8.8.8 and 1.1.1.1 disagreeing about resolving a domain because of DNSSEC, or since we’re on the topic, a DNSSEC rollout bricking prod for hours) for us to be cracking jokes anytime a program makes a DNS request.

We can do better.

27 October, 2025 05:15PM

Launchpad News: Support for FIDO2 SSH Keys

Launchpad now supports the FIDO2 hardware-backed SSH key types ed25519-sk and ecdsa-sk. These keys use a hardware device, such as a YubiKey or Nitrokey, to perform cryptographic operations and keep your private keys safely off your computer. They can be used anywhere Launchpad accepts SSH authentication, including git+ssh and SFTP PPA uploads.

To generate a new key, run

ssh-keygen -t ed25519-sk -C "your@email.com"

or use ecdsa-sk for backwards compatibility. You will be asked to touch your security key during the key creation, and OpenSSH will store the resulting files in ~/.ssh/. If you want to make your key resident, meaning it can be stored on the hardware device and later retrieved even if the original files are lost, use the -O resident option:

ssh-keygen -t ed25519-sk -O resident -C "your@email.com"

Resident keys are useful if you use multiple machines or if you want a portable login method tied directly to your hardware key. To register a new key on your Launchpad account, visit https://launchpad.net/~username/+editsshkeys.

These new key types offer strong protection against key theft and phishing, but require a physical device each time you connect. It is recommended you keep a separate backup key if you use them regularly.

The introduction of security-key-backed SSH key types is the next step in making Launchpad even more secure. Let us know if you have any feedback!

27 October, 2025 03:29PM

SparkyLinux

Sparky 8.1-RC1 ARM64

There are new images of Sparky 8.1 Release Candidate ARM64 available for testing. The new images of ARM64 are set with Openbox window manager and CLI (text mode); no ARMHF images any more. The new images are based on and fully compatible with Debian 13 Trixie. Sparky 8 ARM64 can be installed on Raspberry Pi 3+. Known issues: – Wi-Fi can be disabled. To manually fix that run: Then…


27 October, 2025 01:43PM by pavroo

Ubuntu developers

Ubuntu Blog: Global-ready from day one

How Anbox Cloud streamlines localization testing

Wherever users are based, they expect apps to just work, whether in Japanese, Arabic, or Spanish. But anyone who’s touched localization knows it’s more than translation. Real quality comes from testing how your app behaves across languages, layouts, and regions – and doing it fast.

If you’re shipping apps for automotive or gaming, localization gets complex fast. It’s never just about translation: you’re adapting to different layouts, alphabet types, interaction models, and hardware quirks. You’re aiming for pixel-perfect across every region, while teams are spread across time zones and builds keep coming.

Anbox Cloud cuts through all of that, enabling real-time, browser-based localization testing at scale. No APK sharing. No device juggling. You can localize at speed and at scale.

A consistent experience, everywhere

Let’s examine a common use case: an automotive Tier 1 supplier building in-vehicle infotainment (IVI) apps for multiple original equipment manufacturers (OEMs). One app, many markets, each with different languages, reading directions, and screen resolutions. The traditional approach to testing means emailing builds to local quality assurance (QA) teams, or trying to simulate every scenario in-house. If you’ve experienced this approach first-hand, you’ll know it is both fragile and slow.

With Anbox Cloud, all you have to do is launch a container with the right locale in seconds. Your team (in Berlin, Tokyo, or Washington) gets a secure URL, opens their browser, and tests live. No flashing. No setup delays. No exposure of early builds or IP.

Because it runs in the cloud, you control access, enforce authentication, and test in real-time, without sending binaries halfway across the world.

Localization QA at scale: the gaming angle

Now let’s switch to mobile gaming, where localization isn’t just a checkbox: it’s revenue. A game that looks fine in English can break in Turkish, or wrap badly in Finnish. Fonts, line breaks, layout shifts, it all matters. And you don’t want to hear about it from your players.

Global studios know the pain: you need to test across devices, screen densities, and locales. And you need it to be fast, so your players don’t end up furious.

With Anbox Cloud, you can spin up multiple configurations in parallel, simulate different regions, and let your QA team jump into live sessions: no APK installs, no physical devices, just a browser and a link.

Test your UI flow. Click through quests. Break things early.

Why it matters

In a world where users bounce in seconds, localized quality is a differentiator. A misaligned button or clipped string may seem minor, but users notice. And they judge.

In automotive, UI glitches aren’t just annoying, they’re a risk. In gaming, they represent potential lost revenue.

Anbox Cloud brings everything together: faster feedback, real-time testing, and zero APK distribution. Everything runs in the cloud. Everything stays under your control.

Automation that scales with you

Localization QA isn’t just a manual task. With the right tooling, it becomes part of your CI: fast, repeatable, and invisible when it works.

Canonical provides an official GitHub Action to spin up Anbox Cloud dynamically in your pipeline. That means you can launch full Android containers, connect via ADB, and run tests automatically, all from a GitHub workflow. No emulators: just Android, on demand.

Spin up a fresh Android container, connect to it using anbox-connect, and run your UI tests across configurations and locales. The same amc CLI that developers use locally works inside your runner, letting you orchestrate test flows, parallelize across devices, or gate PRs on localization correctness.

Each Android container is accurately replicated, so every test starts from a known baseline. This means that you can also run simultaneous sessions in parallel without sacrificing performance.
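The flow described above (fresh container, ADB connection, per-locale UI tests) can be sketched from the runner side. This is a minimal illustration, not official tooling: it assumes adb is on the PATH and that anbox-connect has already exposed the container's ADB endpoint; the instrumentation target is a hypothetical placeholder, and setting persist.sys.locale relies on the rooted images Anbox Cloud containers provide.

```python
import subprocess

LOCALES = ["ja-JP", "ar-SA", "fi-FI", "tr-TR"]  # regions under test

def adb_commands(serial: str, locale: str) -> list:
    """Build the adb invocations for one locale pass against one container.

    The instrumentation target below is a hypothetical placeholder for
    your app's test suite; swap in your own package and runner.
    """
    return [
        # Set the system locale (requires a rooted image, which Anbox
        # Cloud containers provide).
        ["adb", "-s", serial, "shell", "setprop", "persist.sys.locale", locale],
        # Run the app's instrumented UI tests under that locale.
        ["adb", "-s", serial, "shell", "am", "instrument", "-w",
         "com.example.app.test/androidx.test.runner.AndroidJUnitRunner"],
    ]

def run_matrix(serial: str, locales=LOCALES) -> None:
    """Execute every locale pass, failing the CI job on the first error."""
    for locale in locales:
        for cmd in adb_commands(serial, locale):
            subprocess.run(cmd, check=True)

# Usage (the serial is whatever anbox-connect exposed):
# run_matrix("192.168.250.2:5559")
```

In a GitHub workflow, a script along these lines would run after the Anbox Cloud action has provisioned the container.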

Get started today

Whether you’re localizing a safety-critical IVI interface or pushing a mobile game to 30 markets, Anbox Cloud helps you test, adapt, and scale.

And here’s the best part: Anbox Cloud is included in Ubuntu Pro, which is free for personal use and up to five machines. With GitHub integration and built-in automation, your QA process stays in sync with your development pace.

Want to get hands-on? Check out our official documentation, or get started with the Anbox Cloud Appliance.

Curious to know how Anbox Cloud fits into your pipeline? Talk to us.

Further reading

Official documentation
Anbox Cloud Appliance


Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

27 October, 2025 08:00AM

hackergotchi for Pardus

Pardus

About the Repository Update for the Pardus 17 and Pardus 19 Releases

Due to the end of their support period, the software repositories for Pardus 17 and Pardus 19 will be moved to arsiv.pardus.org.tr on 31 October 2025. Users are strongly encouraged to update the relevant configuration files.

27 October, 2025 07:29AM by Hace İbrahim Özbal

hackergotchi for Qubes

Qubes

Qubes OS 4.3.0-rc3 is available for testing

We’re pleased to announce that the third release candidate (RC) for Qubes OS 4.3.0 is now available for testing. This minor release includes many new features and improvements over Qubes OS 4.2.

What’s new in Qubes 4.3?

  • Dom0 upgraded to Fedora 41 (#9402).
  • Xen upgraded to version 4.19 (#9420).
  • Default Fedora template upgraded to Fedora 42 (versions older than 41 not supported).
  • Default Debian template upgraded to Debian 13 (versions older than 12 not supported).
  • Default Whonix templates upgraded to Whonix 18 (upgraded from 17.4.3 in RC2; versions older than 18 no longer supported).
  • Preloaded disposables (#1512)
  • Device “self-identity oriented” assignment (a.k.a. New Devices API) (#9325)
  • Qubes Windows Tools reintroduced with improved features (#1861).

These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.

When is the stable release?

That depends on the number of bugs discovered in this RC and their severity. As explained in our release schedule documentation, our usual process after issuing a new RC is to collect bug reports, triage the bugs, and fix them. If warranted, we then issue a new RC that includes the fixes and repeat the process. We continue this iterative procedure until we’re left with an RC that’s good enough to be declared the stable release. No one can predict, at the outset, how many iterations will be required (and hence how many RCs will be needed before a stable release), but we tend to get a clearer picture of this as testing progresses.

At this time, we expect that there will likely be a fourth release candidate, which will probably be the final one.

How to test Qubes 4.3.0-rc3

Thanks to those who tested earlier 4.3 RCs and reported bugs they encountered, 4.3.0-rc3 now includes fixes for several bugs that were present in those prior RCs!

If you’d like to help us test this RC, you can upgrade to Qubes 4.3.0-rc3 with either a clean installation or an in-place upgrade from Qubes 4.2. (Note for in-place upgrade testers: qubes-dist-upgrade now requires --releasever=4.3 and may require --enable-current-testing for testing releases like this RC.) As always, we strongly recommend making a full backup beforehand and updating Qubes OS immediately afterward in order to apply all available bug fixes.

If you’re currently using an earlier 4.3 RC and wish to update to 4.3.0-rc3, please update normally with current-testing enabled. If you use Whonix, please also upgrade from Whonix 17 to 18.

Please help us improve the eventual stable release by reporting any bugs you encounter. If you’re an experienced user, we encourage you to join the testing team.

Known issues in Qubes OS 4.3.0-rc3

It is possible that templates restored in 4.3.0-rc3 from a pre-4.3 backup may continue to target their original Qubes OS release repos. This does not affect fresh templates on a clean 4.3.0-rc3 installation. For more information, see issue #8701.

View the full list of known bugs affecting Qubes 4.3 in our issue tracker.

What’s a release candidate?

A release candidate (RC) is a software build that has the potential to become a stable release, unless significant bugs are discovered in testing. RCs are intended for more advanced (or adventurous!) users who are comfortable testing early versions of software that are potentially buggier than stable releases. You can read more about Qubes OS supported releases and the version scheme in our documentation.

What’s a minor release?

The Qubes OS Project uses the semantic versioning standard. Version numbers are written as [major].[minor].[patch]. Hence, releases that increment the second value are known as “minor releases.” Minor releases generally include new features, improvements, and bug fixes that are backward-compatible with earlier versions of the same major release. See our supported releases for a comprehensive list of major and minor releases and our version scheme documentation for more information about how Qubes OS releases are versioned.
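As a toy illustration of the [major].[minor].[patch] scheme (not part of any Qubes tooling), classifying an upgrade is just a comparison of the version fields:

```python
def release_kind(old: str, new: str) -> str:
    """Classify an upgrade under [major].[minor].[patch] versioning."""
    o, n = old.split("."), new.split(".")
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"

# 4.2 -> 4.3.0 increments the second field, so it is a minor release.
```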

27 October, 2025 12:00AM

October 24, 2025

XSAs released on 2025-10-24

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-476
    • Qubes OS does not hot unplug PCI devices.

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

24 October, 2025 12:00AM

October 23, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical announces new optimized Ubuntu image for Thundercomm RUBIK Pi 3

Ubuntu now runs natively on the Thundercomm RUBIK Pi 3 developer board – a lightweight Pi created for AI developers, which runs on the Qualcomm Dragonwing™ QCS6490 processor.

October 23, 2025 – Today Canonical, the publisher of Ubuntu, announced an optimized, pre-installed Ubuntu image for RUBIK Pi 3 – a powerful AI developer board built on Dragonwing QCS6490. These new, optimized Ubuntu images reduce time to market through out-of-the-box functionality, and offer official long-term support from Canonical. The new Ubuntu image is also available to download and install for current users of RUBIK Pi 3. 

A rapid AI development platform, powered by Ubuntu

Credit: Thundercomm

AI is a fast-paced industry, and time is of the essence when launching new products. These optimized Ubuntu images offer developers on RUBIK Pi 3 access to cutting-edge open source, combined with the stability and robustness that Ubuntu is known for. These new images complement RUBIK Pi 3’s ease of use and accessibility. The images work seamlessly out of the box, and are fine-tuned for hardware performance and resource efficiency. 

“By delivering an optimized Ubuntu image preloaded on the powerful RUBIK Pi 3, we’re offering an integrated, securely designed, and supported foundation for AI developers,” said Cindy Goldberg, VP of Silicon Alliances at Canonical. “Our partnership with Qualcomm and Thundercomm allows developers to move from a concept to a deployed solution with speed and confidence.”

“At Thundercomm, we’re committed to lowering the barriers to AI innovation,” said Ali Mesri, Sr Vice President, Business Development at Thundercomm. “With the optimized Ubuntu image on RUBIK Pi 3, developers now have a unified platform that combines Qualcomm’s performance, Canonical’s stability, and Thundercomm’s deep system integration — enabling faster, more reliable AI deployment from concept to production.”

Powerful features on an accessible board

RUBIK Pi 3 is designed to make innovative hardware more accessible to AI developers. It offers a full-stack, end-to-end solution that is performant at low power. RUBIK Pi 3 consumes less than 6.5W, while being equipped with a 12 TOPS ML accelerator, an 8-core GPU, integrated Wi-Fi and Bluetooth, 8GB LPDDR4x RAM, and 128GB UFS 2.2 storage.

Built on Dragonwing QCS6490, RUBIK Pi 3 includes access to the Qualcomm® AI Hub with pre-optimized models, and access to the Edge Impulse MLOps platform for training and deployment. Canonical’s new optimized Ubuntu images are the latest development RUBIK Pi 3 offers – see how everything fits together in the table below:

Feature | Description
Optimized Ubuntu images | Prototype, test and deploy edge AI solutions faster with the world’s most popular enterprise Linux.
Edge AI silicon | High-performance, low-power AI chips.
Qualcomm AI Hub | Pre-optimized models for vision, audio, and NLP.
IMSDK | Intelligent Multimedia SDK, for developing HW-accelerated multimedia and AI applications.
QIRP SDK | Qualcomm® Intelligent Robotics Product SDK for developing robotics applications with ROS/ROS2.
Containers | Containerized SDKs and applications with access to HW acceleration (such as NPU, GPU, and VZPU).
Integrated developer environment | Qualcomm® VSCode IDE extensions to simplify device setup and the application development environment.

Download the optimized Ubuntu image for RUBIK Pi 3

Visit the Thundercomm RUBIK Pi 3 page to get your board and start building with the optimized Ubuntu image today.

About Canonical 

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/

Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patents are licensed by Qualcomm Incorporated.

Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.

Further reading

23 October, 2025 03:40PM

Ubuntu Blog: Introducing Canonical Academy

Validate your skills and advance your career with recognized qualifications from the publishers of Ubuntu

London, October 23 2025 – Canonical today announced the launch of Canonical Academy, a new platform that enables individuals and enterprises to validate their open source skills with qualifications designed and maintained by the engineers behind Ubuntu. The first available track is the SysAdmin track, which includes four exams that test practical expertise with Linux and Ubuntu. Successful candidates can earn digital badges that prove their ability to employers and peers.

Canonical Academy exams are designed to prepare learners for the real world, with modular, self-paced assessments that fit into busy schedules. The SysAdmin track launches with three exams today:

  • Using Linux Terminal – available now to the public
  • Using Ubuntu Desktop – available in beta for community testing
  • Using Ubuntu Server – available in beta for community testing


The fourth exam is currently in development and will be announced soon. To achieve the SysAdmin qualification, users must earn all the badges in the corresponding SysAdmin track.

Real-world exams, built by Ubuntu experts

Traditional certifications can leave gaps between theory and practice. Canonical Academy takes a different approach, creating assessments based on the challenges IT professionals face every day. With the launch of the SysAdmin track, candidates can prove their ability to navigate the Linux terminal, configure Ubuntu desktops, and manage servers in environments that mirror the workplace. New qualification tracks are also in development to broaden the exam portfolio. 

“Canonical Academy grounds its qualifications in realistic professional applications. The needs of current technical professionals are the foundation of our development process, from the earliest identification of critical job skills for the target occupation, to the design of realistic custom cloud exam environments, to the industry-expert code review of every hands-on item.”

Adrianna Frick, Academy Team Lead, Canonical

Modular and self-driven learning

Canonical Academy is designed to fit the needs of modern learners. Each exam is modular, meaning professionals can progress at their own pace while building towards an overarching qualification badge.  The system allows individuals to focus on the skills they need most, while enterprises can map training to specific roles.

Badges indicate the year of the Ubuntu LTS release the exam is based on. All current exams are aligned with Ubuntu 24.04 LTS. Updated exams for Ubuntu 26.04 LTS are expected in September 2026. 

Each exam is supported by a study guide based on the official exam content, helping test takers prepare effectively through guided, self-paced learning.

“The ‘Using Linux Terminal’ qualification provided me with a well-rounded understanding of the Ubuntu ecosystem. I’m enthusiastic about bringing this program to Indonesia to empower local talent with the open source skills that are increasingly in demand across the industry.”

Safira Zahira, Product Manager, Sivali Cloud Technology

Proof of skill that employers can trust

With Canonical Academy, successful candidates receive verifiable digital badges that demonstrate open source competence. Backed by Canonical, these credentials provide credible evidence of technical ability in a competitive job market.

“The test and the entire in-web-browser desktop environment was super cozy. I think the exam is incredibly valuable and will be a fantastic resource to a lot of aspiring and experienced Ubuntu users worldwide. Frankly, I thought I’d nail the Using Linux Terminal exam in 30 minutes.  But instead I was sweating.”

Nathan Haines, Community Council Member, Ubuntu

Get started today

The Using Linux Terminal exam is open to everyone today. 

If you are looking to contribute to Canonical Academy, you can sign up to be a Subject Matter Expert (SME) or a beta tester. As a subject matter expert, you’ll help define topics, advise on future content, and guide the direction of open source skills assessments. As a tester, you’ll preview and critique exams in development, shape the user experience, and access full exams at discounted rates.

Learn more

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Learn more at https://canonical.com/ 

23 October, 2025 03:00PM

Ubuntu Blog: Introducing silicon-optimized inference snaps

Install a well-known model like DeepSeek R1 or Qwen 2.5 VL with a single command, and get the silicon-optimized AI engine automatically.

London, October 23 – Canonical today announced optimized inference snaps, a new way to deploy AI models on Ubuntu devices, with automatic selection of optimized engines, quantizations and architectures based on the specific silicon of the device. Canonical is working with a wide range of silicon providers to deliver their optimizations of well-known LLMs to developers and their devices.

A single well-known model like Qwen 2.5 VL or DeepSeek R1 has many different sizes and setup configurations, each of which is optimized for specific silicon. It can be difficult for an end-user to know which model size and runtime to use on their device. Now, a single command gets you the best combination, automatically. Canonical is working with silicon partners to integrate their optimizations. As new partners publish their optimizations, the models will become more efficient on more devices.

This enables developers to integrate well-known AI capabilities seamlessly into their applications and have them run optimally across desktops, servers, and edge devices.

A snap package can dynamically load components. We fetch the recommended build for the host system, simplifying dependency management while improving latency.  The public beta includes Intel and Ampere®-optimized DeepSeek R1 and Qwen 2.5 VL as examples, and open sources the framework by which these are built.

“We are making silicon-optimized AI models available for everyone. When enabled by the user, they will be deeply integrated down to the silicon level,” said Jon Seager, VP Engineering at Canonical, “I’m excited to work with silicon partners to ensure that their silicon-optimized models ‘just work.’ Developers and end-users no longer need to worry about the complex matrix of engines, builds and quantizations. Instead, they can reliably integrate a local version of the model that is as efficient as possible and continuously improves.”

The silicon ecosystem invests heavily in performance optimizations for AI, but developer environments are complex and lack simple tools for unpacking all the necessary components for building complete runtime environments. On Ubuntu, the community can now distribute their optimized stacks straight to end users. Canonical worked closely with Intel and Ampere to deliver hardware-tuned inference snaps that maximize performance.

“By working with Canonical to package and distribute large language models optimized for Ampere hardware through our AIO software, developers can simply get our recommended builds by default, already tuned for Ampere processors in their servers,” said Jeff Wittich, Chief Product Officer at Ampere, “This brings Ampere’s high performance and efficiency to end users right out of the box. Together, we’re enabling enterprises to rapidly deploy and scale their preferred AI models on Ampere systems with Ubuntu’s AI-ready ecosystem.”

“Intel optimizes for AI workloads from silicon to high-level software libraries. Until now, a developer has needed the skills and knowledge to select which model variants and optimizations may be best for their client system,” said Jim Johnson, Senior VP, GM of Client Computing Group, Intel, “Canonical’s approach to packaging and distributing AI models overcomes this challenge, enabling developers to extract the performance and cost benefits of Intel hardware with ease. One command detects the hardware and uses OpenVINO, our open source toolkit for accelerating AI inference, to deploy a recommended model variant, with recommended parameters, onto the most suitable device.”

Get started today 

Get started and run silicon-optimized models on Ubuntu with the following commands:

sudo snap install qwen-vl --beta

sudo snap install deepseek-r1 --beta

Developers can begin experimenting with the local and standard inference endpoints of these models to power AI capabilities in their end-user applications. 
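As a hypothetical sketch of calling such a local endpoint from an application: the port, path, and payload below assume an OpenAI-compatible chat completions interface, which is an assumption to verify against the snap’s documentation, not its documented API.

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Assemble an OpenAI-style chat completion request.

    The /v1/chat/completions path is an assumed convention, not the
    snap's documented interface; adjust to match the real endpoint.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (once the snap's endpoint and port are confirmed):
# with request.urlopen(build_chat_request("http://localhost:8000",
#                                         "deepseek-r1", "Hello")) as resp:
#     print(resp.read().decode())
```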

Learn more and provide feedback

About Canonical 

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Learn more at https://canonical.com/

23 October, 2025 02:02PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Freedom of Choice instead of Lock-in – Why Open IAM Solutions Are Built for the Future

Open IAM solutions are the key to true freedom of choice. No more vendor lock-in—just secure, flexible, and scalable identity management. More control, more digital sovereignty, less dependency.

Relying on proprietary tools for digital identity management means giving up control—and often paying for it twice: once in high license fees, and again in lost flexibility. Open Identity & Access Management (IAM) systems put the power back in your hands. They give you the freedom to shape your IT on your own terms, not according to a vendor’s roadmap.

In this article, we’ll walk you through why open IAM takes you further: less dependency, more transparency, and a foundation that’s ready for whatever comes next.

Open Source IAM: Full Control and Open Standards

An Open Source IAM is like a toolbox: every piece is out in the open, you see exactly what’s inside, and you can add or rearrange parts whenever you need to. But “open” goes far beyond just reading the source code. Open IAM solutions are transparent by design, built on open standards, and licensed for free use—no hidden strings attached. You can review the software, patch and extend it, and connect it with other systems without having to rely on a vendor’s goodwill.

Nothing’s locked in a black box, nothing depends on whether a manufacturer feels like supporting you. With proprietary systems, the vendor calls the shots: how long security updates are delivered, which interfaces are available—and how much you’ll pay. With an Open Source IAM, you stay in charge. You decide which building blocks to use, and you can adapt or expand your environment at any time.

And the best part? This toolbox grows with you. Whether you’re running a school district, a government agency, or a company: new applications can be integrated seamlessly. Moving to the cloud is just as smooth as running in parallel with your existing legacy systems. No proprietary roadblocks—just a flexible platform that adapts to your rules.

IAM Lock-in: Convenient Today, Costly Tomorrow

Imagine this: you need an IAM solution fast. A vendor shows up with a shiny package that promises everything—quick to install, wrapped in a shiny dashboard, pitched as “all-in-one convenience.” Sounds like the perfect solution, right?

The reality check comes later. Suddenly your data and identities are locked away in a black box. Migration? Painful and expensive. Adding new applications? Only with extra modules—if at all. Before you know it, the vendor is the one shaping your IT, not you.

Vendor lock-in comes at a double cost: money and flexibility. When you don’t control your digital identities, your processes inevitably bend to a vendor’s rules instead of serving your own goals.

Closed systems may look convenient at first glance. But behind the fancy interface lie serious risks:

  • Vendor lock-in: rising prices, complex contracts, and near-impossible provider switching
  • Limited integrations: only a handful of apps fit the mold—everything else gets left out
  • Black-box security: how updates are handled or vulnerabilities are patched often remains in the dark

In the end, it all adds up to dependency. That “easy” all-in-one package you bought today could leave you stuck in a dead end tomorrow.

The Benefits of Open IAM Solutions at a Glance

Open IAM systems redefine the rules: instead of a closed black box, you get a modular toolbox where every piece is visible, interchangeable, and ready to grow—transparent, flexible, and free of lock-in:

  • Control: Run the solution in your own data center or in a sovereign cloud. Your data stays exactly where it belongs.
  • Extensibility: The community and partner networks can contribute new ideas, features, and add-ons.
  • Predictability: No hidden license traps, no nasty budget surprises, no dependence on a single vendor.

In short: with Open Source IAM, you design your infrastructure on your own terms—not according to a vendor’s contract.

Standardized IAM Interfaces as a Game Changer

What good is the best IAM if it doesn’t talk to anything else? Open systems rely on standards like LDAP, SAML, OpenID Connect, or SCIM—a common language that’s understood worldwide. This keeps your identities portable: you decide which systems are connected and maintain full control over data handling and role models.

Proprietary systems, on the other hand, often invent their own dialect. That may seem convenient in the short term, but every extension becomes unnecessarily complex. Open IAM solutions speak the established languages of the IT world—making them compatible with nearly any application. Whether it’s email, a cloud service, a learning platform, or a specialized business system: with open standards, your IAM integrates seamlessly and keeps your infrastructure flexible and future-proof.

IT Security and Data Protection with Open IAM

More security? Yes, but without the frustration for your users. A modern Open Source IAM delivers strong protection without making everyday work more complicated. With Single Sign-on (SSO), one login is all it takes for every application—no more password chaos or sticky notes. Add Multi-Factor Authentication (MFA) to reliably secure sensitive areas like HR records or administrative portals.

Open IAM is especially strong when it comes to data protection. Access follows the “need-to-know” principle, and every action is cleanly logged. That makes it easier to meet GDPR requirements in practice: data minimization, transparency, accountability. An open IAM isn’t a black box—it’s a reliable foundation for IT security and digital sovereignty.

Open IAM as the Key to True Independence

Digital identities are the control center of every modern IT environment. Choose a black box here, and you lose flexibility—paying the price in high costs and dependency. Open IAM like Nubus takes you further: you stay in control, remain independent, and build an infrastructure that’s ready for tomorrow. That’s what real digital sovereignty means: freedom of choice, transparency, and being ready for what’s next.

Ready for real digital freedom? Start your Open IAM journey now and book a meeting.

Der Beitrag Freedom of Choice instead of Lock-in – Why Open IAM Solutions Are Built for the Future erschien zuerst auf Univention.

23 October, 2025 12:57PM by Yvonne Ruge

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: ESWIN Computing launches the EBC7702 Mini-DTX Mainboard with Ubuntu 24.04 LTS

October 23, 2025 – Today, ESWIN Computing and Canonical announced the pre-installation of Ubuntu on the EBC7702 Mini-DTX Mainboard – a hardware platform designed to offer developers high computing power in resource-constrained environments. Developers will now be able to benefit from the stability and mature ecosystem of Ubuntu out of the box, on top of the rich feature set offered by the RISC-V-compliant, multi-purpose system on chip (SoC) EIC7702X.

This development follows the July 2025 launch of the EBC7700 single board computer. With the introduction of the EIC7702X dual-die SoC, developers gain access to twice the CPU and AI processing cores, an extended set of peripherals, and enhanced performance – forming part of a wider initiative by Canonical and ESWIN Computing to offer choice and consistency to developers across product lines.

Delivering innovation on RISC-V platforms

When operating intensive use cases in resource-constrained environments, it’s important that both your hardware and software offer the right balance of computing power and convenience. Whether you’re building AI applications on embedded hardware, or testing industrial automation on manufacturing equipment, your hardware-software stack should facilitate fast testing and development. Ubuntu is known for its versatility and robustness, has a solid track record in applications like edge AI and embedded computing, and is regularly ranked as the top Linux OS by developers. The pre-installation of Ubuntu 24.04 LTS ensures that developers can get the most out of the EBC7702 Mini-DTX Mainboard, and develop applications at pace.

“We are excited to continue our collaboration with ESWIN Computing through the launch of their new dual-die variant for the EBC77 series. By combining increased performance with the flexibility and reliability of Ubuntu, developers can now explore wider possibilities and innovate across more RISC-V platforms. Having Ubuntu pre-installed ensures developers can immediately begin testing and building. Our partnership with ESWIN Computing continues to show how open standards and collaborative innovation drive meaningful progress,” says Jonathan Mok, Silicon Alliances Ecosystem Development Manager at Canonical.

A board designed for high-performance computing

The EBC7702 Mini-DTX Mainboard offers considerable computing power in a compact form factor of just 203mm x 170mm. The board also comprises the core components needed for high-performance computing. At its heart is a 64-bit, 8-core RISC-V CPU that delivers strong AI performance, backed by fast on-board LPDDR5 memory running at 6400Mbps, available in either 32GB or 64GB configurations. Users can also draw upon varied connectivity and power options, such as four gigabit Ethernet ports, two HDMI out connectors, and multiple USB ports. The full list of features is set out below:

Form Factor: 203mm x 170mm
CPU: 64-bit OoO RISC-V RV64GC 8-core processor
AI Processor: Up to 40 TOPS in INT8, 20 TOPS in INT16, and 20 TFLOPS in FP16
Memory: On-board 32GB/64GB LPDDR5@6400Mbps
Storage: 1x On-board 32GB eMMC
         2x On-board 16MB SPI Flash
         1x M.2 M-Key SATA connector
         1x Micro SD card connector
         1x On-board 2Kbit EEPROM
Network: 4x Gigabit Ethernet RJ45 connectors (one supports Wake-on-LAN)
         1x On-board 802.11ac dual-band Wi-Fi module
Display: 2x HDMI out connectors
         2x 4-lane MIPI DSI connectors
Camera: 4x 4-lane MIPI CSI connectors
Audio: 1x Audio in connector
       1x Audio out connector
       1x Header for audio line-out and line-in signals on front panel
PCIe: 2x 4-lane PCIe Gen3.0 x16 slots
USB: 2x USB3.0 Type-A (stacked)
     2x USB2.0 Type-A (stacked)
     1x USB 19-pin connector to support 1x USB3.0 Type-A on front panel
     1x USB Type-E connector to support 1x USB3.0 Type-C connector on front panel
Power Supply: ATX power supply connector
Debug: 1x USB2.0 Type-C port for on-board MCU debugging or SoC debugging (default)
RTC: 1x CR1220 RTC battery holder (battery not included)
Fan: 4x 2.54mm 4-pin 12V fan headers
Other I/O: I2C, I2S, SPI, UART, general I/O ports (mapped on 40-pin header)

“We are proud to announce the world premiere of the EBC7702 Mini-DTX Mainboard, powered by our EIC7702X SoC – the industry’s first RISC-V die-to-die solution with cache-coherent interconnected dual chiplets. This architecture provides exceptional scalability and performance, representing a major advance in high-performance RISC-V computing,” said Haibo Lu, Executive Vice President of the Embodied Intelligence Business Group of ESWIN Computing, “By partnering with Canonical and pre-integrating the widely adopted Ubuntu OS, we are providing a robust, ready-to-use platform backed by one of the world’s most vibrant open-source communities, empowering developers to immediately leverage its rich ecosystem and accelerate the next generation of intelligent computing solutions on RISC-V.”

With the launch of the EBC7700 single board computer back in July, Ubuntu developers can now choose between the EBC7700 SBC for light workload applications (like education and software development), thanks to its compact size and affordable price, and the EBC7702 Mini-DTX Mainboard for heavy workload use cases, such as AI-assisted supercomputing and intelligent video analysis. With Ubuntu supported across both platforms, developers have a consistent foundation for whatever their use case may be.

Close-up of ESWIN EIC7702X system-on-chip (left); EBC7702 Mini-DTX Mainboard (right)

Getting started

Head over to our download webpage to find the compatible Ubuntu images for the EBC7702 Mini-DTX Mainboard and more materials on how to get started with the board. 

For anyone attending the RISC-V Summit North America 2025, the EBC7702 Mini-DTX Mainboard will be showcased there from October 21-23, 2025. Visitors to Santa Clara will be able to experience the platform first hand at Canonical’s booth.

Pre-orders for the EBC7702 Mini-DTX Mainboard will open soon. If you’d like to get your hands on one, please visit the ESWIN Amazon online store (either the US, UK, German, or French site) or the Taobao store for the latest updates.

Canonical’s commitment to RISC-V

Canonical is committed to accelerating innovation through open source, empowering developers to bring their products to market faster by providing a stable and reliable platform. With RISC-V rapidly emerging as a competitive instruction set architecture across diverse markets and industries, Canonical’s decision to bring Ubuntu to RISC-V was a natural step. Through Canonical’s partnership with ESWIN Computing, we are ensuring that Ubuntu becomes the reference operating system for early adopters on RISC-V platforms, supporting the next wave of open hardware innovation.

Are you using RISC-V in your project?

Canonical partners with silicon vendors, board manufacturers, and leading enterprises to shorten time-to-market. If you are deploying Ubuntu on RISC-V platforms and want access to ongoing bug fixes and security maintenance, or if you wish to learn more about our solutions for custom board enablement and application development services, please reach out to Canonical.

If you have any questions about the platform or would like information about our certification program, contact us.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/.

About ESWIN Computing

ESWIN Computing is a provider of intelligent solutions in the AI era. Focusing on smart devices and embodied intelligence as our two core application scenarios, ESWIN Computing is adopting next-generation RISC-V computing architecture, innovating domain-specific algorithms and IP modules, and constructing efficient and open software-hardware platforms to deliver highly competitive system-level solutions for customers worldwide. Learn more at www.eswincomputing.com.

23 October, 2025 08:05AM

October 22, 2025

hackergotchi for Deepin

Deepin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Alibaba Damo Academy and Canonical partner to deliver Ubuntu on XuanTie and drive RISC-V innovation

Alibaba Damo Academy and Canonical today announce a new collaboration to bring the Ubuntu operating system to the latest XuanTie C930 processor. This collaboration will give users access to a robust, reliable and production-ready platform for modern workloads running on the XuanTie processor family, helping to advance RISC-V adoption. 

Enhanced Ubuntu experience on XuanTie

Alibaba Damo Academy, the team behind the XuanTie processor family, drives advancements in intelligent and secure computing architectures built around the RISC-V ecosystem. Alibaba Damo Academy and Canonical are collaborating to enhance Ubuntu support on XuanTie, helping to drive greater RISC‑V maturity and enabling smoother integration into the expanding RISC-V landscape. This partnership focuses on strengthening community collaboration within the RISC‑V ecosystem, and improving software readiness on XuanTie platforms using Ubuntu and RVA23 as the profile and platform of choice.

Since both Alibaba Damo Academy and Canonical are members of the RISC-V Software Ecosystem (RISE), the joint efforts will help accelerate open source adoption and promote consistent interoperability between hardware and software. A key aspect of this work includes alignment with the RVA23 profile, designed to improve portability across hardware designs and simplify development, further advancing the RISC‑V ecosystem’s growth and accessibility.

For developers and XuanTie users, this partnership will deliver an integrated Ubuntu experience for XuanTie hardware all backed by Canonical’s trusted long-term support and rigorous security maintenance. 

“We are very excited to be partnering with Alibaba Damo Academy to bring Ubuntu to the latest XuanTie platform. Our teams have already done great work together at the community level for earlier XuanTie processors like the C906, including showing Ubuntu running on a XuanTie C930 FPGA at the Shanghai RISC-V Summit in July 2025. This partnership represents the next step in deepening our relationship, and we look forward to working with Alibaba Damo Academy to get Ubuntu into the hands of even more XuanTie developers,” said Cindy Goldberg, Vice President of Silicon Alliance at Canonical.

“We are excited to deepen our collaboration with Canonical, marking another important step toward the real-life applications of RISC-V. Together, we have successfully brought Ubuntu to multiple ecosystem products based on the XuanTie processor. Moving forward, we look forward to advancing our joint work and continuing to unlock the value of RISC-V in various computing scenarios, while further contributing to the growth of the RISC-V community,” said Jing Yang, Vice President of RISC-V at Alibaba Damo Academy.

Get in touch

If you have any questions about the platform or would like information about our silicon or RISC-V program, contact us.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Learn more at https://canonical.com/ 

22 October, 2025 10:02AM

Ubuntu Blog: Discover your fully open source robotics observability at ROSCon 2025

Another year, another ROSCon! This year we’re heading to Singapore, and Canonical is once again thrilled to sponsor this important community event. Just like last year in Odense, Denmark, we’re looking forward to the talks and workshops, which always inspire us and teach us new things about robotics. We’re excited to reconnect with our Southeast Asia community, especially after our earlier gathering at Canonical’s IoT day in Singapore.

We’re really looking forward to sharing some of the work we have done in the robotics space this year, alongside our partners Advantech & Botmind. From Advantech’s powerful platforms for robotics to Botmind’s unified fleet management solutions, our booth showcases collaborative efforts designed to help and guide ROS developers as they aim to simplify complexity and accelerate innovation.

Here’s a quick overview of what we’ll be showcasing at ROSCon booth 51/52, featuring our partners Advantech and Botmind. 

A new open source observability stack

Our mission is to bring software to the widest audience. We took the latest step in this mission by bringing together popular open source tools, including Grafana, Prometheus, and Loki, to make it easy to set up a fully functional observability infrastructure for ROS 2 devices using Ubuntu. The same infrastructure used by our telco, logistics, aerospace, and data center customers is now available for robotics makers and ISVs. 

The infrastructure is designed to provide a unified platform for both open source and custom enterprise solutions (e.g. Botmind), allowing companies to bring their own or preferred applications and tools into a tested, reliable, and open source infrastructure.

What can you do today with this beta observability stack?

  • Easily onboard your Ubuntu devices
  • Automatically upload your ros2bags to a self-hosted server
  • Remotely monitor a fleet of ROS 2 robots
  • Access live ROS data
  • Access live Ubuntu system data & logs
  • Trigger alerts for events such as low battery or network loss

Learn More at ROSCon 2025

If you’re attending ROSCon 2025, we’ve got two exciting opportunities for you to dive deeper into observability for robotics systems.

Workshop: Demonstrating the Canonical Observability Stack for Devices

📍 Room: 330
🗓️ Date: Monday, October 27, 2025
Time: 10:30 AM – 11:30 AM

Join us for a hands-on workshop where we’ll demonstrate how the Canonical Observability Stack helps you monitor, debug, and optimize your robotic devices.
> All ROSCon attendees are welcome, even without the workshop ticket — no RSVP required!

Talk: Open-Source Robotics Observability at Scale!

📍 Track: Debugging

🗓️ Date: Tuesday, October 28, 2025
Time: 4:00 PM – 4:10 PM SGT

In this talk, we’ll explore how open-source tools make large-scale observability and debugging in robotics simpler, faster, and more powerful.

Workshop: Hands-on ROS 2 with Rubik Pi 3 

📍 Room: 330
🗓️ Date: Monday, October 27, 2025
Time: 11:30 AM – 12:30 PM

Join our partner Qualcomm to learn how to run ROS 2 on the RUBIK Pi 3, a compact platform optimized for edge robotics, powered by Ubuntu.

An “all-in-one“ ROS fleet management system

At ROSCon, we’re not just talking about observability; we’re showing how everything can come together in a real-world deployment. That’s why we’re excited to team up with Botmind, a Singapore-based robotics platform innovator, and deploy their fleet management service on top of our open source Canonical Observability Stack (COS) infrastructure.

Botmind’s mission is to simplify how businesses manage robot fleets. They build an “all-in-one” control platform that integrates multiple robots, real-time tracking, AI-powered scheduling, analytics, and a unified API layer. Their vision is bold: to let robotics operators manage everything, from mission assignments to health monitoring, via a single, intuitive control plane.

In our demo at ROSCon, we’ll show how Botmind’s proprietary fleet manager integrates with our COS infra, enabling:

  • Deployment of their fleet management services within a robust, open-source stack
  • Seamless interaction between Botmind’s APIs and our observability tools
  • Live monitoring and orchestration of ROS-based robots through a unified dashboard
  • End-to-end integration from the ROS layer all the way to fleet-level commands

Through this demo, we aim to prove that you can combine open observability and enterprise robot control in a modular, scalable way. 

Visitors to booth 51/52 will be able to see first-hand how Botmind’s system works in tandem with our COS stack, giving developers, integrators, and system architects a compelling reference architecture for real robotic deployments.

A new foundation for autonomous robotics

We’re proud to collaborate with Advantech to showcase how advanced platforms and Ubuntu-based solutions accelerate the development of Autonomous Mobile Robots and robotic systems. Together, we’re addressing some of the toughest challenges faced by robotics developers, from real-time edge computing to secure and compliant deployments.

At ROSCon 2025, you can discover Advantech’s robotics platforms, powered by Ubuntu and its real-time kernel. Designed for ROS 2, they provide a unified and scalable hardware-software foundation that speeds up robot prototyping and deployment. Advantech’s AFE and ASR series edge computers integrate CPU, GPU, and NPU computing with industrial-grade reliability, supporting:

  • Time-synchronized sensor fusion across LiDAR, IMU, and camera inputs
  • Flexible modular I/O design for motor control, navigation, and perception
  • Wide voltage and ruggedized design for demanding environments
  • Pre-validated ROS 2 environments and Advantech’s own ROS nodes for Modbus and OPC-UA integration

With Ubuntu Pro, Advantech extends long-term support, security, and maintenance for its Ubuntu-based hardware, including ESM for ROS, ensuring a consistent, secure, and reliable foundation throughout the robot’s lifecycle.

In our joint demo at booth 51/52, Canonical and Advantech will showcase how developers can move from prototype to production faster using Advantech hardware combined with Ubuntu Core, Canonical’s secure, immutable, and reliable operating system designed for edge deployments.

Ubuntu and NVIDIA Jetson Thor

Canonical recently announced official Ubuntu support for the NVIDIA Jetson Thor family, extending our collaboration with NVIDIA to accelerate AI innovation at the edge. In addition, Canonical has announced that it will support and distribute NVIDIA CUDA directly within Ubuntu’s repositories, making it easier than ever for developers to access GPU acceleration natively on Ubuntu. This partnership ensures that developers can rely on the same enterprise-grade security, stability, and performance on Jetson Thor that powers Ubuntu across clouds and data centers.

Observability on Jetson: monitoring the almighty Thor

In our demo at ROSCon, visitors will see COS running directly on an NVIDIA Jetson Thor device. Using the Grafana Agent, the system continuously collects rich performance and telemetry data from the Jetson platform, including CPU, GPU, and memory metrics, visualized in real time through Grafana dashboards.

By bringing COS to Jetson Thor, Canonical showcases how open source observability can extend all the way from the robot’s edge hardware to cloud-scale operations, empowering developers and integrators to optimize performance and reliability across every layer.

For more information, please visit NVIDIA at ROSCon.

See you soon in Singapore! 

We can’t wait to see you at ROSCon! Join us to explore the latest advancements, connect with fellow innovators, and discover how Ubuntu and our partners are shaping the future of robotics. See you there!

22 October, 2025 08:47AM

hackergotchi for ZEVENET

ZEVENET

What the AWS Outage Revealed About Global Continuity in the Cloud

In the early hours of October 20, 2025, Amazon Web Services (AWS) experienced one of the most significant service disruptions of the year. The incident originated in the US-EAST-1 (Northern Virginia) region — one of the provider’s oldest and busiest zones, hosting critical components of its global services.

For several hours, thousands of applications and online platforms — including Snapchat, Fortnite, Duolingo, Alexa, Coinbase, and Robinhood — suffered connection errors, latency issues, or partial outages.

AWS later confirmed an increase in errors and response times across several services tied to that region.

What Really Happened in Virginia

Technical analyses and AWS status reports indicate that the issue was linked to a failure in the internal DNS resolution system, one of the most critical elements of its infrastructure. The Domain Name System (DNS) translates domain names (such as api.mycompany.com) into IP addresses so that applications can communicate with each other.

When this system fails, servers may continue running, but they can no longer “find” one another — requests are lost because domain names cannot be resolved. In this case, the outage affected AWS’s internal DNS service, which depends on DynamoDB to store DNS zone data.

As that service degraded, many applications could no longer resolve the names of their own instances or connect to their databases, even though the servers themselves remained operational.
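The failure mode is easy to reproduce in miniature. A minimal Python sketch (the hostname is hypothetical, using the reserved .invalid TLD so it can never resolve) shows that when name resolution fails, a client gives up before it ever attempts a connection, even if the target server is healthy:

```python
import socket

def resolve(hostname: str):
    """Return the resolved IP addresses for a hostname, or None on DNS failure."""
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        # The server may be running fine, but without a name-to-IP mapping
        # the client never gets far enough to open a connection at all.
        return None

# A name that cannot resolve behaves exactly like the outage scenario:
print(resolve("db.internal.invalid"))  # -> None
```

This is why the affected applications saw errors rather than slow responses: the requests were lost at the naming layer, not at the servers.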

The result was a regional failure with global impact, as countless organizations rely on infrastructure hosted in Virginia to operate their services — even when their users connect from other continents. It was not a global AWS outage, nor a multi-region event, but rather a localized failure in a critical region that exposed how much dependency can concentrate in the cloud.

Designing for Continuity: The Role of Global Traffic Management

The cloud offers scalability and simplified management — but it does not eliminate architectural responsibility. Replicating servers within the same region (for example, across “Availability Zones”) does not guarantee continuity if the entire region becomes unavailable.

Building a truly robust infrastructure means designing for the complete loss of a region — and still being able to keep services online. This is where global traffic management, also known as Global Server Load Balancing (GSLB), becomes essential. Such a system operates above individual data centers or regions.

It continuously monitors multiple distributed endpoints and automatically redirects traffic to the one that remains available and responsive. If a region stops responding — as happened in Virginia — the load balancer can update public DNS records so that users are routed to another active environment.

In practice, this mechanism provides two fundamental benefits:

  • Automatic failover between regions, ensuring that an outage in one location does not interrupt global service.
  • A foundation for disaster recovery, since continuity no longer depends on manual actions or static configurations.
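The failover decision described above can be sketched as a simple health-check selection loop. This is an illustrative Python model only (region names and IPs are documentation examples, and real GSLB products probe endpoints and push DNS updates rather than returning a value):

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    ip: str
    healthy: bool  # in practice, set by a periodic HTTP/TCP health probe

def pick_dns_answer(regions, preferred):
    """Return the IP to publish in DNS: the preferred region while it is
    healthy, otherwise the first healthy fallback (active-passive GSLB)."""
    ordered = sorted(regions, key=lambda r: r.name != preferred)
    for region in ordered:
        if region.healthy:
            return region.ip
    return None  # total outage: nothing left to advertise

regions = [
    Region("us-east-1", "198.51.100.10", healthy=False),  # failed region
    Region("eu-west-1", "203.0.113.20", healthy=True),
]
print(pick_dns_answer(regions, preferred="us-east-1"))  # -> 203.0.113.20
```

The key property is that the answer changes automatically when the preferred region's health probe fails, with no manual reconfiguration.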

However, for this approach to be truly effective, the regions or data centers involved must be completely independent from each other. If both environments share the same control plane, internal DNS, or network services, a failure in that common layer could affect both simultaneously.

That’s why GSLB can only ensure real continuity when deployed between operationally isolated environments. In other words: GSLB would not have prevented the AWS outage, but it would have allowed organizations with independent regional architectures to keep their services running while the affected region recovered.

How SKUDONET Applies This Approach

SKUDONET Enterprise Edition integrates a Global Server Load Balancing (GSLB) system designed to maintain service availability across geographically distributed data centers or regions.

Operating at the DNS level, it continuously monitors the health of applications in each location. If one site becomes unavailable, it automatically updates DNS resolution to redirect users to another operational data center.

The GSLB can operate in active-passive mode, ensuring automatic recovery in Disaster Recovery scenarios, or in active-active mode, sharing traffic between multiple data centers to optimize latency and overall performance.

Its design allows combining environments within the same provider or across different ones — as long as they remain operationally independent — thereby avoiding single points of failure.

In this way, SKUDONET provides an external control layer that strengthens high availability and service continuity strategies, even during severe regional disruptions.

📘 Technical reference:
How Global Server Load Balancing works in SKUDONET

The AWS outage in Virginia showed that even the most mature infrastructures can experience critical regional failures. The lesson is not to avoid the cloud, but to design with failure in mind — assuming that any region can go offline at any time. Separating environments and managing traffic at a global level does not eliminate errors, but it allows business operations to continue when they occur.

22 October, 2025 06:24AM by Nieves Álvarez

October 21, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: The FuturFusion Cloud stack

Besides my regular open source contributions and running my own consulting business (Zabbly), I’m also the CTO and co-founder of an Open Source company called FuturFusion where I’ve been running Engineering for well over a year now.

My main focus over there has been building the FuturFusion Cloud stack, a completely open-source private cloud solution built around Incus. As part of that, our engineering team has been hard at work over the past year or so, improving Incus itself but also building a number of other projects from the ground up to make it easy to build and operate large scale Incus deployments.

Our stack is made of 4 core components:

  • Incus itself as the private cloud platform that runs virtual machines, system containers and application containers, with full clustering and multi-tenancy as well as support for a variety of storage and networking options to fit most environments.
  • IncusOS (shipping as HypervisorOS to our customers) that acts as our base layer Operating System image running on all physical servers as well as in the virtual machines that run our other components. It’s an immutable OS image based on Debian 13 and using systemd’s tooling to provide a safe boot experience and full disk encryption through the use of UEFI Secure Boot and TPM 2.0 modules. It uses an A/B update scheme, guaranteeing no variance in software between servers and an easy rollback mechanism should something go wrong. It’s completely locked down with API-only access and optional central management through Operations Center.
  • Operations Center provides an overview of an entire deployment, keeping track of all individual servers (running HypervisorOS), centrally managing all updates, handling Incus cluster creation and then acting as a global inventory of every Incus resource across all clusters.
  • Migration Manager is our migration tool which currently focuses on migrating from VMware (vCenter or standalone ESXi) over to Incus. It can connect to a large number of source VMware environments as well as target Incus clusters. It can easily keep track of hundreds of thousands of VMs that need to be migrated, making it easy to create migration batches and schedule those over weeks or months, running regular data pre-migration and finally completing the migration during scheduled downtime windows.

I recently took a bit of time away from customer deployments to record a video of how everything fits together, including an end to end lab deployment, starting from a pre-existing VMware environment and going all the way to having two Incus clusters running and the VMware VMs fully converted to Incus VMs.

In addition, for those interested in the security aspect of things, I gave a talk a few months back about IncusOS’ security story at the Linux Security Summit in Denver, Colorado. The recording of which has since been made publicly available.

Now our focus on the engineering front is primarily on filling a few remaining gaps, as well as putting together up-to-date, comprehensive documentation on IncusOS, Migration Manager and Operations Center. This will then make it easy for anyone to get started with those as well as hopefully attract more contributors to those projects.

On the topic of contributors, none of this would have been possible without the 112 individuals who contributed to the Incus project in the past year, thank you!

21 October, 2025 07:57PM

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.3.0-33 Released

This release of Clonezilla live (3.3.0-33) includes major enhancements and bug fixes.

ENHANCEMENTS AND CHANGES SINCE 3.2.2-15

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2025/Oct/17).
  • The Linux kernel was updated to 6.16.12-1.
  • Partclone was updated to 0.3.38, which includes a fix for a btrfs-related issue.
  • Added a new program, ocs-blkdev-sorter, which allows udev to create Clonezilla alias block devices in /dev/ocs-disks/. This is used by the udev rule 99-ocs-sorted-disks.rules.
  • Added the "-uoab" option to ocs-sr and ocs-live-feed-img to support selecting Clonezilla alias block device names in the TUI. This experimental feature addresses the random ordering of kernel block devices and can currently only be enabled via a command-line parameter.
  • Improved the performance of ocs-get-dev-info.
  • Improved ocs-blk-dev-info to ensure 'jq' works correctly in some cases and to increase efficiency.
  • Added ocs-cmd-screen-sample, which can be used with the "run again" script. It works with screen, tmux, and the console.
  • Added support for imaging MTD block and eMMC boot devices in expert mode. The options "-smtd", "-smmcb", "-rmtd", and "-rmmcb" can be used for saving or restoring.
  • Added a new program, ocs-live-gen-ubrd, to merge an OCS zip file with a U-Boot enabled bootable raw image, creating a new U-Boot enabled OCS live raw disk.
  • The en_US language file was refined.
  • The full repack command is now saved (for informational purposes) into ./{live/}Clonezilla-Live-Version.
  • Formatted the output of ocs-scan-disk for better readability.
  • Added the 'atd' and 'cron' packages to the live system; their services are disabled by default.
  • The live-boot package was updated to version 20250815 and patched to include the 'ethdevice-link-timeout' boot parameter, allowing users to set the timeout for the Ethernet device linking status.
  • Set 'ethdevice-link-timeout=7' in the live client. This changes the Ethernet device linking status timeout to 7 seconds (down from the default of 15 seconds). Ref: https://sourceforge.net/p/clonezilla/discussion/Open_discussion/thread/579168f80f/
  • ocs-blk-dev-info: This is a new program that outputs block device information in JSON format.
  • Switched to using fbterm by default for locale and keymap selection.
  • Moved the locale and keymap selection to the login shell, allowing fbterm to run in an interactive tty.
  • ocs-lang-kbd-conf: Added the "-f" and "-t" options.
  • Added a mechanism to automatically set the console font size based on the console's columns and rows, if a size is not already assigned.
  • Added a new program, ocs-live-time-sync, which is used by ocs-live-netcfg. Time synchronization will be performed when an internet connection is available.
  • Implemented a mechanism to set the timezone by syncing from the BIOS time when no internet connection is available.
  • Included the 'upower' package in the live system.
  • Added a mechanism to check if LVM thin provisioning exists. If it does, the program will quit. Ref: https://github.com/stevenshiau/clonezilla/issues/133
  • ocs-iso-2-onie: Updated to support modern Debian, specifically handling mkinitramfs multiple segments.
  • ocs-live-hook.conf: Forced the "loop" module to be added to initramfs.
  • Added '-gb3' and '-cb3' options for b3sum in the TUI menu. Renamed '-gb' to '-gb2' and '-cb' to '-cb2'. Ref: https://github.com/stevenshiau/clonezilla/issues/52
  • Recovery ISO/zip file names now include the CPU architecture.
  • Added 'dhcpcd-base' to the live packages list, as 'dhclient' is deprecated. Thanks to q2dg. Ref: https://github.com/stevenshiau/clonezilla/issues/145

BUG FIXES

21 October, 2025 12:31PM by Steven Shiau

hackergotchi for Deepin

Deepin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What’s new in security for Ubuntu 25.10?

Ubuntu 25.10 Questing Quokka has landed, marking the final interim release before Ubuntu 26.04 LTS, and it’s a bold one. Interim releases have always been the proving grounds for features that define the next LTS, and this cycle is no exception. From memory-safe reimplementations of foundational tools to hardware-backed encryption, post-quantum cryptography preparedness, and confidential computing, 25.10 pushes Ubuntu security into its next era, and the trajectory is clear: Ubuntu is building a more secure foundation for the next decade of computing.

Memory safety takes center stage

Ubuntu 25.10 defaults to sudo-rs, a Rust implementation of sudo. This change directly addresses a history of memory corruption vulnerabilities in security-critical code. The sudo vulnerability CVE-2021-3156, which existed undetected from 2011 to 2021, demonstrates why this matters: memory safety guarantees at the compiler level prevent entire categories of these bugs.

Similarly, we now ship rust-coreutils as the default provider of utilities like ls, cat, and cp. The GNU implementations remain available, and users can switch between them if needed. We maintain a compatibility matrix documenting behavioral differences, though most users won’t encounter any issues. Performance varies by operation: base64 encoding is notably faster, while some operations show minimal change.
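The kind of behavioral parity the compatibility matrix tracks can be spot-checked from a script. A minimal sketch, assuming a Unix-like system with a `base64` utility (GNU or rust-coreutils) on the PATH, compares its output against Python's reference encoder:

```python
import base64
import subprocess

# Whichever implementation provides `base64`, its output for the same
# input should match Python's reference base64 encoding.
data = b"questing quokka"
expected = base64.b64encode(data).decode()

result = subprocess.run(["base64"], input=data, capture_output=True, check=True)
print(result.stdout.decode().strip() == expected)  # -> True
```

The same pattern extends to any utility whose output you depend on in scripts.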

For users who need the traditional sudo, it’s available as sudo.ws. Existing sudo configurations work without modification. This parallel availability allows thorough testing while maintaining a fallback path.

TPM-backed full disk encryption gets real

The TPM-backed Full Disk Encryption implementation has matured considerably in this release, though it remains experimental. New capabilities include:

  • Passphrase support with proper management interfaces
  • Recovery key regeneration for improved key management
  • Better integration with firmware updates to prevent boot issues

There are important compatibility considerations. The feature is incompatible with Absolute (formerly Computrace) security software; systems must choose one or the other. Additionally, certain hardware configurations require specific kernel modules that may not be available in the TPM-secured kernel. Users should test thoroughly with their specific hardware before considering deployment.

This work targets production readiness in Ubuntu 26.04 LTS. Testing and feedback during the 25.10 cycle will directly influence the LTS implementation.

Network Time Security by default

Ubuntu 25.10 replaces systemd-timesyncd with Chrony as the default time daemon, configured with Network Time Security (NTS) enabled. This change addresses a long-standing security concern: unauthenticated NTP has been vulnerable to tampering that could affect certificate validation, audit logs, and distributed system coordination.

NTS adds TLS-based authentication to time synchronization, using port 4460/tcp for key exchange before standard NTP communication on 123/udp. 
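As a concrete illustration, NTS is requested per time source in chrony via the nts option on the server directive. The server name below is an example of a public NTS-capable source, not necessarily the default Ubuntu ships:

```
# /etc/chrony/chrony.conf (excerpt, illustrative)
# Request Network Time Security for this source; NTS-KE runs over 4460/tcp
server time.cloudflare.com iburst nts
```

The current NTS state of each source can then be inspected with chronyc authdata.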

Preparing for the quantum apocalypse

Ubuntu 25.10 includes preparations for quantum computing threats, thanks to the versions of OpenSSH and OpenSSL it ships. OpenSSH 10.0 now uses hybrid post-quantum algorithms by default for key agreement. No configuration is required: SSH connections automatically benefit from quantum resistance while maintaining compatibility with systems that don’t support these algorithms.
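For those who want to verify or pin this behaviour, the hybrid key-exchange algorithms can be stated explicitly in ssh_config. This is normally unnecessary on 25.10, since they are already preferred by default; the names below are the hybrid schemes OpenSSH itself defines:

```
# ~/.ssh/config (excerpt, illustrative; redundant with the OpenSSH 10.x defaults)
Host *
    KexAlgorithms mlkem768x25519-sha256,sntrup761x25519-sha512
```

Running ssh -Q kex lists every key-exchange algorithm the installed OpenSSH supports.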

OpenSSL 3.5.3 adds support for ML-KEM, ML-DSA, and SLH-DSA algorithms. The default TLS configuration prefers hybrid post-quantum KEM groups, balancing future security with present-day compatibility.

Note that OpenSSH 10.0 removes DSA support entirely. Systems still using DSA keys will need migration before they can connect to or from Ubuntu 25.10 systems.

Intel TDX and confidential computing

For those running sensitive workloads in the cloud, Ubuntu 25.10 ships with native support for Intel TDX (Trust Domain Extensions) host capabilities. This technology creates hardware-isolated virtual machines for confidential computing, well suited to data clean rooms and confidential AI workloads. The kernel ships with Intel TDX host support out of the box, setting the stage for confidential computing to become mainstream in the 26.04 LTS.

Security through modernization

Beyond the headline features, there’s a consistent theme of security through modernization:

  • Django updated to 5.2 LTS with improved security defaults
  • Systemd v257.9 with enhanced security features
  • Apache 2.4.64 with multiple security fixes
  • The entire toolchain has been rebuilt with GCC 15.2, providing better compile-time security checks

What to watch for

Some security features require careful deployment:

  • AppArmor profiles may unexpectedly affect operations in LXD containers
  • TPM-backed FDE has specific hardware requirements
  • The switch to OpenSSH 10.0 removes DSA support, which may affect legacy systems

Looking ahead

In all, the security enhancements and hardening measures delivered in Ubuntu 25.10 continue Ubuntu’s evolution toward delivering the most secure Linux experience. They lay the groundwork for Ubuntu 26.04 LTS, the next long-term supported release, where these technologies will mature into default, fully supported capabilities. Furthermore, security updates, compliance, hardening and kernel livepatching for 26.04 LTS will be covered for up to 12 years through Ubuntu Pro, extending Ubuntu’s track record as a securely designed foundation for developing and deploying modern Linux workloads.

We’re always refining Ubuntu’s security experience, and your input matters. To share feedback or join the conversation, visit Ubuntu’s Discourse page. If you’d like to discuss your deployment needs, please reach out via this contact form.

Stay secure, and happy upgrading.

21 October, 2025 08:05AM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 914

Welcome to the Ubuntu Weekly Newsletter, Issue 914 for the week of October 12 – 18, 2025. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • End of 10 Install Event Abingdon Library – 1st November 2025
  • A Cimeira do Ubuntu chama!
  • The Ubuntu UK Community Social is Back!
  • Ubuntu Summit 25.10 Extended & Release Party in Thessaloniki!
  • LoCo Events
  • Ubuntu Project docs: the final furlong!
  • Release 26.04 LTS without the ISO Tracker
  • Ubuntu worker nodes for Oracle OKE are now in Limited Availability
  • Patch Pilot Hand-off 26.04
  • Ubuntu Test Rebuilds
  • Call for testing: ubuntu-frame, mir-test-tools (Mir 2.23.0 update)
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 25.04 and 25.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • irihapeti
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


21 October, 2025 01:07AM by bashing-om

hackergotchi for Qubes

Qubes

XSAs released on 2025-10-21

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-475
    • Due to a bug, Viridian extensions are currently not enabled in Qubes OS. Although Viridian extensions are enabled in our libvirt config, this setting is mostly ignored by libvirt. While it is used when libvirt converts the XML config to the xl config format, it is not used when actually creating a VM. Advanced users who wish to confirm this on their own systems may do so by executing the command sudo xl list -l <NAME_OF_HVM> in dom0 and noting the absence of a "viridian": true line.

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

21 October, 2025 12:00AM

October 20, 2025

hackergotchi for Deepin

Deepin

2025 OSCAR Open Source Industry Conference: full agenda revealed

Sorry, this entry is only available in Chinese.

20 October, 2025 06:29AM by xiaofei

October 18, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Julian Andres Klode: Sound Removals

Problem statement

Currently if you have an automatically installed package A (= 1) where

  • A (= 1) Depends B (= 1)
  • A (= 2) Depends B (= 2)

and you upgrade B from 1 to 2, then the solver can either:

  1. Remove A (= 1)
  2. Upgrade A to version 2

If A was installed by a chain initiated by Recommends (say X Recommends Y, Y Depends A), the solver sometimes preferred removing A (and everything depending on it) rather than upgrading it.

I have a fix pending to introduce eager Recommends which fixes the practical case, but this is still not sound.

In fact we can show that the solver produces the wrong result for small minimal test cases, as well as the right result for some others without the fix (hooray?).

Ensuring sound removals is more complex, and it first raises the question: when is a removal sound? This, of course, is on us to define.

An easy case can be found in the Debian policy, 7.6.2 “Replacing whole packages, forcing their removal”:

If B (= 2) declares Conflicts: A (= 1) and Replaces: A (= 1), then the removal is valid. However, this is incomplete as well: suppose it instead declares Conflicts: A (< 1) and Replaces: A (< 1); the solution of removing A rather than upgrading it would still be wrong.

This indicates that we should only allow removing A if the conflicts could not be solved by upgrading it.

The other case to explore is package removals. If B is removed, A should be removed as well; however, if there is another package X that Provides: B (= 1) and it is marked for install, A should not be removed. That said, the solver is not allowed to install X to satisfy the dependency B (= 1), only to satisfy other dependencies [we do not want to get into endless loops where we switch between alternatives to keep reverse dependencies installed].

Proposed solution

To solve this, I propose the following definition:

Definition (sound removal): A removal of package P is sound if either:

  1. A version v is installed that package-conflicts with P.
  2. A package Q is removed and the installable versions of P package-depend on Q.

where the other definitions are:

Definition (installable version): A version v is installable if either it is installed, or it is newer than an installed version of the same package (you may wish to change this to accommodate downgrades, or require strict pinning, but here be dragons).

Definition (package-depends): A version v package-depends on a package B if either:

  1. there exists a dependency in v that can be solved by any version of B, or
  2. there exists a package C where v package-depends on C and every (c in C) package-depends on B (transitivity)

Definition (package-conflicts): A version v package-conflicts with an installed package B if either:

  1. it declares a conflict against an installable version of B; or
  2. there exists a package C where v package-conflicts with C, and b package-depends on C for all installable versions b of B.
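To make these definitions concrete, here is a toy sketch in Python. It is not apt’s implementation; the repository, the installed set, and the simplification of dependencies to bare package names (no version ranges) are all assumptions for illustration:

```python
# Toy sketch of the proposed definitions (not apt code). Dependencies are
# plain package names here; the post's version ranges are elided.

# Hypothetical repository: package -> {version: {"depends": [...]}}
REPO = {
    "A": {1: {"depends": ["B"]}, 2: {"depends": ["B"]}},
    "B": {1: {"depends": []}, 2: {"depends": []}},
}
INSTALLED = {"A": 1, "B": 1}

def installable_versions(pkg):
    # A version is installable if it is installed or newer than the installed one.
    inst = INSTALLED.get(pkg)
    return [v for v in REPO.get(pkg, {}) if inst is None or v >= inst]

def package_depends(pkg, ver, target, seen=frozenset()):
    # Clause 1: a dependency of (pkg, ver) solvable by target.
    # Clause 2: via some package C, every installable version of which
    #           package-depends on target (transitivity).
    if (pkg, ver) in seen:
        return False
    seen = seen | {(pkg, ver)}
    deps = REPO[pkg][ver]["depends"]
    if target in deps:
        return True
    return any(
        c in REPO
        and installable_versions(c)
        and all(package_depends(c, cv, target, seen)
                for cv in installable_versions(c))
        for c in deps
    )

def removal_sound_via(pkg, removed):
    # Clause 2 of "sound removal": removing `removed` justifies removing
    # `pkg` only if every installable version of `pkg` package-depends on it.
    versions = installable_versions(pkg)
    return bool(versions) and all(
        package_depends(pkg, v, removed) for v in versions)
```

On the problem statement’s example (both versions of A depend on B), removing B justifies removing A, while a mere upgrade of B provides no such justification, which is exactly the distinction the definitions are after.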

Translating this into a (modified) SAT solver

One approach may be to implement the logic in the conflict analysis that drives backtracking, i.e. we assume a package A and, when we reach not A, we analyse whether the implication graph for not A constitutes a sound removal, and then replace the assumption A with the assumption "A or learned reason".

However, while this seems a plausible mechanism for a DPLL solver, for a modern CDCL solver, it’s not immediately evident how to analyse whether not A is sound if the reason for it is a learned clause, rather than a problem clause.

Instead we propose a static encoding of the rules into a slightly modified SAT solver:

Given c1, …, cn that package-conflict with A and D1, …, Dn that A package-depends on, introduce the rule:

A unless c1 or c2 or ... cn ... or not D1 or not D2 ... or not Dn

Rules of the form A... unless B..., where A... and B... are CNF, are intuitively the same as A... or B...; however, the semantics here are different: we are not allowed to select B... to satisfy this clause.

This requires a SAT solver that tracks a reason for each literal being assigned, such as solver3, rather than a SAT solver like MiniSAT that only tracks reasons across propagation (solver3 may track A depends B or C as the reason for B without evaluating C, whereas MiniSAT would only track it as the reason given not C).

Is it actually sound?

The proposed definition of a sound removal may still prove unsound: I may have missed something in the definition that violates the goals I set out to achieve, or I may have missed some of the goals entirely.

I challenge you to find cases that cause removals that look wrong :D

18 October, 2025 07:37PM

October 16, 2025

hackergotchi for Pardus

Pardus

Minister of Health Kemal Memişoğlu Receives the Pardus Team at His Office

Minister of Health Prof. Dr. Kemal Memişoğlu received the Pardus team, together with managers from TÜBİTAK BİLGEM and the Open Source Technologies Unit, at his office. During the visit, he was briefed on the work under way to bring the national Pardus operating system into public hospitals. The Minister expressed his full support for the wider adoption of Pardus and instructed that action be taken in several provinces.

16 October, 2025 11:58AM by Hace İbrahim Özbal

hackergotchi for ZEVENET

ZEVENET

Enterprise Edition 10.0.13: Performance, Security, and a Modernized Interface

The latest SKUDONET Enterprise Edition update (10.0.13) brings meaningful improvements that strengthen performance, security, and usability—especially for administrators managing complex, high-demand environments. Rather than introducing a long list of minor adjustments, this version focuses on upgrades that directly impact daily operations.

Web Interface with Angular 20

The most visible change is the update of the web interface framework to Angular 20. This upgrade improves how the platform feels and performs from the very first interaction:

  • Faster rendering and smoother navigation, which is especially noticeable when managing multiple farms or large configurations.
  • Improved frontend security, thanks to the security patches and architectural updates included in the new Angular version.
  • A future-proof foundation for new interface capabilities and user experience improvements planned for upcoming releases.

For IT teams, this translates into a cleaner, more responsive environment that reduces friction during configuration, monitoring, and maintenance tasks.

Updated Linux Kernel for Stability and Long-Term Support

Another key improvement is updating the system kernel to version 6.1.153. This change brings important benefits in terms of stability and long-term support. The new version enhances compatibility with modern hardware and virtualized environments, ensuring more consistent performance across current infrastructures.

Security is also strengthened with the inclusion of recent patches that help protect production environments. While this update requires a system reboot, it reinforces the foundation for critical deployments and ensures more robust long-term operation.

Smoother Farm Operations and Real-Time Performance

Managing multiple farms in high-traffic environments demands efficiency. Version 10.0.13 introduces optimizations in two key areas:

  • Faster Farm Start and Stop Operations: The internal mechanisms that control farm services have been streamlined. The result is a noticeable reduction in the time it takes to start or stop farms—particularly valuable when working with clusters or multiple active instances.
  • Improved API Performance for Monitoring: The statistics API now processes requests with lower resource consumption and faster response times. This benefits dashboards, monitoring tools, and automated systems that rely on continual visibility of performance metrics.

Other Improvements to Take Into Account

Beyond the major updates, version 10.0.13 includes several enhancements that strengthen security, traceability, and system integration:

  • Cookie security improvements in HTTP/S farms with attributes such as HttpOnly, Secure, and SameSite.
  • Logging of all API login attempts for better auditability and threat detection.
  • More resilient rsyslog configuration when sending logs to external servers.
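For context, the cookie hardening in the first bullet corresponds to response headers of this shape (the cookie name and value are purely illustrative):

```
Set-Cookie: farmsession=<opaque-id>; HttpOnly; Secure; SameSite=Strict
```

HttpOnly keeps the cookie out of reach of page scripts, Secure restricts it to HTTPS, and SameSite limits cross-site sending.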

Two bug fixes have also been applied:

  • Correct handling of RBAC when copying farms as resource groups.
  • Accurate monitoring of the ssyncd service in HA environments.

Enterprise Edition version 10.0.13 delivers enhancements that improve security and provide a more responsive interface. With strengthened cookie handling, login audit logging, faster farm operations, and the updated GUI, administrators can manage complex environments more efficiently.

As always, plan a convenient maintenance window for the kernel update to ensure a smooth deployment. For guidance on upgrading or validating these improvements in your environment, our support team is available to assist.

If you work with SKUDONET Enterprise Edition or want to stay up to date with the latest technical updates, visit our Timeline.

If you’d like to experience these improvements firsthand, try the SKUDONET Enterprise Edition 30-day trial.

16 October, 2025 07:08AM by Nieves Álvarez

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E364 Guaxinim Resoluto E Outras Festas

Was Diogo’s throat acting up? Did Miguel miss the train? Blame the AI! Back from the Festa do Software Livre, we discussed some highlights of the magnificent event, poked fun at economists, at CP, and at Free Software people and communities who will be sending angry letters to the Podcast Ombudsman. We drooled over the new Raspberry Pi 500+; reviewed the latest news in Firefox 144, the latest 20.04 and 24.04 releases of Ubuntu Touch, and the big Intercidades party happening on 25 October in Lisbon and Porto; and we even discussed DRAMA around Canonical and Flatpaks, to set the circus tent on fire.

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality jingles were played live and without a net by Miguel, for which we apologize for any inconvenience caused. This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was commissioned from Shizamura, an artist, illustrator and comics author. You can get to know Shizamura better in Ciberlândia and on her website.

16 October, 2025 12:00AM

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Imgur unavailable in the UK

The image hosting service Imgur is no longer accessible in the UK. It's easy to do a web search on this topic, for example: https://help.imgur.com/hc/en-us/article … ed-Kingdom https://www.bbc.com/news/articles/c4gzxv5gy3qo

This affects BunsenLabs because the image upload utilities we ship use Imgur by default - both our in-house scripts and the xfce4-screenshooter.

Eventually, once we know this is permanent, we will have to provide different scripts, but in the meantime we request BunsenLabs forum users who want their images to be viewable in the UK to upload them to some other service.

16 October, 2025 12:00AM