November 26, 2025

ZEVENET

Community Edition: an easy-to-use Open Source Load Balancer

Community Edition is the open source load balancer from SKUDONET, designed for engineers and administrators who need traffic control without relying on costly or complex solutions.

Built on Debian 12.8, it provides a stable and secure operating system with performance far above what is typical in free software:

  • Up to 250,000 TCP requests per second (L4)
  • Up to 70,000 HTTPS requests per second (L7)

It is 100% free, fully open source, and compatible with Linux and Windows server environments.

For technical users looking for an open source load balancer with a Web GUI, easy to deploy and free of extra dependencies, this edition is an ideal starting point.

A Load Balancer with a GUI: traffic control without complexity

Most open source load balancers require manual configuration, editing text files, or navigating a steep learning curve. This is where SKUDONET Community Edition stands out.

The software includes a full Web GUI that allows you to:

  • Configure farms and services without touching the CLI
  • View backend health and real-time status
  • Adjust L4/L7 traffic distribution policies visually
  • Manage SSL/TLS certificates directly from the interface
  • Enable blocklists, basic DDoS protection, and simple filtering rules

This makes it highly accessible—even for small teams that need immediate results without spending hours reading advanced documentation.

SKUDONET Community Edition Open Source Load Balancer with web GUI

Technical Capabilities: From L4 Balancing to a Built-in WAF

From a technical perspective, SKUDONET CE includes features typically found only in commercial products:

1. L4 Load Balancing (NAT, DNAT, Stateless NAT, DSR)

Handles large volumes of TCP/UDP connections with minimal overhead.

2. L7 Load Balancing (Reverse Proxy)

Designed for web applications, APIs, and HTTP/HTTPS services.

3. Reverse Proxy with SSL/TLS support

Enables SSL termination directly in the load balancer.

4. Backend health checks

Ensures traffic is sent only to available backend servers.

5. Full Web GUI

A visual interface for managing all components without CLI.

6. REST JSON API

Full automation for CI/CD pipelines, hybrid infrastructures, or external integrations (a scripted sketch follows at the end of this section).

7. IPv6, dual stack, routing, and SD-WAN support

Suitable for modern and distributed network architectures.

8. Built-in essential security functions (including WAF)

Beyond traffic distribution, SKUDONET Community Edition includes fundamental security mechanisms:

  • Source-based blocking rules
  • Blocklists
  • Basic DDoS protection
  • A built-in basic Web Application Firewall (WAF) with fundamental rules

It is not a deep-inspection system like the Enterprise Edition, but it provides an initial layer of protection useful for development environments and small deployments. And if your infrastructure later requires deeper traffic inspection, advanced clustering, or professional support, you can migrate to SKUDONET Enterprise Edition, designed for critical environments.
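For a sense of how the REST JSON API can be scripted, here is a minimal sketch. The host, port, endpoint paths, and payload fields are assumptions modeled on a ZAPI-style interface, so treat it as illustrative and check the API documentation of your installation:

# Illustrative only: host, port, paths, and payload fields are assumptions.
# List the configured farms:
curl -k -H "ZAPI_KEY: $ZAPI_KEY" https://lb.example.com:444/zapi/v4.0/farms

# Create an L4 farm on a virtual IP and port:
curl -k -X POST https://lb.example.com:444/zapi/v4.0/farms \
  -H "ZAPI_KEY: $ZAPI_KEY" -H "Content-Type: application/json" \
  -d '{"farmname": "webfarm", "profile": "l4xnat", "vip": "192.0.2.10", "vport": "80"}'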

Where to download SKUDONET Community Edition

SKUDONET Community Edition is distributed for free through SourceForge, where you can download the ISO or installation packages for your test or development environment.

26 November, 2025 03:05PM by Nieves Álvarez

Proxmox VE

Proxmox Backup Server 4.1 released!

We're pleased to announce the release of Proxmox Backup Server 4.1.

This version is based on Debian 13.2 (“Trixie”), uses Linux kernel 6.17.2-1 as the new stable default, and comes with ZFS 2.3.4 for reliable, enterprise-grade storage and improved hardware support.

Here are the highlights:
  • User-based traffic control for more fine-grained bandwidth management across backup and restore operations
  • Configurable parallelism for verify jobs to optimize runtimes and balance...

Read more

26 November, 2025 12:30PM by t.lamprecht

November 24, 2025

GreenboneOS

Greenbone Helps Defend Against Advanced Social Engineering Attacks

Urgency, fear, curiosity, trust, greed, sympathy — social engineering has been wildly successful in exploiting human emotions in cyber attacks, and breach analysis reports consistently identify it as a top root cause of breaches and one of the top initial-access techniques. The recent rise in AI-enabled phishing […]

24 November, 2025 11:23AM by Greenbone AG

Ubuntu developers

Ubuntu Blog: AMI and Canonical announce partnership

The collaboration makes it easy to boot directly into Ubuntu from AMI’s UEFI firmware solutions

Nuremberg, Germany, November 24, 2025 – Today, Canonical, the publisher of Ubuntu, announced a partnership with AMI, a provider of Unified Extensible Firmware Interface (UEFI) solutions. The partnership will enable users of AMI’s Aptio® V UEFI Firmware to netboot directly into Ubuntu by simply selecting Ubuntu Cloud Installation in the boot menu.

This new native boot functionality makes it easy and convenient to use Ubuntu, and eliminates the need for flashing images or using additional media or external devices. A simple Ethernet connection is enough to install and launch Ubuntu.

Alexander Lehmann (Sales Director – IoT, Canonical) and B. Parthiban (General Manager, Boot Firmware Group at AMI) are excited to provide users with the best out-of-the-box experience for Ubuntu.

“At AMI, we value partnerships that strengthen the ecosystem and deliver trusted solutions. Canonical’s widely adopted, community-supported platform is recognized for its stability and reliability, making this collaboration a natural fit,” commented B. Parthiban, General Manager, Boot Firmware Group at AMI. “Together, we’re enabling secure, high-performance experiences for customers everywhere.”

“Our collaboration with AMI furthers our commitment to deliver the best Ubuntu experience right out of the box. It’s now even easier to install Ubuntu,” said Alexander Lehmann, Sales Director – IoT, at Canonical. 

The collaboration between Canonical and AMI kicks off at SPS – the Smart Production Solutions summit  – in Nuremberg from November 25 to 27, 2025. 

To find out more about Ubuntu, visit Canonical’s booth in hall 6, number 112 and AMI’s booth in hall 6, number 223.

* * *

About Canonical 

Canonical, the publisher of Ubuntu Pro, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/  

About AMI 

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world’s compute platforms from on-premises to the cloud to the edge. AMI’s industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. AMI is a registered trademark of AMI US Holdings, Inc. Aptio is a registered trademark of AMI in the US and/or elsewhere.

24 November, 2025 09:41AM

Ubuntu Blog: The $8.8 trillion advantage: how open source software reduces IT costs

Open source software is known for its ability to lower IT costs. But in 2025, affordability is only part of the story. A new Linux Foundation report, The strategic evolution of open source, reveals that open source has evolved from a tactical cost-saving measure to a mission-critical infrastructure supporting enterprise-grade investments, and delivering stronger business outcomes as a result.

This transformation is supported by academic research estimating that, without open source, companies would pay roughly 3.5 times more to build the software running their businesses – an $8.8 trillion increase.[1]

Open source: from “free alternative” to core infrastructure

The 2025 World of Open Source Survey by the Linux Foundation reveals that open source is deeply embedded across enterprise technology stacks, making it a foundation for global IT operations. In fact, over 55% of analyzed tech stacks used a Linux-based operating system; similarly, around half of all analyzed cloud, container, and DevOps technologies have Linux at their core.

The survey illuminates the many great reasons businesses are choosing open source: improved productivity, reduced vendor lock-in, and, unsurprisingly, lower total cost of ownership (TCO). Nearly half of organizations (46%) report an increase in business value from open source compared to last year, with 83% considering it valuable for their future. According to the World of Open Source Survey, 58% of organizations reported lower software ownership costs, and 63% cited higher productivity as a direct benefit of adopting open source. In addition, 62% reported reduced vendor lock-in and 75% judged their software quality to be higher thanks to OSS. Overall, 56% said the benefits of OSS exceeded the costs.[3] A Gartner study echoes these findings, showing that cost control and application development flexibility remain the top drivers of open source adoption.[2] 

And it’s not just about the costs: organizations that invest strategically in open source are 20% more likely to perceive competitive advantage, while 78% report workplace satisfaction and better talent attraction. Nearly 80% say open source makes their organization a better workplace, and 74% say it improves their ability to attract technical talent. 

One respondent put it this way: “Open source is not supplementary tooling but an ecosystem of core infrastructure dependencies.” This captures the shift perfectly: cost savings may start the conversation, but reliability, flexibility, and long-term value now drive adoption.

Why open source reduces IT costs and keeps cutting them

The same characteristics that make open source adaptable also make it economical:

  • No per-seat licensing: organizations avoid scaling costs tied to user counts or cores
  • Modular adoption: businesses can deploy only what they need, minimizing waste
  • Shared innovation: security fixes, feature improvements, and bug patches benefit from collective community investment
  • Interoperability and exit freedom: avoiding proprietary lock-in reduces switching costs and enables infrastructure that fits business strategy rather than the vendor’s roadmap

Systems based on open source tend to have lower maintenance overhead and longer life cycles, advantages that compound fast. That’s why enterprises see real savings, not just from shifting license costs to labor, but through genuine efficiency gains across teams.

Here’s a real-world case study of that in action: Greek telecom leader Nova leveraged Canonical’s planning and open pricing to control its CAPEX and OPEX, benefiting from predictable costs and freedom from management software licensing fees. With Canonical’s support, Nova paid for “real expertise that enriches our team, rather than paying for access.”[3]

Open source is also the backbone of AI, making it easier to adopt this increasingly must-have technology into business operations. McKinsey research highlights how open source frameworks accelerate AI adoption, enable faster product development, and catalyze ecosystem innovation, amplifying the total value beyond mere cost savings.[4] In fact, the LF’s survey found that AI is the technology that benefits the most from being open source, according to 38% of respondents, and research from the Microsoft AI cloud Partners team showed that Linux environments such as Ubuntu deploy 63% faster with up to 306% ROI over three years.[5]

Mission-critical workloads demand enterprise-ready support

The data is clear: open source software lowers IT costs, but cost benefits only reach their full potential when paired with enterprise-ready support. For technical audiences, this isn’t about “just” having a backstop: it’s about operational excellence, security, and resilience. When issues do arise, they must be addressed quickly and precisely.

The survey shows 71% of organizations expect response times under 12 hours for critical OSS production issues, marking a shift from traditional community support to commercial-grade service-level agreements. In financial services and manufacturing, over 90% consider paid OSS support essential. This need for enterprise-grade support peaks in mission-critical workloads (54%), systems handling sensitive data (43%), and regulated sectors (38%).

There’s a perception that support is ‘too expensive’, but quite to the contrary, paid commercial support does not diminish open source’s cost benefits; instead, it enhances them. Just like OSS adoption saves on costs and licences, robust support services protect organizations against the potentially disastrous costs of downtime, compliance failures, or data breaches. 

Canonical’s own experience confirms that long-term OSS support is an increasingly strategic investment, especially in markets with high regulatory demands and cloud migration complexities.[6]

Take, for example, The European Space Agency (ESA), which depends on Canonical’s distributions of Kubeflow and Spark running on Kubernetes for its mission operations. ESA highlights that Canonical’s support lets them “sleep soundly,” focusing on space missions while trusting infrastructure experts for uptime and reliability.[3]

How Ubuntu Pro locks in the value of open source

The biggest IT cost benefits of open source software come when free software innovation is combined with investments in professional support. After all, these low-cost (or sometimes free) tools are highly accessible and often intuitive to build with, but they can take a lot of time, effort, and specialized skills to maintain and secure in the long term.

Canonical takes away that time-consuming effort from developers, and allows them to focus on building, through Ubuntu Pro + Support, our comprehensive security maintenance and support service.

 Ubuntu Pro + Support gives users a wide range of benefits, including:

  • Up to 15 years of security maintenance and support covering thousands of open source components from the kernel to the applications layer.
  • Compliance-ready patching for mission-critical, regulated, and sensitive workloads.
  • Predictable enterprise SLAs aligned with the sub–12-hour incident response expectations of 71% of organizations.
  • Transparent, forecastable total cost of ownership, eliminating license uncertainties.

Ubuntu Pro extends cost benefits beyond licensing into comprehensive lifecycle management, turning open source affordability into sustained business value.

Open source is an economic strategy, not a shortcut

In conclusion, the business benefits of open source are clear to see, and widely reflected in the business landscape, where record numbers of organizations and tech stacks have open source as a core part of their mission-critical systems. The permissive licences, lack of vendor lock-in, and flexibility of open source make it a clear cost optimizer; but the most significant IT cost savings emerge when organizations combine free software innovation with enterprise-grade support, governance, and active engagement. Those who treat open source as core infrastructure aren’t just saving money: they’re building competitive, secure, and innovative foundations for growth.

$8.8 trillion – that’s what open source is worth to the global economy. If you’re not building on it, you’re paying for it somewhere else. The organizations leading in innovation, efficiency, and resilience already know: open source is the foundation of competitive advantage.

Sources

  1. Open Source Software: The $9 Trillion Resource Companies Take for Granted, HBS
  2. Top challenges to using Open-source for product and application development, Gartner
  3. What’s the state of open source adoption in Europe?, Ubuntu blog
  4. Open source technology in the age of AI, McKinsey
  5. IDC Business Value Study: A 306% ROI within 3 years using Ubuntu Linux on Azure, Microsoft Azure
  6. 54% of European enterprises want long term open source support: how Ubuntu Pro + Support delivers, Ubuntu blog
  7. The value of open source software is more than cost savings, Linux Foundation

24 November, 2025 09:40AM

November 21, 2025

Ubuntu developers

Ubuntu Blog: Open design: the opportunity design students didn’t know they were missing

What if you could work on real-world projects, shape cutting-edge technology, collaborate with developers across the world, make a meaningful impact with your design skills, and grow your portfolio… all without applying for an internship or waiting for graduation?

That’s what we aim to do with open design: an opportunity for universities and students of any design discipline. 

What is open design, and why does it matter?

Before we go further, let’s talk about what open design is. Many open source tools are built by developers, for developers, without design in mind. When open source software powers 90% of the digital world, that leaves everyday users feeling overwhelmed or left out. Open design aims to bridge that gap.

We aim to introduce human-centred thinking into open source development, enhancing these tools to be more intuitive, inclusive, and user-friendly. Most open source projects focus on code contributions, neglecting design contributions. That leaves a vast number of projects without a design system, accessibility audits, or onboarding documentation. That’s where designers come in, helping shape better user experiences and more welcoming communities.

Open design is about more than just aesthetics. Open design helps to make technology work for people; that’s exactly what open source needs. Learn more about open design on our webpage.

We want to raise awareness of these projects, the problems that currently exist, and how we can fix them together, and to encourage universities and students to become advocates of open design.

We want universities to connect their students to real-world, meaningful design opportunities in a field that is currently lacking the creativity of designers. Our goal is to help and motivate students to bring their design skills into open source projects and become advocates, to make open design accessible, practical, and empowering! 

How Canonical helps universities access open design

We want to help universities help students to access:

  • Real-world experiences: Students apply their design skills to global projects to create valuable, demonstrable outcomes, beyond hypothetical briefs
  • Interdisciplinary growth: Empower students to gain collaborative experience with developers, and navigate real tech workflows
  • Accessible opportunities: No interviews, no barriers; just impact, experience, and learning

We have provided universities with talks and project briefs, enabling them to prepare students to utilise their expertise and design a brighter future for open source. If you’re a department leader, instructor, or coordinator, exploring open source and open design will help you to give your students unique access to industry-aligned experiences, while embedding values of collaboration, open contribution, and inclusive design.

Why should students care?

If you’re a student in UX, UI, interaction, service, visual, HCI design, or any other field with design influence, you’ve been told how important it is to build your portfolio, gain hands-on experience, and collaborate with cross-functional teams. Open design is your opportunity to do so.

The best part is, you don’t have to write a single line of code to make a difference! Open source projects are looking for:

  • UX/UI improvements
  • Accessibility and heuristic audits
  • User research and persona development
  • User flows and wireframes
  • Information architecture reviews
  • Design documentation and feedback systems

If you’re in a design course, you already have, or are developing, the skills that open-source projects need. 

Open design is an opportunity to develop by collaborating across disciplines, navigating ambiguity, and advocating for users: skills employers value. With open design, you’ll gain confidence in presenting ideas, working with international teams, and handling feedback in a real-world setting, growing in ways that classroom projects and internships often don’t offer.

If you’re aiming for a tech-focused design career, open design is one of the most impactful and distinctive ways to stand out!

How can you start?

Getting started is easier than you think, even if GitHub looks scary at first. Here’s how:

  1. Learn the basics of GitHub

We’ve made a video guide to understanding GitHub, and curated a list of other videos to get to grips with GitHub.

  2. Find a project on contribute.design

It’s like a job board for design contributions. These projects are waiting for you. 

  3. Understand the project’s needs

Most projects on contribute.design list what they’re looking for in a .design file or DESIGN.md guidelines.

  4. Pick an issue, or propose your own

Navigate to the Issues tab of the project repo, where you can filter for issues labelled for design. You can also use this tab to propose any issues you discover in the project.

  5. Contribute, collaborate, grow

Start adding your ideas, questions, and solutions to issues. You’ll be collaborating, communicating, and making meaningful contributions.

You can explore more projects through the GitHub Explore page, but not every project will have a design process in place; that’s where your skills are especially valuable. If you don’t see design issues, treat the project as a blank canvas. Suggest checklists, organise a design system, or improve documentation. The power is in your hands!

Reach out to maintainers, join community discussions, and don’t hesitate to introduce design-focused thinking. Your initiative can spark meaningful change and help open source become more user-friendly, one project at a time.

View every project as an opportunity; you don’t need an invitation to contribute, just curiosity, creativity, and the willingness to collaborate.

Interested? 

We’re looking for universities and departments interested in introducing open design to their students. Whether that’s through a talk, module project briefs, or anything else you’d like to see, we’re excited to find ways to work together and bring open design to campus.

Are you a program director, a design department, a student group, or an interested student? Let’s talk!

Reach out at opendesign@canonical.com

21 November, 2025 09:39AM

Deepin

(Chinese) RK3588 adaptation for deepin 25: hands-on tutorial (with flashing guide)

Sorry, this entry is only available in Chinese.

21 November, 2025 08:33AM by xiaofei

Ubuntu developers

Ubuntu Blog: Anbox Cloud 1.28.0 is now available!

Enhanced Android device simulation, smarter diagnostics, and OIDC-enforced authentication

The Anbox Cloud team has been working around the clock to release Anbox Cloud 1.28.0! We’re very proud of this release that adds robust authentication, improved diagnostic tools, and expanded simulation options, making Anbox Cloud even more secure, flexible, and developer-friendly for running large-scale Android workloads.

Let’s go over the most significant changes in this new version.

Strengthened authentication and authorization

Our OpenID Connect (OIDC)-based authentication and authorization framework is now stable with Anbox Cloud 1.28.0. This new framework provides a standardized approach for controlling access across web and command-line clients. Operators can now assign permissions through entitlements with fine-grained control, define authorization groups, and create and manage identities.

The new guides included with this release thoroughly cover configuring user permissions, understanding the idea of identities and groups, and the entire list of available entitlements. This represents a significant advancement toward a more uniform and standards-based access model for all Anbox Cloud deployments.

Simulated SMS support

This is one of our most exciting new features: developers testing telephony-enabled applications in Anbox Cloud can now simulate incoming SMS messages using the Anbox runtime HTTP API. 

This new functionality allows messages to trigger notifications the same way they would on a physical device, generating more realistic end-to-end scenarios. A new how-to guide in our documentation provides detailed instructions on how to enable and use this feature.
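As a rough sketch of what calling that HTTP API could look like (the endpoint path, port, and payload fields below are hypothetical placeholders; the how-to guide in the documentation is the authoritative reference):

# Hypothetical endpoint and payload shape, for illustration only.
curl -X POST http://<instance-address>:8888/1.0/sms \
  -H "Content-Type: application/json" \
  -d '{"from": "+15550100", "text": "Your one-time code is 123456"}'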

Protection against accidental deletions

Because we know accidents happen (especially in production environments…), this release reduces operational risk by introducing the ability to protect instances from accidental deletion. This option can be enabled directly in the dashboard either when creating a new instance or later from the Instance details page under the Security section.

Once this protection option is turned on, the instance cannot be deleted, even during bulk delete operations, until the configuration is reset. This simple safeguard helps operators preserve important data and prevents costly mistakes in busy environments.

Improved ADB share management

Working with ADB (the Android Debug Bridge) has also become more flexible. Anbox Cloud now allows up to five ADB shares to be managed directly from the dashboard. For those who prefer the command line, the new amc connect command provides an alternative to the existing anbox-connect tool. Together, these improvements make it easier for developers to manage and maintain multiple debugging or testing sessions at once.
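A possible workflow, sketched with an assumed argument shape (the release notes name `amc connect` but not its exact syntax):

# Assumed invocation shape; consult the amc documentation for the real syntax.
amc connect <instance-id>
adb devices    # the new ADB share should now show up as a connected device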

New diagnostic facility for troubleshooting

With version 1.28.0, we’re introducing a new diagnostic facility in the dashboard. This tool is designed to simplify troubleshooting for both the instances and the streaming sessions themselves. 

This feature helps collect relevant diagnostic data automatically, thereby reducing the work needed to identify and resolve issues. It also makes collaboration with our Canonical support teams more efficient, as users can now provide consistent and accurate diagnostic information in a structured, standard format.

Sensor support in the Streaming SDK

Here’s another hotly anticipated feature: the Anbox Streaming SDK gains expanded sensor support in this release. Our SDK now includes gyroscope, accelerometer and orientation sensors, allowing developers to test applications more interactively. 

Sensor support is disabled by default but can be easily enabled in the streaming client configuration. This addition opens up new possibilities for interactive use cases, such as gaming.

Upgrade now and stay tuned!

We think that Anbox Cloud 1.28.0 is our best release to date, and we are pleased to keep providing a feature-rich, scalable, and secure solution for managing Android workloads at scale.

This latest version makes it easier than ever for developers and operators to create and test Android apps by introducing more precise device simulation, improved troubleshooting tools, and stricter access controls, as we’ve explained above.

Try it now and stay tuned for further developments in our upcoming releases. For detailed instructions on how to upgrade your existing deployment, please refer to the official documentation.

Further reading

Official documentation
Anbox Cloud Appliance
Learn more about Anbox Cloud or contact our team to discuss your use case


Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

21 November, 2025 08:00AM

November 20, 2025

Balint Reczey: Think you can’t interpose static binaries with LD_PRELOAD? Think again!

Well, you are right, you can’t. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.

But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc’s syscall – well, at least sometimes. Sometimes syscalls just bypass libc.

The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!

$ faketime '2008-12-24 08:15:42'  qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime 
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...

With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.

There is one more problem though. Why would the static binaries deep in the build be run by QEMU? Firebuild also intercepts the `exec()` calls and now it rewrites them on the fly whenever the executed binary would be statically linked!

$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
    "[FBBCOMM_TAG]": "exec",
    "file": "test_static",
    "// fd": null,
    "// dirfd": null,
    "arg": [
        "./test_static"
    ],
    "env": [
        "SHELL=/bin/bash",
 ...
        "FB_SOCKET=/tmp/firebuild.cpMn75/socket",
        "_=./test_static"
    ],
    "with_p": false,
    "// path": null,
    "utime_u": 0,
    "stime_u": 1017
}
FIREBUILD: -> proc_ic_msg()  (message_processor.cc:782)  proc={ExecedProcess 161077.1, running, "bash -c 
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD:   -> send_fbb()  (utils.cc:292)  conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
    "[FBBCOMM_TAG]": "rewritten_args",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn()  (firebuild.cc:139)  listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
    "[FBBCOMM_TAG]": "scproc_query",
    "pid": 161077,
    "ppid": 161073,
    "cwd": "/home/rbalint/projects/firebuild/test",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "env_var": [
        "CCACHE_DISABLE=1",
...
        "SHELL=/bin/bash",
        "SHLVL=0",
        "_=./test_static"
    ],
    "umask": "0002",
    "jobserver_fds": [],
    "// jobserver_fifo": null,
    "executable": "/usr/bin/qemu-user-interposable",
    "// executed_path": null,
    "// original_executed_path": null,
    "libs": [
        "/lib/x86_64-linux-gnu/libatomic.so.1",
        "/lib/x86_64-linux-gnu/libc.so.6",
        "/lib/x86_64-linux-gnu/libglib-2.0.so.0",
        "/lib/x86_64-linux-gnu/libm.so.6",
        "/lib/x86_64-linux-gnu/libpcre2-8.so.0",
        "/lib64/ld-linux-x86-64.so.2"
    ],
    "version": "0.8.5.1"
}

The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit — not just Firebuild.

For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously “invisible” steps in your builds? All now fair game for caching.

Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you’re using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.
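For Ubuntu, the setup could look like this; the PPA name is an assumption based on the project’s packaging, so verify it before use:

$ sudo add-apt-repository ppa:firebuild/stable    # PPA name assumed
$ sudo apt update && sudo apt install firebuild
$ firebuild make    # wrap your build; run it again to hit the cache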

Static binaries, welcome to the party!

20 November, 2025 08:56PM

VyOS

VyOS SSO explained: how accounts, logins, and permissions work now

After introducing Single Sign-On (SSO) for VyOS services, we received a number of questions about how it works in practice. This follow-up post explains what SSO is, how it relates to your existing VyOS accounts, and what to keep in mind when using it day to day. For rollout dates and migration details, please refer to our initial SSO announcement.

20 November, 2025 03:03PM by Taras Pudiak (taras@vyos.io)

GreenboneOS

October 2025 Threat Report

Just over 4,100 new CVEs emerged in October 2025, representing new attack surfaces and placing pressure on defenders to identify and patch. For operational resilience, organizations need to scan their IT infrastructure often and prioritize mitigation efforts. A free trial of Greenbone’s OPENVAS BASIC lets defenders scan their enterprise IT estate and stay on top […]

20 November, 2025 08:45AM by Joseph Lee

November 19, 2025

Proxmox VE

Proxmox Virtual Environment 9.1 available!

We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.

This release is based on Debian 13.2 "Trixie", but we're using the newer Linux kernel 6.17.2 as the new stable default. In addition to the main system enhancements, this update incorporates the latest versions of core technologies, including QEMU 10.1.2, LXC 6.0.5, ZFS 2.3.4, and Ceph Squid 19.2.3, all fully...

Read more

19 November, 2025 12:52PM by t.lamprecht

November 18, 2025

Ubuntu developers

Ubuntu Blog: 83% of organizations see value in adopting open source, but report major gaps in security and governance

A new Linux Foundation report reveals how organizations worldwide are adopting, using, and perceiving open source software.

The Linux Foundation’s latest report, The state of global open source, has just been released in collaboration with Canonical. The report follows the Linux Foundation’s European spotlight report, released earlier this year, and confirms that many of the trends the European spotlight report unveiled are true on a global scale. In particular, the global spotlight report confirms the role of open source software as the foundation of business-critical systems worldwide, and indicates a continued increase in adoption. However, organizations continue to lack the governance, security testing, and strategic maturity required to manage open source strategically and securely. 

The report suggests that most organizations expect enterprise-grade performance from open source software, but under-invest in the required governance frameworks, security practices, and community engagement. 

83% of organizations acknowledge open source is valuable to their future

According to the report, the trend of increasing open source adoption in the enterprise is set to continue, as 83% of enterprises consider open source software adoption valuable to their future. Likewise, the report reveals the centrality of open source software to the modern enterprise. Globally, enterprises have adopted open source software throughout their technical stacks: 55% have adopted open source operating systems, whilst 49% have adopted open source cloud and container technologies, and 46% open source web and application development. 

The widespread confidence that open source will play a pivotal role in many organizations’ futures is closely connected to a growing understanding of the benefits of open source software adoption.   

86% report open source software improves productivity

This report confirms a shift in enterprises’ strategic mindset around open source: 82% of respondents considered open source as an asset that enables innovation. Historically, open source software was often reserved for specific projects or use cases, like setting up web servers – with wider organizational use being viewed with some scepticism. 

Open source is now a “must-have.” Why is this the case? Here’s what the respondents had to say: 

  • 86% stated that open source improves productivity 
  • 79% reported improved software quality as a result of open source
  • 78% highlighted improved security

Compared to the benefits seen by organizations using open source software in 2024, 46% reported increased business value from open source over the past year. The growing interest in and use of open source technologies is particularly clear for certain technologies, like AI.

AI technologies benefit most from being open source

The growing value of open source can partly be attributed to the influence of AI. Since 2024, adoption of open source AI and machine learning (ML) applications has increased from 35% to 40% – a rise of five percentage points. Globally, AI and ML were perceived to be the technology most benefiting from being open source. Code visibility ensures organizations can more easily audit their AI systems, which makes compliance simpler, provides more transparency into how the AI model functions, and enables companies to run the AI on their own infrastructure – ensuring sensitive data never leaves the organization’s control.

With growing adoption of AI and ML come new cybersecurity risks and requirements. However, the report indicates that organizations currently lack mature governance structures for their open source estates, creating additional complications to adopting AI and ML securely.

Lack of mature governance: only 34% of organizations have defined a clear open source strategy

Despite increasing adoption of open source technologies, many organizations still lack a mature governance strategy for their open source software. 

The number of organizations that have defined a clear open source strategy has grown by just two percentage points in the last year, to a total of 34%. That means nearly two-thirds of organizations instead rely on informal governance of their open source estates, primarily due to budget constraints, shifting priorities, and new strategic requirements. For example, when evaluating open source components for adoption:

  • 44% of organizations check the activity level of the project community
  • 31% use automated security testing tools
  • 28% manually review the source code
  • 36% evaluate the direct dependencies of the open source component

With less than half of organizations taking these formal steps before adoption, the report indicates that this “creates significant risk exposure and limits organizations’ ability to capture the full strategic value of open source participation,” signalling that this is a concern that organizations must take seriously.

Similarly, organizations demonstrate a lack of consensus around which security features and assurances matter to them when adopting open source components, with no single certification or assurance mechanism achieving adoption by more than a quarter of open source solutions. More than a quarter of organizations (28%) don’t know which assurances would make them more likely to trust an open source solution. This opens them up to serious security risks, like supply chain attacks.

As a result, enterprises are increasingly turning to paid support options for their open source estates. 

54% view paid support as essential for mission-critical workloads

More than half of respondents consider paid support for their open source essential. As open source technologies have become critical to business infrastructure, expectations for open source software support are beginning to mirror that of commercial software standards: 

  • 71% of organizations expect response times of less than 12 hours from support providers
  • 47% expect rapid security patching for open source software in production environments
  • 53% expect long term support guarantees for their open source software. 

Acquiring paid support for open source software makes this level of support achievable, which organizations broadly accept. On a granular level, the industries with the highest proportion that consider paid support essential are those that process sensitive or valuable data, such as manufacturing (97%), followed by financial services (96%), government (92%), and IT (91%).

Conclusion and recommendations

The Linux Foundation’s The state of global open source reveals that enterprises are relying on open source software and perceiving its benefits. However, increasing engagement with open source communities, more structured governance of open source estates, and structured security evaluations of open source elements before adoption will help organizations to strengthen the resilience of their open source infrastructure. 

18 November, 2025 04:10PM

GreenboneOS

Greenbone Adds New Compliance Profiles for Huawei EulerOS

Greenbone is excited to announce new compliance policies for Huawei’s EulerOS and openEuler. These compliance policies are the result of close collaboration with Huawei to provide OPENVAS SCAN users with authenticated checks for over 200 key security controls. By thoroughly vetting security settings, defenders gain a high degree of security assurance and visibility into the security posture […]

18 November, 2025 01:19PM by Greenbone AG

ZEVENET

The Importance of Reload in Web Service Continuity

Today, virtually every business depends on its online infrastructure. Companies manage multiple websites, internal applications, or e-commerce platforms that must remain available at all times.

As these environments grow, maintenance and stability become as critical as security — and every change must be applied without affecting the user experience. Maintaining uptime during configuration changes requires solutions that support reload configuration without downtime.

Even a minimal interruption —a restart, a reconnection, or a lost session— can translate into frustrated customers, disrupted operations, or lost revenue. And in an environment where reputation depends on availability, these incidents can have visible consequences.

Keeping services running while updating configurations or network policies is no longer just a technical challenge — it’s an operational requirement.

Network Maintenance: When Stopping Is No Longer an Option

In daily practice, administrators plan maintenance and updates through a controlled service restart. They pick a low-traffic time, apply the changes, and assume a brief service interruption.

Although the process is fast and almost imperceptible to users, it still implies a short disruption. This method works well in small environments, where a few seconds of downtime go unnoticed.

The challenge arises when managing dozens or hundreds of sites simultaneously — as in the case of service providers, public administrations, or hosting environments.

In those scenarios, even a short interruption can have a direct impact:

  • A customer abandons a purchase because the e-commerce platform stops responding.
  • An API session interrupts an ongoing transaction.
  • An application stops synchronizing real-time data.
  • Monitoring systems detect the failure and generate unnecessary alerts.

That’s why, in modern web traffic and service management, the ability to apply changes without restarting has become an essential function.

This is where the concept of reload becomes relevant.

Reload Configuration Without Downtime

This feature allows changes to be applied without restarting active processes — reloading the configuration in the background while keeping all connections open.

In other words, reload replaces the classic “stop and restart” with a smarter process where the system updates its configuration, preserves active sessions, and ensures that traffic continues to flow without interruptions. Technically, reload forces a re-read of the service configuration in memory, synchronizing new rules, policies, or certificates without closing sockets or terminating active connections.

From a technical point of view, it may seem like a small change, but in practice it represents a qualitative leap for any infrastructure that requires true continuity.

The difference between restart and reload is simple: one stops and starts again, the other updates without stopping.
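As a generic illustration (not SKUDONET-specific, and using a placeholder service name), this is how the two operations typically differ for a daemon that re-reads its configuration on SIGHUP:

# Restart: the process stops and starts again; active connections are dropped.
sudo systemctl restart myproxy.service

# Reload: the running process re-reads its configuration; sockets stay open.
sudo systemctl reload myproxy.service

# Low-level equivalent for daemons that reload on SIGHUP:
sudo kill -HUP "$(pidof myproxy)"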

When Does Updating Without Downtime Become Essential?

Not every business requires the same level of continuity, but there are scenarios where reload makes a clear difference:

  • E-commerce and financial services: every second of downtime can directly result in lost sales or trust.
  • Providers managing multiple websites: when hundreds of domains or applications are handled at once, a restart could leave one of them hanging.
  • Platforms with frequent updates: where traffic rules, certificates, or configurations are adjusted several times a day.
  • Systems with real-time traffic: where every connection or active process must remain uninterrupted.

In these cases, reload isn’t a technical luxury — it’s what allows an infrastructure to keep operating without visible interruptions while evolving internally.

How SKUDONET Ensures Updates Without Interruptions

At SKUDONET, we believe that availability should never be sacrificed each time a configuration is updated.

That’s why the system allows services and load-balancing policies to be reloaded without restarting, keeping traffic flowing smoothly even in high-demand environments.
This means that:

  • New settings are applied immediately.
  • Active sessions remain connected.
  • No packet loss or connection drops occur.
  • Performance remains stable throughout the operation.

Thanks to this capability, SKUDONET helps organizations maintain full service availability, even in continuous activity environments or when managing dozens of simultaneous applications.

Everything is managed from a single visual interface, with unified metrics, logs, and events that simplify control without manual processes or scheduled restarts.

SKUDONET Enterprise Edition combines load balancing, security, and traffic inspection in a single platform — designed for business environments that can’t afford downtime but still need to evolve quickly.

👉 Want to try SKUDONET Enterprise Edition with all its features, including reload configuration without downtime? Request your free 30-day trial here.

18 November, 2025 08:00AM by Nieves Álvarez

November 17, 2025

SparkyLinux

Annual Server Fundraiser 2025

Dear Friends! It’s time for our annual fundraiser for our servers. So let’s get started! By January 15, 2026, we need to raise €510 to pay for the servers, plus a minimum of €1100 for our monthly payments: domains, internet, electricity, gas, water, fuel, rent, medications, and life, which is getting more expensive and difficult each month. We also have non-monthly but equally important…

Source

17 November, 2025 09:42PM by pavroo

Ubuntu developers

Ubuntu Blog: Everything you need to know about FIPS 140-3 on Ubuntu | Videos

FIPS 140 is a highly demanding security standard that’s mandatory for almost all high-security and federal environments. It can be hard to get right and may be a daunting part of the journey for those trying to meet compliance requirements like FedRAMP or CMMC. We get a lot of questions about FIPS 140-3, and so we decided to put together this comprehensive collection of video resources to answer the most burning ones we’ve had so far. 

In this collection, you’ll be able to get answers to the most frequently asked FIPS questions, including:

  • How to enable FIPS 140-3 on Ubuntu 22.04
  • How to check if you’re operating in FIPS mode
  • How to enable FIPS on public clouds: AWS, Azure, GCP
  • Which modules and hardware have been FIPS 140-3 certified for Ubuntu 
  • Which FIPS-enabled Docker containers are available in Iron Bank 
  • What are the most common issues when enabling FIPS 140-3 

How to enable FIPS on Ubuntu?

We’ll start with the most common question: how do you enable FIPS on Ubuntu? The basic prerequisite is an Ubuntu Pro subscription, which is available either free for personal use or with a 30-day free trial for enterprise users. After subscribing, you’ll get access to a dashboard where you can find a token that you can attach to an Ubuntu instance and get access to the FIPS certified modules. All you need to do is open your terminal and enter the following commands: 

sudo pro attach <token>

sudo pro enable fips-updates

sudo reboot

You should see output like the following, indicating that the FIPS packages have been installed:

Installing FIPS Updates packages

FIPS Updates enabled

A reboot is required to complete install.

Enabling FIPS should be performed during a system maintenance window since this operation makes changes to underlying SSL-related libraries and requires a reboot into the FIPS-certified kernel.

How to check if you’re operating in FIPS mode

After enabling FIPS mode, it is good to verify that it is activated. Luckily it’s very straightforward to verify that FIPS mode is enabled. Just run this command in the terminal:

cat /proc/sys/crypto/fips_enabled

The output that indicates that FIPS mode is enabled is “1”. 

How to enable FIPS on public clouds

It is very easy to enable FIPS in public clouds. In contrast to on-prem usage, Ubuntu images for public clouds already have FIPS enabled. Decide on the Ubuntu version you’d like to run, visit the relevant marketplace for your public cloud provider (for example: AWS, Azure, or GCP), and search for the relevant image.
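On AWS, for example, you could locate a FIPS-enabled Ubuntu Pro image from the CLI. The name filter below is an assumption about the listing’s naming convention; the marketplace page is authoritative:

# Name pattern is illustrative; confirm against the marketplace listing.
aws ec2 describe-images --owners aws-marketplace \
  --filters "Name=name,Values=*ubuntu-pro-fips*22.04*" \
  --query 'Images[].{Name:Name,Id:ImageId}' --output table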

Which modules and hardware have been FIPS 140-3 certified 

Sometimes it can be tricky to figure out exactly which modules and hardware have been FIPS 140-3 certified. This video goes into extensive detail outlining the modules and components you’ll be able to make full use of with FIPS 140-3 certified Ubuntu. 

To give a brief overview, the following certified cryptographic modules are available with Ubuntu 22.04 LTS:

  • OpenSSL v3.0.5
  • Libgcrypt v1.9.4
  • GnuTLS v3.7.3
  • Linux kernel v5.15.0
  • StrongSwan v5.9.5

These modules have been developed and tested on a range of hardware platforms:

  • Intel/AMD x86_64
  • ARM64
  • IBM z15

FIPS-enabled containers available in Iron Bank 

Canonical’s container images are trusted and pre-approved for high-security use cases. Hardened Ubuntu images are already certified and available in the U.S. Department of Defense’s Iron Bank, the official repository of security-hardened containers for government systems. You can find the code to build your own image here, or get the actual container that passed all the automated compliance checks here. Note: you will need to register first to get access to the platform.

Canonical has also recently added FIPS and STIG-compliance to Canonical Kubernetes. Built on Ubuntu Pro hosts, Canonical Kubernetes now includes FIPS 140-3 validated crypto modules out of the box and can be hardened for DISA-STIG. This means you can deploy secure, compliant clusters built on Ubuntu, making it much easier to meet FedRAMP and other federal compliance requirements right from your Kubernetes base.

Common issues when enabling FIPS 140-3 

Compliance always comes with challenges, but when we know the issues, we can help. The video above explains how to solve the most common issues that teams run into when enabling FIPS 140-3, including: 

  • WiFi SSID should be 16 characters
  • 32-bit crypto library versions must be removed, if present
  • Full-disk encryption requires PBKDF2
    • sudo cryptsetup --pbkdf=pbkdf2 luksAddKey <partition>
  • Some applications might not expect disallowed operations to fail – we will endeavor to provide fixes where possible

If you’d like to raise a bug/issue with FIPS compliance on Ubuntu, you can do it on Launchpad. Here is an example of OpenSSL bugs.

Summary 

We hope this blog has been useful for you to learn more about FIPS 140-3 on Ubuntu. You can easily get FIPS 140-3 compliance with an Ubuntu Pro subscription, which is free for personal use and offers a free trial for enterprise-focused projects. Additionally, an Ubuntu Pro subscription is not limited to only FIPS 140-3: the subscription also includes access to our hardening automation tools such as Ubuntu Security Guide, expanded security maintenance, Ubuntu fleet management, and more. And if you’re looking for assistance with more complex enterprise use cases, you can simply contact us.

17 November, 2025 06:23PM

Finnix

Finnix 251 released

Finnix 251 boot screen

Finnix is a Linux-based utility live distribution. Write it to a USB flash drive or burn it to a CD, boot it, and you’re seconds from a root prompt with hundreds of utilities available for recovery, maintenance, testing and more. Finnix 251 has been released today, including new official OCI / Docker images, and containing new packages, features and fixes.


Finnix 251 is the first release to distribute official OCI container images. The official Finnix container contains all the same software as the ISO release, and may be launched from Podman, Docker, Kubernetes, etc.

docker run -it --rm finnix/finnix

podman run -it --rm docker.io/finnix/finnix:latest

kubectl run finnix-$(uuidgen | cut -b -4 | tr A-Z a-z) --image=finnix/finnix --restart=Never -it --rm

This is particularly useful for Kubernetes users, giving you a quick utility shell in the namespace of your choice. The finnix/finnix:latest container currently includes architecture support for amd64, arm64 and riscv64.

Otherwise, Finnix 251 is a regular semiannual utility release:

  • Linux kernel 6.16 (Debian 6.16.12-2)
  • Added packages: dc3dd
  • Upstream Debian package updates
  • Many minor fixes and improvements

Please visit finnix.org to download Finnix 251 today!


17 November, 2025 06:00PM

VyOS

We're introducing single sign-on (SSO) for VyOS services

Starting November 18, 2025, you can sign in to VyOS Support Portal, Community Forum, and Issue Tracker using a new, unified authentication method: Single Sign-On (SSO).

This change brings several benefits, and there are a few key things you’ll need to know — read on to ensure you’re prepared.

17 November, 2025 04:48PM by Taras Pudiak (taras@vyos.io)

GreenboneOS

CVE-2025-64446: A Lurking FortiWeb Vulnerability Proves Critical amid Active Exploitation

Discussion of a new security issue affecting Fortinet’s FortiWeb began circulating online in early October 2025, when cyber deception firm Defused reported capturing a working exploit via honeypot. FortiWeb is Fortinet’s web application firewall (WAF) platform, designed to shield web applications from malicious activity. For over one month, Defused’s revelation mostly lurked in the shadows; […]

17 November, 2025 12:28PM by Joseph Lee

VyOS

VyOS Stream 2025.11 is available for download

Hello, Community!

VyOS Stream 2025.11 and its corresponding source tarball are now available for download. You can find them at the end of this post. This is the third VyOS Stream release on the way to the upcoming 1.5/Circinus LTS release and includes many of its features for you to test — most notably, a VPP-based accelerated dataplane.

17 November, 2025 11:24AM by Daniil Baturin (daniil@sentrium.io)

November 16, 2025

hackergotchi for SparkyLinux

SparkyLinux

Hyprland

There is a new desktop available for Sparkers: Hyprland. What is Hyprland? Features:

  • All of the eyecandy: gradient borders, blur, animations, shadows and much more
  • A lot of customization
  • 100% independent, no wlroots, no libweston, no kwin, no mutter
  • Custom bezier curves for the best animations
  • Powerful plugin support
  • Built-in plugin manager
  • Tearing support for better…

Source

16 November, 2025 01:30PM by pavroo

hackergotchi for Qubes

Qubes

The Qubes OS Project will be at FOSDEM 2026!

The Qubes OS Project will have a stand at FOSDEM 2026, which will take place in Brussels, Belgium on January 31 and February 1, 2026. FOSDEM is a top annual meeting for free and open source software developers. Entry is free, and no registration is required. If you attend, stop by and say hello!

16 November, 2025 12:00AM

November 14, 2025

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: October 2025

In September, we prioritized fixes for the out-of-box experience on PureOS Crimson. For the rest of the beta milestone, we are prioritizing work that benefits the most from user feedback. That includes functionality that must account for device-to-device variations and that is strongly influenced by users' individual preferences.

The post PureOS Crimson Development Report: October 2025 appeared first on Purism.

14 November, 2025 06:58PM by Purism

hackergotchi for Grml developers

Grml developers

Michael Prokop: HTU Bigband @ 30 Years of Radio Helsinki on 22.11.2025

HTU Bigband @ 30 Years of Radio Helsinki

Radio Helsinki is turning 30 and is celebrating with a packed music program on Saturday, 22.11.2025, at Forum Stadtpark. 🥳 We will be there with the HTU Bigband, playing from around 19:30 for about 1.5 hours. Drop by and celebrate with us!

PS: If you really can't make it, at least switch on the radio or tune in to the livestream! 🤓

Photo source / copyright: graz.social/@radiohelsinki/115428633169757433

14 November, 2025 04:08PM

hackergotchi for Purism PureOS

Purism PureOS

EvilAI Malware Exploits AI, Targets Organizations Worldwide

A sophisticated malware campaign, dubbed “EvilAI” by cybersecurity firm Trend Micro, is leveraging AI-enhanced software to infiltrate organizations across the globe. By posing as legitimate productivity tools, this malware evades traditional security measures and exploits user trust.

The post EvilAI Malware Exploits AI, Targets Organizations Worldwide appeared first on Purism.

14 November, 2025 01:16AM by Purism

November 13, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical Kubernetes officially included in Sylva 1.5

Sylva 1.5 becomes the first release to include Kubernetes 1.32, bringing the latest open source cloud-native capabilities to the European telecommunications industry 

With the launch of Sylva 1.5, Canonical Kubernetes is now officially part of the project’s reference architecture. This follows its earlier availability as a technology preview in Sylva 1.4.

What is the Sylva project?

The Sylva project is backed by Europe’s largest telecom operators and vendors, including Nokia and Ericsson, and is designed to deliver an open, telco-friendly cloud-native framework. By focusing on interoperability, performance, and automation, Sylva addresses the unique requirements of telecommunications providers building Kubernetes telco platforms for their IT, 5G core, O-RAN, and edge services. Canonical is thrilled to be included as part of the Sylva project, supporting the important work of creating an open source cloud-native reference architecture capable of hosting the mission-critical workloads of the telco industry.

Canonical’s contribution to the Kubernetes telco ecosystem

Canonical Kubernetes brings unique advantages to Sylva’s mission of reducing fragmentation and simplifying operations across telecom networks. One of its defining features is up to 12 years of long-term support (LTS). For operators running critical workloads, this ensures stability, ongoing security updates, and compliance with industry standards over a much longer lifecycle than other Kubernetes distributions.

Canonical Kubernetes also provides the flexibility needed for large-scale Kubernetes telco deployments, from core networks to the far edge. Operators benefit from a distribution designed to be both lightweight and maintained with security in mind, while remaining capable of handling advanced workloads such as 5G core, O-RAN, and AI-driven services.

Guillaume Nevicato, Sylva Technical Steering Committee co-chair and Orange Telco Cloud Product Manager, recognized the importance of this contribution:

Canonical is a major open-source player that has achieved the integration of their Canonical Kubernetes distribution into Sylva. They fully embrace Sylva’s full-stack automation, including cluster lifecycle management, storage, networking, observability, GreenDashboard, and security enhancements. This represents a significant step forward in Sylva’s adoption.

Accelerating the validation of Kubernetes telco workloads

A critical part of Sylva’s role is validating network functions against its reference framework, ensuring that cloud-native network functions (CNFs) and virtualized network functions (VNFs) perform reliably across any Sylva-compliant infrastructure. Following its technical preview in the previous Sylva release, Canonical Kubernetes is now included with 1.32 LTS in Sylva 1.5. This allows it to enter the validation process with the Sylva Validation Workgroup, covering key telecom workloads such as 5G Core, O-RAN, and distributed edge services.

For operators, this means they can deploy Kubernetes telco workloads with confidence, knowing that interoperability and performance have already been tested. Vendors also benefit, since a single certification process ensures compatibility across multiple Sylva-aligned platforms, reducing time to market for new services.

Looking ahead: future opportunities for integration

Canonical is now exploring how its broader infrastructure portfolio, including technologies like Canonical OpenStack, a featureful, highly customizable cloud, and MAAS, bare-metal server automation software, could complement Sylva’s approach in the future. These solutions could help create a more unified environment for both virtualized and cloud-native network functions, enhancing the flexibility of Kubernetes telco deployments.

As Sylva evolves, Canonical will continue engaging with operators, vendors, and the wider community to identify opportunities where its open source software can add value.

Building the future of Kubernetes in telco

The inclusion of Canonical Kubernetes in Sylva represents a milestone in the adoption of open source telco cloud infrastructure. Operators now have access to a validated, commercially supported Kubernetes telco distribution that combines long-term stability, security, and interoperability with the innovation of cloud-native technologies.

With Kubernetes at the foundation, operators can accelerate the rollout of next-generation network functions and services, while benefiting from the reliability and flexibility that only open source collaboration can deliver.

Next steps

Are you building your telco cloud strategy? Learn how Canonical Kubernetes can give you a stable, validated, and open foundation for 5G, O-RAN, and edge workloads.

13 November, 2025 05:37PM

hackergotchi for Purism PureOS

Purism PureOS

Librem PQC Encryptor: Future‑Proofing Against Both SS7 and Quantum

Signaling System 7 (SS7) was designed in the 1970s to let telecom carriers route calls, deliver SMS, and enable roaming. It was never built with authentication or encryption in mind. Security wasn’t part of the plan.  The assumption was simple:  All carriers are trusted.

The post Librem PQC Encryptor: Future‑Proofing Against Both SS7 and Quantum appeared first on Purism.

13 November, 2025 11:15AM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical expands total coverage for Ubuntu LTS releases to 15 years with Legacy add-on

Expansion ensures business continuity without forcing major upgrades

Today, Canonical announced the expansion of the Legacy add-on for Ubuntu Pro, extending total coverage for Ubuntu LTS releases to 15 years. Starting with Ubuntu 14.04 LTS (Trusty Tahr), this extension brings the full benefits of Ubuntu Pro – including continuous security patching, compliance tooling and support for your OS – to long-lived production systems.

In highly regulated or hardware-dependent industries, upgrades threaten to disrupt tightly controlled security and compliance. For many organizations, maintaining production systems for more than a decade is complex, but remains a more sensible option than a full upgrade.

That’s why, in 2024, we first introduced the Legacy add-on for Ubuntu Pro, starting with Ubuntu 14.04 LTS (Trusty Tahr). The Legacy add-on increased the total maintenance window for Ubuntu LTS releases to 12 years: five years of standard security maintenance, five years of Expanded Security Maintenance (ESM), and two years of additional coverage with the Legacy add-on – with optional support throughout. Due to the positive reception and growing interest in longer lifecycle coverage, we’re excited to now extend the Legacy add-on to 5 years of additional coverage, bringing a 15-year security maintenance and support window to Ubuntu LTS releases.

A 15-year lifecycle for stability

Throughout this 15-year window, Ubuntu Pro provides continuous security maintenance across the entire Ubuntu base, kernel, and key open source components. Canonical’s security team actively scans, triages, and backports critical, high, and select medium CVEs to all maintained LTS releases, ensuring security without forcing disruptive major upgrades that break compatibility or require re-certification.

Break/fix support remains an optional add-on. When production issues arise, you can get access to our Support team through this service and troubleshoot with experts who contribute to Ubuntu every day, who’ve seen similar problems before and know how to resolve them quickly.

The scope of the Legacy add-on itself is unchanged, but the commitment is longer, giving users additional years to manage transition timelines and maintain compliance. 

This updated coverage applies from Ubuntu 14.04 LTS onward. With this extension, Ubuntu 14.04 LTS is now supported until April 2029, a full 15 years after its debut.

By committing to a 15-year lifecycle, Canonical gives users:

  • Realistic timelines for planning and executing major migrations
  • Continuous security and compliance coverage for long-lived systems
  • Flexibility to modernize infrastructure strategically rather than reactively

Infrastructure is complicated, and upgrades carry real costs and risks. This expansion acknowledges those realities and gives you the support duration your deployments actually require.

A simple path to extended coverage

Current Ubuntu Pro subscriptions will continue uninterrupted. No re-enrollment, no reinstallation, no surprise migration projects.

The Legacy add-on is available after the first 10 years of coverage (standard security maintenance plus ESM, and optional break/bug fix support), priced at a 50% premium over standard Ubuntu Pro. This applies whether you’re approaching that milestone with 16.04 LTS or already using the Legacy add-on with 14.04 LTS.

To activate coverage beyond ESM, contact Canonical Sales or reach out to your account manager.  

For more information about the Legacy add-on, visit our Ubuntu Pro page.  

Learn more:

13 November, 2025 09:11AM

hackergotchi for Tails

Tails

Tails 7.2

Changes and updates

  • Update Tor Browser to 15.0.1.

    Tor Browser 15.0 is based on Firefox 140 and inherits from it several new features that are particularly useful if you use many tabs.

  • Update Thunderbird to 140.4.0.

  • Update the Linux kernel to 6.12.57.

  • Remove Root Console.

    To open a root console, you can execute the following command in a Console:

    sudo -i

  • Show “Don't ask again” notifications only after the clock has been synchronized.

Fixed problems

  • Disable connections that Thunderbird was making to telemetry services run by Mozilla. (#21275)

For more details, read our changelog.

Get Tails 7.2

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.2.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.2 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.2 directly:

13 November, 2025 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E366 Morangos Com Linux

Miguel is absolutely furious with Ubuntu Touch and kicked up a fuss on Telegram, but Diogo has good news: besides having an all-out “gamer” monitor for playing SuperTuxKart, he and Ruben Carneiro are going to Porto to bravely fight evil phone oligopolies! We reviewed excellent new features in Firefox 145 that help fight privacy invasions; we discussed Jon Seager's interview about Canonical's latest Flatpak row and what to expect from Ubuntu Core Desktop; we argued fiercely about mobile app stores; and to finish, we planned to kidnap people, invent Netflix series featuring Linux, and explain why PipeWire is not a pipe connected to a wire.

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality interstitials were played live and without a safety net by Miguel, for which we apologize for any inconvenience caused. The sound effects have the following credits: [crowd booing by HowardV] (https://freesound.org/s/264378/) – License: Creative Commons 0; [Police Car Siren in Traffic by hyderpotter] (https://freesound.org/s/268809/) – License: Creative Commons 0; [patrons laughing.mp3 by pbrproductions] (https://freesound.org/s/418831/) – License: Attribution 3.0. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura – artist, illustrator and comic author. You can get to know Shizamura better on Ciberlândia and on her website.

13 November, 2025 12:00AM

November 12, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky 8.1

The first update for Sparky 8 is available: 8.1. This is a quarterly update of the Sparky 8 “Seven Sisters” stable release. Sparky 8 is based on and fully compatible with Debian 13 “Trixie”. Main changes: – All packages updated from the stable Debian and Sparky repositories as of November 10, 2025. – Linux kernel PC: 6.12.48-LTS (6.17.7, 6.12.56 LTS, 6.6.115-LTS in sparky repositories) …

Source

12 November, 2025 09:45AM by pavroo

hackergotchi for ZEVENET

ZEVENET

How Kernel-Level Load Balancing Improves Performance and Latency

When evaluating load balancers, teams often look at features, benchmarks, or latency claims. But the factor that usually determines how far a load balancer can scale is much simpler: where the traffic is processed inside the operating system.

In Linux, packets originate and are handled in the kernel, where the TCP/IP stack runs. User space — where most reverse proxies and L7 load balancers operate — is a separate execution context. When a load balancer is implemented in user space, every packet must travel back and forth between these two layers — which is fundamentally different from kernel-level load balancing, where forwarding happens inside the kernel.

This boundary crossing is subtle, but it has a real cost.

The Cost of Crossing Kernel and User Space

A typical user-space load balancer (like HAProxy in TCP mode, NGINX, Envoy, or Traefik) receives a packet in the kernel, copies it to user space for processing, then returns it to the kernel to send it out. This happens for every packet in the flow.

Each transition triggers:

  • a memory copy
  • a context switch
  • a scheduler hand-off

Individually, these operations are insignificant. Under moderate or heavy load, they accumulate into two visible symptoms:

  • Latency grows as concurrency increases
  • CPU usage rises faster than throughput

And this is why user-space load balancers often reach a scaling ceiling long before hardware limits are reached. The system is not slow — it is simply doing more work than necessary to move each packet.

Figure 1. In user-space load balancers, forwarding requires repeated transitions between kernel and user space, increasing latency and CPU overhead.

What Changes When Forwarding Stays in the Kernel

Linux already provides a capable packet-processing engine in the kernel: netfilter for filtering and NAT, and conntrack for connection tracking. If forwarding decisions are made inside the kernel, packets do not need to move up into user space at all — they stay where they originate.

This is the core idea behind kernel-level load balancing. The forwarding path becomes:

Packet arrives → Kernel processes it → Packet leaves
  • No additional memory copies.
  • No process wake-ups.
  • No proxy loop.

This drastically reduces overhead and keeps CPU usage predictable as load increases.
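
To make the idea concrete, here is a minimal hand-written nftables sketch of kernel-level L4 balancing. It is illustrative only, not SKUDONET's actual generated ruleset, and the VIP and backend addresses are invented for the example:

# Illustrative only: a minimal nftables L4 balancer, loaded with nft -f.
# SKUDONET generates and manages its own rules; all addresses are examples.
table ip lb {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        # Spread new connections to the VIP across two backends;
        # conntrack then keeps every packet of a flow on the same backend.
        ip daddr 203.0.113.10 tcp dport 80 dnat to numgen inc mod 2 map { 0 : 10.0.0.11, 1 : 10.0.0.12 }
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        # SNAT mode: rewrite the source so replies return through the balancer.
        ip daddr { 10.0.0.11, 10.0.0.12 } masquerade
    }
}

Once rules like these are installed, every packet of an established flow is rewritten entirely inside the kernel by netfilter and conntrack; no user-space process is ever woken.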

Figure 2. When forwarding occurs in the kernel data plane, packets avoid user-space transitions entirely.

Performance in Real Conditions

This approach is not theoretical. On standard mid-range hardware running SKUDONET L4xNAT, with no DPDK or kernel bypass optimizations:

  • 475,983 requests per second
  • ~1.63 ms average latency
  • ~27% CPU usage

This demonstrates that the improvement does not come from specialized hardware or experimental networking stacks — it comes from where the work is done.

Choosing the Right Forwarding Mode

Kernel-level forwarding supports multiple operational models:

Mode   Client IP Visible   Return Path               Best Use Case
SNAT   No                  Through LB                NAT environments, general balancing
DNAT   Yes                 Direct backend → client   Internal routed services, logs requiring client IP
DSR    Yes                 Direct backend → client   High-volume reads, UDP, media/CDN distribution

DSR offers the lowest latency, while SNAT provides the most operational control.
The right choice depends on network topology, not performance capabilities.
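
For the DSR row in particular, the Linux kernel has long shipped a reference implementation in IPVS. A hypothetical ipvsadm setup (shown for illustration; this is IPVS, not SKUDONET's netfilter-based configuration, and the addresses are invented) looks like this:

# Hypothetical IPVS example of DSR ("gatewaying" mode); addresses are examples.
ipvsadm -A -t 203.0.113.10:80 -s rr             # virtual service, round-robin
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11 -g   # -g: direct routing (DSR)
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12 -g
# Each backend must also hold the VIP on a non-ARPing interface (e.g. lo)
# so it accepts the unmodified packets and replies directly to the client.

Because the packets are never rewritten, the backend answers the client directly, which is why the client IP stays visible and the return path bypasses the load balancer.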

Keeping Control Without Touching the Data Path

Working directly with netfilter can be complex. Its chains, rule priorities, and packet classification logic require a detailed understanding of kernel-level networking.

SKUDONET addresses this by providing a control plane that defines services, backends, and policies at a higher level, while automatically generating and maintaining the underlying kernel configuration. In this model, the forwarding logic never leaves the kernel, but operators still retain full visibility and control over how traffic is handled.

This is the separation of concerns that makes the architecture both efficient and maintainable:

  • Kernel: data path
  • User space: control and orchestration

Conclusion

Whether a load balancer processes traffic in user space or in the kernel fundamentally affects:

  • Latency stability
  • CPU efficiency
  • Scalability under concurrency
  • Predictability under load

User-space load balancing offers flexibility and extensibility, but that flexibility comes at the cost of additional data movement and processing overhead. Kernel-level forwarding avoids this by keeping packet handling in the layer where the traffic already resides, eliminating unnecessary copies and context switches while preserving visibility and control.

If you want to explore the architecture in depth — including packet flow breakdowns, performance measurements, and forwarding mode selection — you can read the full technical paper below.

Download the full technical whitepaper here:

12 November, 2025 09:30AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

Exciting News! deepin Global Community Expands: Indonesia Site Officially Launched

As the deepin community continues to advance steadily in its internationalization journey, we are delighted to announce the official launch of its 11th overseas branch – the Indonesia community site! This marks another solid and significant step forward in deepin's global expansion. Zaky, the lead of the deepin Indonesia community, has sent his greetings: Hello everyone, I'm Zaky, deepin Global Ambassador from Indonesia. I'm very proud to announce the official launch of the deepin Indonesia site. This is a special moment for me personally, and I'm very excited to contribute to strengthening the open source and Linux community in our homeland, as well as to introduce ...Read more

12 November, 2025 09:07AM by xiaofei

November 11, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical releases FIPS-enabled Kubernetes

Deploy a FedRAMP-ready Kubernetes cluster and application suite, with FIPS 140-3 crypto and DISA-STIG hardening.

Today at KubeCon North America, Canonical, the publisher of Ubuntu, released support for enabling FIPS mode in its Kubernetes distribution, providing everything needed to create and manage a scalable cluster suitable for high-security or Federal deployments. As of version 1.34, Canonical Kubernetes is available with a built-in FIPS 140-3 capability that uses certified cryptographic modules. When deployed as a snap package, a deployment with this FIPS capability can be easily hardened to DISA-STIG standards using comprehensive documentation.

KubeCon attendees in Atlanta can learn more about FIPS-enabled Canonical Kubernetes at booth 821. 

What is Canonical Kubernetes?

Canonical Kubernetes is a performant, lightweight, and securely designed CNCF-conformant distribution of Kubernetes. It provides everything needed for a fully functioning cluster, including a container runtime, a CNI, DNS services, an ingress gateway, metrics server, and more. New versions of Canonical Kubernetes ship within a week of the upstream release, and Long Term Support (LTS) versions (which are released every 2 years) are fully supported and security maintained by Canonical for up to 12 years. Long Term Support for Ubuntu and FIPS-enabled Canonical Kubernetes is offered through an Ubuntu Pro subscription. Canonical’s FIPS 140-3 compliant Kubernetes is also available as part of the NVIDIA AI Factory for Government reference design.

Gain stability with the option to upgrade for new features

Canonical is the first software provider to offer 12 years of support for Kubernetes, which is far beyond the support window offered by upstream CNCF and other vendors. Upstream Kubernetes is typically maintained and supported for about 14 months by the Kubernetes community, with 3 releases per year. In comparison, Canonical maintains an LTS release every 2 years, in line with the Ubuntu LTS release cadence.

Traditionally, Kubernetes clusters must be upgraded one version at a time. However, Canonical’s “interim” versions will be supported for 1 year past the next LTS release, allowing customers to upgrade within 1 year of the next LTS release, without downtime, all while knowing their cluster is fully covered by security maintenance.

Get reliable security maintenance

Each component of the Kubernetes stack is backed by Canonical’s CVE patching service. Our dedicated security team triages all relevant vulnerabilities and backports upstream fixes to the currently supported software versions, ensuring a completely stable base without breaking existing deployments. 

Comply with FedRAMP requirements

Canonical has been publishing FIPS-certified cryptographic modules for Ubuntu since 2016. These modules are vital for customers across the Federal sector and for on-premises and public clouds, powering a wide range of FedRAMP deployments. With the availability of Canonical Kubernetes and its built-in FIPS 140-3 mode using certified cryptographic modules, customers will have a faster and more direct route to meet their FedRAMP requirements.

FIPS 140-3 functionality requires Kubernetes to be deployed on top of a FIPS-enabled Ubuntu LTS host operating system. Canonical Kubernetes supports Kubernetes DISA-STIG hardening, and allows you to deploy onto a host OS hardened to DISA-STIG guidelines using the Ubuntu Security Guide (USG) tool. What’s more, applicable STIG controls can be applied to enable hardened containers, along with embedded FIPS cryptographic libraries. Ubuntu STIG hardening has been extensively tested and deployed across the Federal landscape, making it a proven route to meeting FedRAMP security standards.

FIPS modules and STIG hardening are available with an Ubuntu Pro subscription. Ubuntu Pro subscriptions apply on a per-machine basis, which means that any containerized application running on a Pro-enabled host machine is also included within Pro when the Pro token is enabled. 
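
As a rough sketch of the workflow (the snap channel and bootstrap step are assumptions based on current Canonical Kubernetes packaging; consult the official documentation for the exact FIPS configuration):

# Hypothetical outline; verify channel names and FIPS options in the docs.
sudo pro attach <your-pro-token>    # attach the machine to Ubuntu Pro
sudo pro enable fips-updates        # switch the host to FIPS-certified crypto
sudo reboot                         # boot into the FIPS-enabled kernel
sudo snap install k8s --classic --channel=1.34-classic/stable
sudo k8s bootstrap                  # initialize the cluster on this node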

Visit us at our booth 821 at KubeCon North America on November 11-13, 2025 for an in-person conversation about how Canonical Kubernetes powers FedRAMP compliant deployments.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support, and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Learn more at https://canonical.com/ 

Further Reading

11 November, 2025 02:45PM

November 10, 2025

Ubuntu Blog: Canonical announces optimized Ubuntu images for Google Cloud’s Axion N4A Virtual Machines

This new release brings the stability and security of Ubuntu to Axion-based N4A virtual machines on Google Compute Engine.

November 6, 2025 – Today Canonical, the publishers of Ubuntu, and Google Cloud announced the immediate availability of optimized Ubuntu images for the new Axion-based N4A virtual machines (VMs) on Google Compute Engine. This collaboration brings the stability, security, and expansive ecosystem of Ubuntu, the world’s most popular cloud operating system, to Google Cloud’s most cost-effective N-series offering, enabling enterprises to maximize the total cost of ownership (TCO) for a wide range of general-purpose workloads.

The new N4A VMs are powered by Google’s custom-designed Axion ARM-based CPUs and offer up to 105% better price performance and 80% better performance-per-watt than comparable, current-generation x86-based VMs. By integrating optimized Ubuntu images at launch, Canonical helps ensure developers and operators can immediately take advantage of this breakthrough efficiency for demanding workloads.

Canonical has long supported ARM infrastructure, helping to ensure that Ubuntu provides a consistent, reliable, and secure experience across heterogeneous computing environments. Our deep experience in solving the challenges of mixed x86 and ARM deployments allows us to bring a robust and fully optimized operating system to the N4A series from day one.

The availability of optimized Ubuntu on N4A ensures developers can use the familiar packages and libraries of the latest Long-Term Support (LTS) releases, guaranteeing longevity and simplifying migration. This is crucial for businesses looking to adopt N4A’s cost savings without compromising on operational consistency across Google Cloud’s Compute Engine, Google Kubernetes Engine (GKE), and other services.

Seamless integration and reliability from Day One

These optimized Ubuntu images are backed by rigorous testing to help ensure enterprise-grade stability and compatibility with Google Cloud’s core features.

Canonical and Google Cloud have executed thorough validation across the entire image lifecycle, confirming that Ubuntu on N4A performs exceptionally well with Google Cloud services and VMs. This extensive testing includes validation of:

  • Secure Boot integrity: Full compatibility and successful execution of google-secure-boot, ensuring the highest levels of system integrity from the moment of launch.
  • Initialization and configuration: Robust confirmation of the cloud-init configuration process, including network, user, and password authentication (cloud-init-password-auth-test), guaranteeing reliable deployment and user setup.
  • Lifecycle management: Successful execution of startup and shutdown scripts (google-startup, google-shutdown-script, and their URL-based variants), critical for automated maintenance and application orchestration.
  • Compute Engine feature compatibility: Validation of core Google Cloud functionality, including accurate disk resizing (google-disk-size), and general system integration (google-general), helping to ensure that Ubuntu images behave predictably within the Compute Engine environment.

This comprehensive testing suite allows customers to deploy Ubuntu on N4A with total confidence.

Getting started

To get started, simply select the N4A machine type and choose your preferred Ubuntu image when creating a VM in Google Cloud Compute Engine, or when configuring node settings in GKE.

The optimized images are available now in the public preview regions for N4A (us-central1, us-east4, europe-west3, and europe-west4).
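
As an example, creating an N4A instance with Ubuntu 24.04 LTS from the command line might look like the following sketch (the machine type and image family names are assumptions; check gcloud for the exact values available in your region):

# Hypothetical example; machine type and image family names are assumptions.
gcloud compute instances create my-n4a-vm \
    --zone=us-central1-a \
    --machine-type=n4a-standard-4 \
    --image-family=ubuntu-2404-lts-arm64 \
    --image-project=ubuntu-os-cloud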

About Canonical 

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/ 

Read more

10 November, 2025 10:08PM

hackergotchi for Volumio

Volumio

Volumio 4 is Here: A Year in the Making, Built for What’s Next

Today marks something special for us and for everyone who loves what Volumio does. After nearly a year of work, we’re releasing Volumio 4 for Raspberry Pi and PC platforms.

You might fire it up and think, “Wait, it looks the same.” And you’d be right. We didn’t redesign the interface or move buttons around. What we did was rebuild the foundation.

Why This Matters

Think of Volumio 4 like replacing the engine in your favorite car. From the driver’s seat, everything feels familiar. But under the hood, we’ve swapped in something more powerful, more efficient, and ready to take you places the old engine simply couldn’t go.

We’ve moved to Debian Bookworm, which is the technical way of saying we’ve given Volumio a completely modern foundation. This isn’t just about what you’ll see today. It’s about what we can build tomorrow, and the year after that, and beyond.

volumio-4-raspberry-pi

A Huge Thanks to Our Community

This kind of work doesn’t happen alone. Andy, Marco, Pascal, Ash, Gé, Josh, and our entire inner circle of moderators and developers spent countless hours testing, troubleshooting, and pushing this forward. When you’re rebuilding something from the ground up while keeping it running, you need people who care as much about getting it right as we do. We couldn’t have done this without them.

The App Changes Everything

Here’s where things get exciting. Volumio 4 works hand in hand with our new Volumio app, which just landed on the app stores. This is the first step in our vision: one seamless ecosystem for music playback, everywhere, for every kind of digital music.

The new app doesn’t just look better. It’s fundamentally more resilient. We’ve built a new connection method that stays rock solid even when your network gets cranky. The onboarding is now straightforward instead of confusing. You’ll spend less time troubleshooting and more time listening.

volumio-4-app


What You’ll Actually Notice

Your CDs will play quietly now. Most USB drives used to make noise during playback. That’s fixed. Silent CD playback with the vast majority of drives out there.

Bluetooth that actually works well. We completely rewrote the Bluetooth stack. Lower latency, better compatibility, and we didn’t compromise on sound quality to get there. There’s also a new plugin that lets you send audio from Volumio to Bluetooth speakers, and your Bluetooth remotes will now work with Volumio.

Everything feels snappier. Browsing your library is smoother. Things respond faster. It’s one of those improvements that’s hard to quantify but impossible to miss once you experience it.

NVME storage support. So many of you asked for this. Read performance with NVME devices is dramatically better now.

Better handling of big libraries. We’re running the latest version of MPD, which means if you have thousands of albums, Volumio handles them more gracefully and reliably.

More DACs just work. We’ve expanded USB quirks support, so if your DAC supports direct DSD, Volumio will recognize it.

Security updates built in. We’re on the latest kernel, which means better security as the world around us keeps changing.

Touchscreen displays. If you want to connect an HDMI touchscreen panel, Volumio 4 has you covered with improved display management.

volumio-4-device

We Learned Our Lesson About Plugins

Let’s talk about something important. The Volumio 3 launch was a success in many ways, but we heard you loud and clear about one thing: plugins weren’t ready at launch like they were for Volumio 2. That hurt the experience for many of you, and we get it. Plugins aren’t just nice to have, they’re part of what makes Volumio yours.

We listened. We learned. And we made sure not to repeat that mistake.

All the plugins you love are available right now for Volumio 4. From day one. We didn’t want anyone sitting around waiting for their favorite functionality to come back.

But we didn’t stop there. We’ve also added new capabilities to Volumio 4 that open up possibilities for plugin developers. This means the plugins you already use can get better, and new plugins can do things that weren’t possible before.

How to Get It

Volumio 4 is available today for Raspberry Pi and PC platforms. If you have one of our Volumio products or something from our OEM partners, you can expect to see the update roll out in Q1 2026.

Two Things You Should Know

First, you can’t update over the air from Volumio 3 to Volumio 4. You’ll need to reflash. It’s not ideal, but with a change this fundamental, it was the only way to ensure everything works correctly.

Second, we had to make a difficult call. Raspberry Pi 1 and Raspberry Pi Zero are no longer supported. The necessary binaries simply aren’t available for these older boards. We know this affects some of you, and we didn’t make this decision lightly.

What Comes Next

Today is about giving Volumio a foundation that can support everything we want to build. The interface looks the same because we wanted this transition to feel seamless for you. But now, with Volumio 4 and the new app working together, we can start building the features and improvements that weren’t possible before.

We’ve spent a year on the foundation. Now comes the fun part.

Welcome to Volumio 4.

Download Volumio 4 and get the new app from your device’s app store. As always, if you run into any issues, our community is here to help.

The post Volumio 4 is Here: A Year in the Making, Built for What’s Next appeared first on Volumio.

10 November, 2025 01:38PM by Volumio Team

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Generating color palettes for design systems … inspired by APCA!

This is the first of two blog posts about how we created the color palette for a new design system at Canonical. In this post I share my journey into perceptually uniform color spaces and perceptual contrast algorithms. 

If you’re already familiar with these concepts, skip to this section (or visit the GitHub repository) to see how I reverse-engineered the Accessible Perceptual Contrast Algorithm (APCA) to generate perceptually contrasting color palettes. In the next post, I will share why we didn’t choose this solution and what we chose instead.

How humans perceive color

I was nerd sniped by a colleague with this article by Matthew Ström, “How to pick the least wrong colors”, about using perceptually uniform color spaces to pick data visualization colors. Until that point, I hadn’t known about perceptually uniform color spaces like Oklab and Oklch.

“Normal” color spaces, such as RGB, are structured so that machines can easily process colors, which gives RGB some very inhuman characteristics. If you imagine a color space as a geometric shape, RGB would be a cube. The naive assumption would be that colors we perceive as similar are close to each other in this cube, right? However, this is not the case. Surprisingly, human color perception does not correspond to a perfect cube. Who would have thought?

Perceptually uniform color spaces are built to support human perception, not computers. While RGB is consistent in how a color displays on a monitor, perceptually uniform color spaces (PUCs) are consistent with how we actually see the color. As a result, their 3D shapes are not perfect geometric shapes; the Oklch color space, for example, has a distinctly irregular solid. Shocker!

This property of perceptually uniform color spaces, which aligns more closely with actual human color perception, holds enormous potential for UI design and the wider design spectrum. For example, it’s much easier to create color palettes in which the brightness of different colors appears more uniform in the same “gradation”. This potential fascinated me, so I took a deeper dive into perceptually uniform color spaces and human perception of colors and contrasts in general.
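
As a small illustration, here is a sketch using the colorjs.io library (which also appears later in this post): two hues given the same OKHSL saturation and lightness come out with a similar perceived brightness, something the lightness channel of plain HSL cannot guarantee.

import Color from "colorjs.io"

// Two colors with identical OKHSL saturation and lightness but different hues.
// In a perceptually uniform space, equal lightness means similar perceived
// brightness; HSL's L channel makes no such promise.
const blue = new Color("okhsl", [250, 0.8, 0.6])
const green = new Color("okhsl", [145, 0.8, 0.6])

console.log(blue.to("srgb").toString({ format: "hex" }))
console.log(green.to("srgb").toString({ format: "hex" }))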

How humans perceive contrast

One of the things I learned during my research was the shortcomings of the contrast algorithm currently recommended in the WCAG guidelines, a recommendation based on the ISO-9241-3 standard. The author of APCA, myndex, does an excellent job of documenting the shortcomings of WCAG.

Essentially, WCAG produces both false positives and false negatives when it evaluates the contrast between two colors: some combinations with genuinely high contrast fail, and some with low contrast pass, so a WCAG pass isn’t necessarily accessible. APCA is a contrast algorithm that aligns more closely with human contrast perception and is therefore much better at evaluating contrast than WCAG.

At that time, I was also about to start creating a new color palette for Canonical’s design system, so I expanded my research to include how different color spaces and contrast algorithms can be used to create color palettes. In this context, I also read another article by Matthew Ström on color generation, titled “How to Generate Color Palettes for Design Systems.” This article was one of the most important sources of inspiration for my further work and this blog post; in particular, Ström’s principle of using contrast to determine color gradations made me wonder whether it could be developed further.

Generating color palettes for design systems…

In his article, Ström explores combining contrast algorithms and perceptually uniform color spaces to generate color palettes.

Contrast is one of the most important aspects of working with color in user interfaces (and other media). There must be sufficient contrast between two colors so that people can distinguish between them. Ström believes that contrast should determine the gradation between colors in a palette. Applied to Ström’s palette, this means that every pair of colors with a distance of 500 will have the WCAG-mandated contrast ratio of 4.5:1.

In a color palette where the contrast between two shades is consistent, it’s easy to choose accessible color pairs. Choose any two shades in the palette that are a certain distance apart, and you’ve got an accessible color pair. You no longer need to manually check all color combinations in your user interface. In an internal survey of designers at Canonical, we found that selecting accessible color pairs is an important concern for designers. Therefore, a color palette in which it is easy to select accessible color pairs seemed ideal for us.
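
As a quick sanity check of that property, you could compare shade pairs at a fixed distance with colorjs.io, which implements several contrast algorithms. This is just a sketch; the two hex values are taken from the gray scale generated later in this post.

import Color from "colorjs.io"

// Sketch: check the contrast of two shades a fixed distance apart.
const shade10 = new Color("#e9e9e9") // light background shade
const shade60 = new Color("#878787") // darker foreground shade

// colorjs.io can evaluate the same pair under both algorithms.
console.log(Color.contrast(shade10, shade60, "WCAG21")) // WCAG 2.1 ratio
console.log(Color.contrast(shade10, shade60, "APCA"))   // APCA Lc value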

… inspired by APCA!

Matthew Ström used the WCAG algorithm in his blog post to good effect, but as mentioned earlier, the WCAG contrast algorithm has its drawbacks. I was curious to see if it would be possible to follow the same principle (basing color palette gradation on contrast) but replace the WCAG algorithm with a perceptual contrast algorithm; in fact, even Ström mentioned in his article that it would be an interesting experiment. I found the idea of trying it with perceptual contrast exciting and began to investigate its feasibility.

So began my journey to create a color palette inspired by APCA contrast algorithm principles.

The APCA formula

First, I had to create a reverse perceptual contrast algorithm. APCA takes two colors and outputs a number between -108 and 106 (where 0 is low contrast and the extreme values are high contrast) to indicate how contrasting the color pair is. Reversing the algorithm means restructuring it so that we can specify a color and a desired contrast ratio to the algorithm, and it returns a color that meets those criteria. Due to its complexity, reversing a perceptual contrast algorithm was much harder than reversing the WCAG algorithm.
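
For orientation, the forward direction with the apca-w3 package looks roughly like this (a sketch based on the package's documented API):

import { APCAcontrast, sRGBtoY } from "apca-w3"

// Forward APCA: convert text and background colors to luminance (Y),
// then compute the Lc contrast value in the roughly -108..106 range.
const textY = sRGBtoY([0x57, 0x57, 0x57])       // dark gray text
const backgroundY = sRGBtoY([0xff, 0xff, 0xff]) // white background

console.log(APCAcontrast(textY, backgroundY))   // positive Lc: dark on light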

I knew that the apca-w3 package already had a “reverse APCA” function. Originally, I thought I would have to go beyond the capabilities of this function (it can only perform the reversal with grayscale colors). As a side project during a climbing trip with friends, I therefore tried to sketch out the reversal of the APCA algorithm on a napkin myself (with the help of a physicist friend, as I’m not that good at math myself).

Much of the APCA algorithm’s complexity stems from the fact that there are four possible cases, and the equation looks different in each one. The four cases arise from two choices: the polarity (is the text lighter or darker than the background?) and which of the two variables we want to solve for (text or background).

So for the inverse algorithm, we need to consider four cases:

  • Case 1: Light text on a dark background, solving for text
  • Case 2: Dark text on light background, solving for text
  • Case 3: Light text on dark background, solving for background
  • Case 4: Dark text on light background, solving for background

I will show my process for the first case. The process for the other cases is basically the same, but different substitutions and signs must be used depending on the case. Solving the APCA equation for the text luminance in case 1 gives:

$$Y_{\text{txt}} = \left(Y_{\text{bg}}^{0.65} + \frac{L_c/100 + 0.027}{1.14}\right)^{1/0.62}$$

Repeating the same process for the other cases, we get the following 4 equations for our 4 cases:

$$\begin{aligned} \text{Case 1 (solve for text, dark bg):} \quad & Y_{\text{txt}} = \left(Y_{\text{bg}}^{0.65} + \tfrac{L_c/100 + 0.027}{1.14}\right)^{1/0.62} \\ \text{Case 2 (solve for text, light bg):} \quad & Y_{\text{txt}} = \left(Y_{\text{bg}}^{0.56} - \tfrac{L_c/100 + 0.027}{1.14}\right)^{1/0.57} \\ \text{Case 3 (solve for bg, dark bg):} \quad & Y_{\text{bg}} = \left(Y_{\text{txt}}^{0.62} - \tfrac{L_c/100 + 0.027}{1.14}\right)^{1/0.65} \\ \text{Case 4 (solve for bg, light bg):} \quad & Y_{\text{bg}} = \left(Y_{\text{txt}}^{0.57} + \tfrac{L_c/100 + 0.027}{1.14}\right)^{1/0.56} \end{aligned}$$

Finally, in APCA, all input Y values must be clamped, and the Y value returned as the output of the inverse function must be unclamped. The two functions for clamping and unclamping Y are as follows (the full-precision constants appear in the code below):

$$\text{clamp}(Y) = \begin{cases} Y & \text{if } Y \ge 0.022 \\ Y + (0.022 - Y)^{1.414} & \text{otherwise} \end{cases}$$

$$\text{unclamp}(Y) = \begin{cases} Y & \text{if } Y > 0.022 \\ \big((Y + 0.038739) \cdot 1.946855\big)^{0.283343/1.414} \cdot \tfrac{1}{1.946855} - 0.312866 & \text{otherwise} \end{cases}$$

After completing all the scary calculations, I was ready to translate it all into code. In doing so, I realized that I had only determined the required Y component (in the XYZ color space) of a color with the correct contrast value distance, but not a full color. So, the formula is essentially capable of determining a grayscale color that has the correct contrast distance to the input color – exactly what the existing reverse APCA function can do 😅.

I took another look at Ström’s article and realized that the Y component was actually all I needed to generate the palettes, so I could have just used the function available in the apca-w3 package… If you are considering a similar project, you can save yourself (and your physicist friends) the napkin calculations and either use the existing reverseAPCA() function in the apca-w3 package or my code below.

I still thought it was a good learning experience to reverse it myself, and since apca-w3 is not completely open source (it doesn’t have a standard open source license), I also thought it would be nice to have an implementation of the reverse algorithm with a truly open source license. I’m not sure if what I did is compatible with the APCA trademark license, so I’ll refrain from claiming that my result is APCA-compliant. The code for my inverse perceptual contrast finder, inspired by APCA algorithm principles, is as follows:

/**
 * Constants used in perceptual contrast calculations
 * Inspired by the formula found at https://github.com/Myndex/apca-w3/blob/c012257167d822f91bc417120bdb82e1b854b4a4/src/apca-w3.js#L146
 */

const PERCEPTUAL_CONTRAST_CONSTANTS: {
    BLACK_THRESHOLD: number
    BLACK_CLAMP: number
    OFFSET: number
    SCALE: number
    MAGIC_OFFSET_IN: number
    MAGIC_OFFSET_OUT: number
    MAGIC_FACTOR: number
    MAGIC_EXPONENT: number
    MAGIC_FACTOR_INVERSE: number
} = {
    BLACK_THRESHOLD: 0.022,
    BLACK_CLAMP: 1.414,
    OFFSET: 0.027,
    SCALE: 1.14,
    MAGIC_OFFSET_IN: 0.0387393816571401,
    MAGIC_OFFSET_OUT: 0.312865795870758,
    MAGIC_FACTOR: 1.9468554433171,
    MAGIC_EXPONENT: 0.283343396420869 / 1.414,
    MAGIC_FACTOR_INVERSE: 1 / 1.9468554433171,
}

/**
 * Removes clamping from near-black colors to restore original values
 * Inspired by the formula found at: https://github.com/Myndex/apca-w3/blob/c012257167d822f91bc417120bdb82e1b854b4a4/src/apca-w3.js#L403
 * @param y - The clamped luminance value to be unclamped
 * @returns The unclamped luminance value
 */
function unclampY(y: number): number {
    return y > PERCEPTUAL_CONTRAST_CONSTANTS.BLACK_THRESHOLD
        ? y
        : Math.pow(
              (y + PERCEPTUAL_CONTRAST_CONSTANTS.MAGIC_OFFSET_IN) *
                  PERCEPTUAL_CONTRAST_CONSTANTS.MAGIC_FACTOR,
              PERCEPTUAL_CONTRAST_CONSTANTS.MAGIC_EXPONENT
          ) *
              PERCEPTUAL_CONTRAST_CONSTANTS.MAGIC_FACTOR_INVERSE -
              PERCEPTUAL_CONTRAST_CONSTANTS.MAGIC_OFFSET_OUT
}

/**
 * Applies clamping to near-black colors to prevent contrast calculation issues
 * Inspired by the formula found at: https://github.com/Myndex/apca-w3/blob/c012257167d822f91bc417120bdb82e1b854b4a4/src/apca-w3.js#L381
 * @param y - The luminance value to be clamped
 * @returns The clamped luminance value
 */
function clampY(y: number): number {
    return y >= PERCEPTUAL_CONTRAST_CONSTANTS.BLACK_THRESHOLD
        ? y
        : y +
              Math.pow(
                  PERCEPTUAL_CONTRAST_CONSTANTS.BLACK_THRESHOLD - y,
                  PERCEPTUAL_CONTRAST_CONSTANTS.BLACK_CLAMP
              )
}

/**
 * Reverses perceptual contrast calculations to find a matching luminance
 * Inspired by the formula found at: https://github.com/Myndex/apca-w3/blob/c012257167d822f91bc417120bdb82e1b854b4a4/images/APCAw3_0.1.17_APCA0.0.98G.svg
 * @param contrast - Target contrast value (between 5 and 106.04066)
 * @param y - Known luminance value (between 0 and 1)
 * @param bgIsDarker - Whether the background is darker than the text
 * @param lookingFor - What we're solving for: "txt" (text color) or "bg" (background color)
 * @returns The calculated luminance value, or false if no valid solution exists
 */
export function reversePerceptualContrast(
    contrast: number = 75, // Default contrast of 75
    y: number = 1, // Default luminance of 1
    bgIsDarker: boolean = false, // Default assumes background is lighter
    lookingFor: "txt" | "bg" = "txt" // Default solves for text color
): number | false {
    contrast = Math.abs(contrast)
    let output: number | undefined

    if (!(y > 0 && y <= 1)) {
        console.log("y is not a valid value (y > 0 && y <= 1)")
        return false
    }

    if (!(contrast >= 5 && contrast <= 106.04066)) {
        console.log(
            "contrast is not a valid value (contrast >= 5 && contrast <= 106.04066)"
        )
        return false
    }

    // Apply clamping to input luminance
    y = clampY(y)

    // Calculate output luminance based on what we're looking for and background darkness
    // These branches could be written more DRY, but the explicit if statements
    // make the derivation from the original APCA formula easier to follow.

    if (lookingFor === "txt") {
        if (bgIsDarker) {
            // For light text on dark background
            output =
                (y ** 0.65 -
                    (-contrast / 100 - PERCEPTUAL_CONTRAST_CONSTANTS.OFFSET) *
                        (1 / PERCEPTUAL_CONTRAST_CONSTANTS.SCALE)) **
                (1 / 0.62)
        } else if (!bgIsDarker) {
            // For dark text on light background
            output =
                (y ** 0.56 -
                    (contrast / 100 + PERCEPTUAL_CONTRAST_CONSTANTS.OFFSET) *
                        (1 / PERCEPTUAL_CONTRAST_CONSTANTS.SCALE)) **
                (1 / 0.57)
        }
    } else if (lookingFor === "bg") {
        if (bgIsDarker) {
            // For dark background with light text
            output =
                (y ** 0.62 +
                    (-contrast / 100 - PERCEPTUAL_CONTRAST_CONSTANTS.OFFSET) *
                        (1 / PERCEPTUAL_CONTRAST_CONSTANTS.SCALE)) **
                (1 / 0.65)
        } else if (!bgIsDarker) {
            // For light background with dark text
            output =
                (y ** 0.57 +
                    (contrast / 100 + PERCEPTUAL_CONTRAST_CONSTANTS.OFFSET) *
                        (1 / PERCEPTUAL_CONTRAST_CONSTANTS.SCALE)) **
                (1 / 0.56)
        }
    }

    // Unclamp the output value if valid
    if (output !== undefined && !isNaN(output)) {
        output = unclampY(output)
    }

    // Validate final output
    if (
        output === undefined ||
        isNaN(output) ||
        !(output > 0 && output <= 1)
    ) {
        console.log("A color with the specifications does not exist")
        return false
    } else {
        return output
    }
}

After performing the perceptual contrast inversion, all I had to do was combine my code for reverse perceptual contrast with Ström's code:

import Color from "colorjs.io"

/**
 * Converts OKHSl color to sRGB array
 * @param {OkHSL} hsl - Array containing [hue, saturation, lightness]
 *   hue: number (0-360) - The hue angle in degrees
 *   saturation: number (0-1) - The saturation value
 *   lightness: number (0-1) - The lightness value
 * @returns {[number, number, number]} sRGB array [r, g, b] in 0-255 range
 */
export function okhslToSrgb(
    hsl: [number, number, number],
): [number, number, number] {
    // Create new color in OKHSl space
    let c = new Color("okhsl", hsl)
    // Convert to sRGB color space
    c = c.to("srgb")

    return [c.srgb[0] * 255, c.srgb[1] * 255, c.srgb[2] * 255]
}

/**
 * Converts Y (luminance) value to OKHSL lightness
 * Inspired by the formula found at https://github.com/Myndex/apca-w3/blob/c012257167d822f91bc417120bdb82e1b854b4a4/src/apca-w3.js#L418
 * @param {number} y - Linear luminance value (0-1)
 * @returns {number} OKHSL lightness value (0-1)
 */
export function yToOkhslLightness(y: number): number {
    const srgbComponent = y ** (1 / 2.4)
    const c = new Color("srgb", [srgbComponent, srgbComponent, srgbComponent])
    return c.okhsl[2]
}

/**
 * Color scale object with hex color values keyed by scale number
 */
interface ColorScale {
    [step: number]: [number, number, number]
}

/**
 * Compensates for the Bezold-Brücke effect where colors appear more purplish in shadows
 * and more yellowish in highlights by shifting the hue up to 5 degrees
 * Derived from https://mattstromawn.com/writing/generating-color-palettes/#putting-it-all-together%3A-all-the-code-you-need
 * Copyright (c) 2025 Matthew Ström-Awn
 * Licensed under MIT. See LICENSE file.
 * @param step - Scale step value (0-1000)
 * @param baseHue - Starting hue in degrees (0-360)
 * @returns Adjusted hue value
 * @throws If parameters are invalid
 */
function computeHue(step: number, baseHue: number): number {
    // Normalize step from 0-1000 range to 0-1
    const normalizedStep = step / 1000

    // Validate normalizedStep is between 0 and 1
    if (normalizedStep < 0 || normalizedStep > 1) {
        throw new Error("step must produce a normalized value between 0 and 1")
    }

    // Validate baseHue is between 0 and 360
    if (baseHue < 0 || baseHue > 360) {
        throw new Error("baseHue must be a number between 0 and 360")
    }

    if (baseHue === 0) {
        return baseHue
    }

    return baseHue + 5 * (1 - normalizedStep)
}

/**
 * Creates a parabolic function for chroma/saturation that peaks at middle values
 * This ensures colors are most vibrant in the middle of the scale while being
 * more subtle at the extremes
 * Derived from https://mattstromawn.com/writing/generating-color-palettes/#putting-it-all-together%3A-all-the-code-you-need
 * Copyright (c) 2025 Matthew Ström-Awn
 * Licensed under MIT. See LICENSE file.
 * @param step - Scale step value (0-1000)
 * @param minChroma - Minimum chroma/saturation value (0-1)
 * @param maxChroma - Maximum chroma/saturation value (0-1)
 * @returns Calculated chroma value
 * @throws If parameters are invalid
 */
function computeChroma(
    step: number,
    minChroma: number,
    maxChroma: number,
): number {
    const normalizedStep = step / 1000

    // Validate normalizedStep is between 0 and 1
    if (normalizedStep < 0 || normalizedStep > 1) {
        throw new Error("step must produce a normalized value between 0 and 1")
    }

    // Validate chroma values are between 0 and 1 and properly ordered
    if (minChroma < 0 || minChroma > 1 || maxChroma < 0 || maxChroma > 1) {
        throw new Error("Chroma values must be numbers between 0 and 1")
    }
    if (minChroma > maxChroma) {
        throw new Error("minChroma must be less than or equal to maxChroma")
    }

    const chromaDifference = maxChroma - minChroma
    return (
        -4 * chromaDifference * Math.pow(normalizedStep, 2) +
        4 * chromaDifference * normalizedStep +
        minChroma
    )
}

/**
 * Computes OKHSL lightness from a target contrast step using perceptual contrast
 * Derived from https://mattstromawn.com/writing/generating-color-palettes/#putting-it-all-together%3A-all-the-code-you-need
 * Copyright (c) 2025 Matthew Ström-Awn
 * Licensed under MIT. See LICENSE file.
 * @param step - Scale step value (0-1000)
 * @returns OKHSL lightness value (0-1)
 * @throws If target luminance cannot be calculated
 */
function computeLightness(step: number): number {
    // Clip values below minimum threshold to full lightness (white)
    if (step < 50) {
        return 1
    }

    // Rescale 50-999 to perceptual contrast's 5-106.04066 range
    const perceptualContrast = 5 + ((step - 50) * (106.04066 - 5)) / (1000 - 50)

    const targetLuminance = reversePerceptualContrast(
        perceptualContrast,
        1,
        false,
        "txt",
    )

    if (targetLuminance === false) {
        throw new Error(
            `Problem calculating the target luminance for step ${step}`,
        )
    }

    return yToOkhslLightness(targetLuminance)
}

/**
 * Options for generating a color scale
 */
export interface GenerateColorScaleOptions {
    /** Base hue in degrees (0-360) */
    baseHue: number
    /** Minimum chroma/saturation (0-1) */
    minChroma: number
    /** Maximum chroma/saturation (0-1) */
    maxChroma: number
    /** Array of scale values to generate (integer values between 0-1000) */
    steps: number[]
}

/**
 * Generates a complete color scale with accessible contrast levels
 * @param options - Configuration object for color scale generation
 * @returns Scale object with color srgb values keyed by scale number
 */
export function generateColorScale(
    options: GenerateColorScaleOptions,
): ColorScale {
    const { baseHue, minChroma, maxChroma, steps } = options

    if (baseHue < 0 || baseHue > 360) {
        throw new Error("baseHue must be a number between 0 and 360")
    }

    if (minChroma < 0 || minChroma > 1 || maxChroma < 0 || maxChroma > 1) {
        throw new Error("Chroma values must be numbers between 0 and 1")
    }

    if (minChroma > maxChroma) {
        throw new Error("minChroma must be less than or equal to maxChroma")
    }

    if (
        steps.some((step) => step < 0 || step > 1000 || !Number.isInteger(step))
    ) {
        throw new Error("All steps must be integers between 0 and 1000")
    }

    // Generate the color scale by reducing the steps into an object keyed by step
    return steps.reduce((scale, step) => {
        const h = computeHue(step, baseHue)
        const s = computeChroma(step, minChroma, maxChroma)
        const l = computeLightness(step)

        const srgb = okhslToSrgb([h, s, l])

        return { ...scale, [step]: srgb }
    }, {})
}
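
As a quick usage sketch (the hue and chroma numbers here are illustrative placeholders, not the exact values behind the table below):

const blues = generateColorScale({
    baseHue: 250, // assumed hue, purely for illustration
    minChroma: 0.05, // assumed
    maxChroma: 0.35, // assumed
    steps: [0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000],
})
// blues[0] comes out white and blues[1000] black, matching the endpoints of the table below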

And just like that, we can generate a color palette with predictable, perceptual-contrast-based shades:

Shade  Gray     Blue     Green    Red      Yellow
0      #fff     #fff     #fff     #fff     #fff
10     #e9e9e9  #e4eaf4  #dfeee1  #f4e6e4  #f2e8dc
20     #d7d7d7  #c7d9f5  #a9eab2  #f5ccc7  #f3d1a9
30     #c4c4c4  #a5c6fa  #66e37e  #faaea5  #f7b666
40     #b1b1b1  #81b2fe  #32d25b  #fd8c81  #f09c1b
50     #9c9c9c  #5a9cff  #00bd43  #ff5f58  #d88900
60     #878787  #3083f8  #2ba142  #f32c34  #bb7608
70     #707070  #2a6ecb  #3b8343  #c13938  #9a6317
80     #585858  #2e5892  #38643a  #8c3a37  #754f23
90     #3c3c3c  #2c3d56  #2f422f  #543230  #4b3926
100    #000     #000     #000     #000     #000

You can find the entire code in a GitHub repository. I mentioned that I did all this work in preparation for developing a new color palette for Canonical's design system. But in the end, we decided (for good reasons) to go with the WCAG-based approach, which I will write about in my next blog post. So stay tuned 🙂

10 November, 2025 12:49PM

November 09, 2025

Colin Watson: Free software activity in October 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

OpenSSH upstream released 10.1p1 this month, so I upgraded to that. In the process, I reverted a Debian patch that changed IP quality-of-service defaults; it made sense at the time, but upstream has since reworked that code anyway, so this is a good opportunity to find out whether we still have similar problems. So far I haven’t heard anything bad in this area.

10.1p1 caused a regression in the ssh-agent-filter package’s tests, which I bisected and chased up with upstream.

10.1p1 also had a few other user-visible regressions (#1117574, #1117594, #1117638, #1117720); I upgraded to 10.2p1, which fixed some of these, and contributed some upstream debugging help to clear up the rest. While I was there, I also fixed “ssh-session-cleanup: fails due to wrong $ssh_session_pattern” in our packaging.

Finally, I got all this into trixie-backports, which I intend to keep up to date throughout the forky development cycle.

Python packaging

For some time, ansible-core has had occasional autopkgtest failures that usually go away before anyone has a chance to look into them properly. I ran into these via openssh recently and decided to track them down. It turned out that they only happened when the libpython3.13-stdlib package had different versions in testing and unstable: an integration test setup script made a change that would be reverted if that package was ever upgraded in the testbed, and one of the integration tests accidentally failed to disable system apt sources comprehensively enough while testing the behaviour of the ansible.builtin.apt module. I fixed this in Debian and contributed the relevant part upstream.

We’ve started working on enabling Python 3.14 as a supported version in Debian. I fixed or helped to fix a number of packages for this:

I upgraded these packages to new upstream versions:

I packaged python-blockbuster and python-pytokens, needed as new dependencies of various other packages.

Santiago Vila filed a batch of bugs about packages that fail to build when using the nocheck build profile, and I fixed several of these (generally just a matter of adjusting build-dependencies):

I helped out with the scikit-learn 1.7 transition:

I fixed or helped to fix several other build/test failures:

I fixed some other bugs:

I investigated a python-py build failure, which turned out to have been fixed in Python 3.13.9.

I adopted zope.hookable and zope.location for the Python team.

Following an IRC question, I ported linux-gpib-user to pybuild-plugin-pyproject, and added tests to make sure the resulting binary package layout is correct.

Rust packaging

Another Pydantic upgrade meant I had to upgrade a corresponding stack of Rust packages to new upstream versions:

  • rust-idna
  • rust-jiter
  • rust-pyo3
  • rust-regex
  • rust-regex-automata
  • rust-speedate
  • rust-uuid

I also upgraded rust-archery and rust-rpds.

Other bits and pieces

I fixed a few bugs in other packages I maintain:

I investigated a malware report against tini, which I think we can prove to be a false positive (at least under the reasonable assumption that there isn’t malware hiding in libgcc or glibc). Yay for reproducible builds!

I noticed and fixed a small UI deficiency in debbugs, making the checkboxes under “Misc options” on package pages easier to hit. This is merged but we haven’t yet deployed it.

I noticed and fixed a typo in the Being kind to porters section of the Debian Developer’s Reference.

Code reviews

09 November, 2025 03:33PM

November 07, 2025

Stéphane Graber: Introducing IncusOS!

After over a year of work, I’m very excited to announce the general availability of IncusOS, our own immutable OS image designed from the ground up to run Incus!

IncusOS is designed for the modern world, actively relying on both UEFI Secure Boot and TPM 2.0 for boot security and for full disk encryption. It’s a very locked down environment, both for security and for general reliability. There is no local or remote shell, everything must be done through the (authenticated) Incus API.

Under the hood, it’s built on a minimal Debian 13 base, using the Zabbly builds of the Linux kernel, ZFS, and Incus, providing the latest stable versions of all three. We rely a lot on the systemd tooling to handle image builds (mkosi), application installation (sysext), system updates (sysupdate) and a variety of other things, from network configuration to partitioning.

I recorded a demo video of its installation and basic usage both in a virtual machine and on physical hardware:


Full release announcement: https://discuss.linuxcontainers.org/t/announcing-incusos/25139

07 November, 2025 09:33AM

November 06, 2025

Ubuntu Blog: Web Engineering: Celebrating Our Third Annual Hack Week

The Web Engineering team is thrilled to announce the successful conclusion of our third annual Hack Week! Over the past three years, this initiative has become a cornerstone of our collaborative spirit and commitment to innovation. With 126 significant contributions to date, Hack Week provides a dedicated space for our engineers to tackle challenging problems, refine existing systems, and push the boundaries of what’s possible.

A key goal of these events is being able to talk with confidence about the truly open source nature of our work. We get the opportunity to address issues we’ve identified upstream in projects that we use, to the benefit of ourselves and others. By dedicating time to these fixes, we not only improve the stability and performance of our foundational technologies but also empower our team to gain a deeper understanding of complex systems and our dependency tree. Engaging directly with these challenges lets us experience the difficulties firsthand, fostering a unique learning environment. We then take these invaluable learnings back to our daily projects, where we reflect on the insights gained and implement improvements that benefit all our ongoing work. We are proud of the dedication displayed by everyone involved, and we look forward to continuing this initiative with impactful contributions.

This year we focused on contributing accessibility improvements to Mattermost, the messaging application we use internally for corporate communication. All contributions are listed below:

Murtaza-Ax/Color-Converter
Mattermost #34132
Mattermost #34128
Biome #7749
Mattermost #26961
Mattermost #34141
Mattermost #34142
Psycopg #1184
Upptime #269
HeroUI #5810
HeroUI #5811
Mattermost #34153
Mattermost #34154
TheAlgorithms/JavaScript #1842
adk-web #161
HeroUI #5814
HeroUI #5813
Ghost #25183
HeroUI #5818
HeroUI #5819
Ghost #25195
Upptime #271
Scrabble #3
Mattermost #34196
Countdown #155
Ghost #25197

Open source encourages compatibility with standards, which makes vendor lock-in harder. This is why we love the freedom open source offers. Open source software allows for sharing knowledge, gaining knowledge, and practising. It promotes transparency in data collection and software systems. Freedom, therefore, is the gift that keeps on giving.

Please have a look at our open-source projects and reach out to us via the issues if anything is unclear.

06 November, 2025 11:22AM

hackergotchi for ZEVENET

ZEVENET

Mitigating DDoS and L7 Exhaustion: why one layer is not enough

In security discussions, the term DDoS is often used as if it referred to a single type of threat. In reality, today it covers two very different strategies that share the same goal but not the same execution: volumetric attacks at layers L3/L4 and application exhaustion attacks at layer 7.

Both aim to take a service offline, but they exploit different parts of the infrastructure — and therefore require different mitigation layers.

Two attack families, two impact surfaces

When some vendors claim that “modern DDoS attacks are stealthy and bypass traditional defences”, what they are actually describing is not classic volumetric DDoS, but L7 exhaustion: low-rate traffic, fully valid requests, almost indistinguishable from legitimate clients.

These attacks don’t flood the network — they drain the application from inside.

That doesn’t mean volumetric DDoS has disappeared. It remains cheap to launch, common in the wild, and extremely effective unless it is filtered before the kernel, firewall, or load balancer accepts the connections.

The threat has not changed — the point of mitigation has.

Type                  | Layer   | Objective                                                | How the service breaks
Volumetric DDoS       | L3 / L4 | Saturate bandwidth, connection tables, kernel resources | The infrastructure collapses before the application can respond
Application-layer DoS | L7      | Exhaust CPU, memory, threads, or DB calls                | The service is “up”, but unusable for real users

Or, even more directly:

  • L3/L4 volumetric attacks → try to take down the network before the service responds
  • L7 exhaustion attacks → mimic valid traffic to drain the app’s internal resources

Layered defence: why L3/L4 and L7 do not compete — they complement each other

One of the most common misconceptions is assuming that a single protection layer is enough to stop any kind of attack. In practice, filtering only at layer 4 leaves the application exposed, while filtering only at layer 7 allows the kernel or load balancer to be overwhelmed before the WAF ever sees the request.

An L4 firewall can drop malformed packets or abnormal connection patterns before they consume resources, but it has no context to detect that a perfectly valid HTTP request is trying to exploit an SQLi pattern.

A WAF can detect that behaviour — but only after the connection has already been accepted, a socket has been created, and memory has been allocated.

Attack type                 | Where it must be stopped                              | What is inspected                   | Typical tooling
Volumetric (L3/L4)          | Before accepting the connection (edge / kernel / LB) | Packets, TCP flags, connection rate | SYN flood protection, rate limiting, conntrack offload
Application exhaustion (L7) | Once the TCP session is established                   | HTTP headers, URL patterns, payload | WAF, OWASP rulesets, bot filtering

Effective protection is not about choosing the right layer — it is about dropping as much as possible before the app, and reserving deep inspection only for what deserves to reach it.

What happens when mitigation only works at L7 (and why it fails)

When protection is applied solely at the application layer, the TCP connection has already been accepted before any evaluation occurs. In other words, the system has completed the handshake, allocated a socket, reserved memory and promoted the session to HTTP/S before deciding whether the request should be blocked.

That removes the attacker’s need to generate massive traffic: a few thousand seemingly valid, slow, or incomplete connections are enough to consume server resources without ever saturating the network.

The result is not an immediate outage, but a progressive exhaustion:

    • Load balancer or backend CPU spikes
    • Response times increase exponentially
    • The service is still “up”, but unusable for legitimate users

This is the usual pattern of L7 exhaustion attacks: they don’t bring the network down; they wear the application out from the inside. And it happens for a simple reason: the blocking decision is made too late. First the connection is accepted, then the request is inspected, and only at the end is it decided whether to discard it. By then, the damage is already done.

How SkudoCloud applies two-phase mitigation

Effective protection against DDoS and exhaustion attacks is not about choosing between filtering at L4 or L7, but about enforcing both defenses in the right order. SkudoCloud implements this model natively inside the load-balancing engine itself, without relying on external scrubbing services or additional appliances.

Phase                   | What is mitigated                                     | How SkudoCloud acts                                                                                | Where it happens
1. Early filtering (L4) | TCP floods, anomalous connections, malformed packets  | Session rejected before allocation, per-IP/VIP limits, SYN protection, IP reputation / blocklists | Load balancer kernel
2. Deep inspection (L7) | SQLi, XSS, bots, valid-but-abusive requests           | Advanced WAF + behavioural rules                                                                   | HTTP/S module of the engine

This model ensures that high-volume traffic cannot saturate the system before being analysed, and that low-volume abusive requests cannot hide inside seemingly legitimate sessions. The result is an environment where the network does not collapse under load and the application does not degrade due to resource exhaustion.
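
To make the order of operations concrete, here is a minimal sketch in TypeScript for Node.js (a toy illustration of the two-phase idea, not SkudoCloud code; the names and thresholds are assumptions): it drops excess connections per source IP before any HTTP parsing, and only inspects requests on sessions that survive that first phase. In a real deployment the early phase runs in the kernel before the handshake even completes, which a userspace handler cannot replicate.

import * as http from "node:http"

const WINDOW_MS = 1_000
const MAX_CONN_PER_IP = 20 // assumed threshold, purely illustrative
const hits = new Map<string, { count: number; windowStart: number }>()

const server = http.createServer((req, res) => {
    // Phase 2 (L7 analogue): the session already exists; inspect the request.
    // A lone-quote check is a crude stand-in for a real WAF ruleset.
    if (req.url?.includes("'")) {
        res.writeHead(403).end("blocked")
        return
    }
    res.writeHead(200).end("ok")
})

// Phase 1 (L4 analogue): reject by source rate before any HTTP parsing happens
server.on("connection", (socket) => {
    const ip = socket.remoteAddress ?? "unknown"
    const now = Date.now()
    const entry = hits.get(ip)
    if (!entry || now - entry.windowStart > WINDOW_MS) {
        hits.set(ip, { count: 1, windowStart: now })
    } else if (++entry.count > MAX_CONN_PER_IP) {
        socket.destroy() // dropped before a single request byte is processed
    }
})

server.listen(8080)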

Everything is managed from a single interface, with unified policies, metrics and event logging — without depending on multiple vendors, external mitigation layers or duplicated configurations.

👉 To see how this model works in a real deployment, follow the step-by-step guide: Configure the First SkudoCloud Service

06 November, 2025 11:12AM by Nieves Álvarez


November 05, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Azure VM utils now included in Ubuntu: boosting cloud workloads

Ubuntu images on Microsoft Azure have recently started shipping with the open source package azure-vm-utils included by default. Azure VM utils is a package that provides essential utilities and udev rules to optimize the Linux experience on Azure. This change results in more reliable disks, smoother networking on accelerated setups, and fewer tweaks to get things running. Here’s what you need to know:

What’s changing

  • Smoother storage on modern Azure VMs: Ubuntu now provides consistent device naming across SCSI and NVMe, reducing post-reboot surprises and easing automation.
  • Better handling of accelerated networking: environments using MANA or Mellanox benefit from safer defaults that avoid double-managing passthrough interfaces.
  • Less image customization: the utility and rules that many platform teams previously added now ship in the image, removing one more custom step from your pipelines.

Why it matters

  • Fewer post-boot surprises: predictable device names keep fstab, cloud-init and provisioning scripts stable across VM families and reboots.
  • Smoother NVMe adoption: newer VM families lean NVMe-first for performance; built-in rules make that transition painless while keeping SCSI setups working.
  • Less to maintain: the stock image now handles Azure disk naming and accelerated NICs (MANA/Mellanox), so teams can drop custom udev/Netplan snippets and avoid fstab surprises after reboots.

How to Get It

  • For New VMs: No action is needed. The package is included by default in new Ubuntu images.
  • For Existing VMs: You can install the package directly from the Ubuntu archive, where it’s available for all current LTS and interim releases: sudo apt update && sudo apt install azure-vm-utils

Quick ways to verify

azure-nvme-id --version           # tool present
find /dev/disk/azure -type l      # predictable Azure disk links

05 November, 2025 03:02PM


hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Edge Networking gets smarter: AI and 5G in action

Organizations everywhere are pushing AI and networks closer to the edge. With that expansion comes a challenge: how do you ensure reliable performance, efficiency, and security outside of the data center? Worker safety, healthcare automation, and the success of mobile private networks depend on a robust technology stack that can withstand real-world challenges and still deliver results. Canonical has partnered with Dell Technologies, Intel, Druid, Airspan and Ecrio to publish a new solution brief addressing this question. The brief highlights how a fully integrated, edge-ready platform can meet the growing demand for intelligent, secure, and real-time computing at the edge. 

The brief showcases how to build a strong foundation for edge AI and networking by using a Dell PowerEdge XR8000 ruggedized edge network+compute platform consisting of two server sleds powered by Intel Xeon Scalable processors. Both sleds run Canonical’s software infrastructure stack, which combines Ubuntu, MicroCloud, and Canonical Kubernetes.

On the first sled, MicroCloud hosts two VMs: Airspan Control Platform (ACP) manages the 5G radio units, and Druid Raemis provides the cloud-native 5G core orchestrated by Canonical Kubernetes. The second sled hosts Ecrio’s iota-e platform, also managed by Canonical Kubernetes, which enables AI-powered real-time image recognition, voice, video, and messaging services.

These capabilities support critical business processes such as worker coordination in industrial settings, emergency response in healthcare, and secure team communications in remote or hazardous environments.

Download the solution brief to learn how this integrated platform supports advanced use cases, including AI-driven safety monitoring, smart factory operations, and 5G connectivity at the edge.

In the solution brief, you’ll discover how to:

  • Deploy AI and event detection workloads on optimized, securely designed infrastructure
  • Operate private 5G and RAN control software on edge-virtualized environments
  • Streamline orchestration and lifecycle management with Canonical Kubernetes and MicroCloud
  • Detect safety and operational risks in real time using integrated AI inference

Download the full solution brief

For more information on how Canonical supports your edge and AI journey, visit our related content:

  • Open source AI for the enterprise
    Discover how Canonical enables AI workloads from cloud to edge with tools for model training, trusted deployment, and lifecycle management. This webpage outlines Canonical’s full AI stack, from Ubuntu-optimized hardware acceleration to MLOps best practices, with links to blogs, whitepapers, and deployment guides.
  • Canonical Telco solutions
    Learn how Canonical helps telecom operators modernize their infrastructure using open source technologies. This hub covers solutions for 5G core networks and Radio Access Networks (RAN) built on Ubuntu, Canonical Kubernetes, OpenStack, MAAS and Juju. You’ll find case studies and insights into telco-grade performance and security.

05 November, 2025 07:11AM

hackergotchi for Qubes

Qubes

Fedora 41 approaching end of life

Fedora 41 is currently scheduled to reach end of life (EOL) on 2025-11-19 (approximately two weeks from the date of this announcement). Please upgrade all of your Fedora templates and standalones by that date. For more information, see Upgrading to avoid EOL.

There are two ways to upgrade a template to a new Fedora release:

Please note that no user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).

05 November, 2025 12:00AM

November 04, 2025

hackergotchi for VyOS

VyOS

VyOS Project October 2025 Update

Hello, Community! The October update is here, and it's dominated by bug fixes as we prepare to release the next VyOS Stream image on the way to the future VyOS 1.5 and work on the new 1.4.4 maintenance release. There are also a few useful features, including support for DHCP options 82 (relay agent information) and 26 (interface MTU), container health checks, and more.

04 November, 2025 01:04PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for October 2025

I. October Community Data Overview
II. Community Products

1. deepin 25 Official Release Update: File Management and System Experience Upgraded Again

In October, the deepin 25 official release received the 25.0.9 version update, bringing multiple optimizations focused on file management efficiency and system interaction details. File Manager efficiency innovations:

  • Supports grouping display by time, size, type, and name, making file finding clearer.
  • Added a pin tab feature for one-click access to frequently used directories.
  • Dragging files to the window edge triggers automatic scrolling, making long-distance operations more convenient.
  • Automatically creates a new tab in the current window when opening a ...Read more

04 November, 2025 09:41AM by xiaofei

November 03, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 916

Welcome to the Ubuntu Weekly Newsletter, Issue 916 for the week of October 26 – November 1, 2025. The full version of this issue is available here.

In this issue we cover:

  • Upgrades to 25.10 (Questing Quokka) are now live!
  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Ubuntu Project docs: That’s a wrap!
  • Introducing architecture variants: amd64v3 now available in Ubuntu 25.10
  • [Ubuntu Studio] Upgrading from 25.04 to 25.10
  • Other Community News
  • What Say You
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 25.04 and 25.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • irihapeti
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


03 November, 2025 11:53PM

Stéphane Graber: Announcing Incus 6.18

The Incus team is pleased to announce the release of Incus 6.18!

This is a reasonably busy release, with quite a few smaller additions in every corner of Incus, so there should be something for everyone!

The highlights for this release are:

  • Systemd credentials support
  • File operations on storage volumes
  • Exporting of ISO volumes
  • BPF token delegation
  • macOS support in the Incus VM agent
  • VirtIO sound cards for VMs
  • Support for temporarily detaching USB devices
  • Configurable DNS mode for OVN networks
  • Configurable MAC address patterns for networks and instances
  • Extended IncusOS management CLI

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

03 November, 2025 07:21PM

November 02, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/10

The 10th monthly Sparky project and donation report of 2025:
– Linux kernel updated up to 6.17.6, 6.12.56-LTS, 6.6.115-LTS
– Sparky 8.1-RC1 ARM64 for Raspberry Pi released
– added to repos: Mousam

Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in November too, please.

Source

02 November, 2025 10:26AM by pavroo

October 31, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: A New Chapter: Career Transition Update

I’m pleased to share that my career transition has been successful! I’ve joined our local county assessor’s office, beginning a new path in property assessment for taxation and valuation. While the compensation is modest, it offers the stability I was looking for.

My new schedule consists of four 10-hour days with an hour commute each way, which means Monday through Thursday will be largely devoted to work and travel. However, I’ll have Fridays available for open source contributions once I’ve completed my existing website maintenance commitments.

Open Source Priorities

Going forward, my contribution focus will be:

  1. Ubuntu Community Council
  2. Kubuntu/Debian
  3. Snap packages (as time permits)

Regarding the snap packages: my earlier hope of transitioning them to Carl hasn’t worked out as planned. He’s taken on maintaining KDE Neon single-handedly, and understandably, adding snap maintenance on top of that proved unfeasible. I’ll do what I can to help when time allows.

Looking for Contributors

If you’re interested in contributing to Kubuntu or helping with snap packages, I’d love to hear from you! Feel free to reach out—community involvement is what makes these projects thrive.

Thanks for your patience and understanding as I navigate this transition.

31 October, 2025 03:38PM

Podcast Ubuntu Portugal: E365 Encontrões Cimeiros

There are too many things happening, all at the same time; it's chaos! Back from the Ubuntu Summit in London, Lisbon and Porto, Diogo highlights the most important moments and clogs up the Internet Archive; Miguel continues his mission of freeing people from Windows; technical problems crash down on us mid-broadcast; Canonical has a new Academy for certifications; we reviewed what's new in the latest version of Ubuntu Touch; we call for volunteers to give mouth-to-mouth resuscitation to Unity; someone invented a Recall for Linux; and, like Oppenheimer, we watch from afar as Framework sets the circus tent on fire with Omarchy and Hyprland. And most importantly: there's a new Super Tux Kart.

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality jingles were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. The sound effects have the following credits: [Short Elevator Music Loop by BlondPanda] (https://freesound.org/s/659889/). License: Creative Commons 0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura, an artist, illustrator and comics author. You can get to know Shizamura better in Ciberlândia and on her website.

31 October, 2025 12:00AM