(中文) UOS AI 2.12 released! Automatic MCP service invocation, and another upgrade for the AI Companion
27 November, 2025 08:38AM by xiaofei
This week and in the weeks ahead, we're betting on Ubuntu Touch and bringing Free Software on phones to the masses! Miguel solved problems with the community's help, Diogo is off to Porto to evangelize full-throttle with Ruben Carneiro, and the sun will shine for us all. Canonical promises 15 years of support; Xubuntu got hacked because it doesn't use Hugo (womp, womp); Miguel now says "mano" and "bro" in every sentence because of printers and is going to convert to Debian with Plasma and MY GOD, THE STEAM MACHINES ARE BEAUTIFUL!! FUUUUYYYYYOOOOH!!!
You know the drill: listen, subscribe and share!
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality segment jingles were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. The sound effects are credited as follows: [patrons laughing.mp3 by pbrproductions] (https://freesound.org/s/418831/) – License: Attribution 3.0. Contest: [01 WINNER.mp3 by jordanielmills] – (https://freesound.org/s/167535/) – License: Creative Commons 0. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator and comics author. You can get to know Shizamura better on Ciberlândia and on her website.
Community Edition is the open source load balancer from SKUDONET, designed for engineers and administrators who need traffic control without relying on costly or complex solutions.
Built on Debian 12.8, it provides a stable and secure operating system with performance well above what is typical for free software.
It is 100% free, fully open source, and compatible with Linux and Windows server environments.
For technical users looking for an open source load balancer with a Web GUI, easy to deploy and free of extra dependencies, this edition is an ideal starting point.
Most open source load balancers require manual configuration, editing text files, or navigating a steep learning curve. This is where SKUDONET Community Edition stands out.
The software includes a full Web GUI that covers configuration, monitoring, and day-to-day management.
This makes it highly accessible—even for small teams that need immediate results without spending hours reading advanced documentation.
From a technical perspective, SKUDONET CE includes features typically found only in commercial products:
Handles large volumes of TCP/UDP connections with minimal overhead.
Designed for web applications, APIs, and HTTP/HTTPS services.
Enables SSL termination directly in the load balancer.
Ensures traffic is sent only to available backend servers.
A visual interface for managing all components without CLI.
Full automation for CI/CD pipelines, hybrid infrastructures, or external integrations.
Suitable for modern and distributed network architectures.
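The health-check idea lends itself to a quick illustration. As a generic sketch of the concept (not SKUDONET's actual implementation), the simplest probe is a TCP connection attempt that filters unreachable backends out of the pool:

```python
import socket

def backend_is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the backend succeeds.

    This is the simplest form of health check: the load balancer
    periodically probes each backend and only forwards traffic to
    those that accept connections.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_backends(backends):
    """Filter a backend pool down to its reachable members."""
    return [(h, p) for (h, p) in backends if backend_is_up(h, p)]
```

Production checks layer intervals, retries, and protocol-aware probes (e.g. expecting an HTTP 200) on top of this basic reachability test.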
Beyond traffic distribution, SKUDONET Community Edition also includes fundamental security mechanisms.
It is not a deep-inspection system like the Enterprise Edition, but it provides an initial layer of protection useful for development environments and small deployments. And if your infrastructure later requires deeper traffic inspection, advanced clustering, or professional support, you can migrate to SKUDONET Enterprise Edition, designed for critical environments.
SKUDONET Community Edition is distributed for free through SourceForge, where you can download the ISO or installation packages for your test or development environment.
26 November, 2025 03:05PM by Nieves Álvarez
26 November, 2025 12:30PM by t.lamprecht
26 November, 2025 02:12AM by xiaofei
25 November, 2025 07:40AM by xiaofei
The collaboration makes it easy to boot directly into Ubuntu from AMI’s UEFI firmware solutions
Nuremberg, Germany, November 24, 2025 – Today, Canonical, the publisher of Ubuntu, announced a partnership with AMI, a provider of Unified Extensible Firmware Interface (UEFI) solutions. The partnership will enable users of AMI’s Aptio® V UEFI Firmware to netboot directly into Ubuntu by simply selecting Ubuntu Cloud Installation in the boot menu.
This new native boot functionality makes it easy and convenient to use Ubuntu, and eliminates the need for flashing images or using additional media or external devices. A simple Ethernet connection is enough to install and launch Ubuntu.

Alexander Lehmann (Sales Director – IoT, Canonical) and B. Parthiban (General Manager, Boot Firmware Group at AMI) are excited to provide users with the best out-of-the-box experience for Ubuntu.
“At AMI, we value partnerships that strengthen the ecosystem and deliver trusted solutions. Canonical’s widely adopted, community-supported platform is recognized for its stability and reliability, making this collaboration a natural fit,” commented B. Parthiban, General Manager, Boot Firmware Group at AMI. “Together, we’re enabling secure, high-performance experiences for customers everywhere.”
“Our collaboration with AMI furthers our commitment to deliver the best Ubuntu experience right out of the box. It’s now even easier to install Ubuntu,” said Alexander Lehmann, Sales Director – IoT, at Canonical.
The collaboration between Canonical and AMI kicks off at SPS – the Smart Production Solutions summit – in Nuremberg from November 25 to 27, 2025.
To find out more about Ubuntu, visit Canonical’s booth in hall 6, number 112 and AMI’s booth in hall 6, number 223.
* * *
About Canonical
Canonical, the publisher of Ubuntu Pro, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/
About AMI
AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world’s compute platforms from on-premises to the cloud to the edge. AMI’s industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. AMI is a registered trademark of AMI US Holdings, Inc. Aptio is a registered trademark of AMI in the US and/or elsewhere.
Open source software is known for its ability to lower IT costs. But in 2025, affordability is only part of the story. A new Linux Foundation report, The strategic evolution of open source, reveals that open source has evolved from a tactical cost-saving measure to a mission-critical infrastructure supporting enterprise-grade investments, and delivering stronger business outcomes as a result.
This transformation is supported by academic research estimating that, without open source, companies would pay roughly 3.5 times more to build the software running their businesses – an $8.8 trillion increase.[1]
The 2025 World of Open Source Survey by the Linux Foundation reveals that open source is deeply embedded across enterprise technology stacks, making it a foundation for global IT operations. In fact, over 55% of analyzed tech stacks used a Linux-based operating system, and around half of all analyzed cloud, container, and DevOps technologies have Linux at their core.
The survey illuminates the many great reasons businesses are choosing open source: improved productivity, reduced vendor lock-in, and, unsurprisingly, lower total cost of ownership (TCO). Nearly half of organizations (46%) report an increase in business value from open source compared to last year, with 83% considering it valuable for their future. According to the World of Open Source Survey, 58% of organizations reported lower software ownership costs, and 63% cited higher productivity as a direct benefit of adopting open source. In addition, 62% reported reduced vendor lock-in and 75% judged their software quality to be higher thanks to OSS. Overall, 56% said the benefits of OSS exceeded the costs.[3] A Gartner study echoes these findings, showing that cost control and application development flexibility remain the top drivers of open source adoption.[2]
And it’s not just about the costs: organizations that invest strategically in open source are 20% more likely to perceive a competitive advantage. Nearly 80% say open source makes their organization a better workplace, and 74% say it improves their ability to attract technical talent.
One respondent put it this way: “Open source is not supplementary tooling but an ecosystem of core infrastructure dependencies.” This captures the shift perfectly: cost savings may start the conversation, but reliability, flexibility, and long-term value now drive adoption.
The same characteristics that make open source adaptable also make it economical. Systems based on open source tend to have lower maintenance overhead and longer life cycles, advantages that compound fast. That’s why enterprises see real savings, not just from shifting license costs to labor, but through genuine efficiency gains across teams.
Here’s a real-world case study of that in action: Greek telecom leader Nova leveraged Canonical’s planning and open pricing to control its CAPEX and OPEX, benefiting from predictable costs and freedom from management software licensing fees. Support from Canonical paid for “real expertise that enriches our team, rather than paying for access.”[3]
Open source is also the backbone of AI, making it easier to adopt this increasingly must-have technology into business operations. McKinsey research highlights how open source frameworks accelerate AI adoption, enable faster product development, and catalyze ecosystem innovation, amplifying the total value beyond mere cost savings.[4] In fact, the LF’s survey found that AI is the technology that benefits the most from being open source, according to 38% of respondents, and research from the Microsoft AI cloud Partners team showed that Linux environments such as Ubuntu deploy 63% faster with up to 306% ROI over three years.[5]
The data is clear: open source software lowers IT costs, but cost benefits only reach their full potential when paired with enterprise-ready support. For technical audiences, this isn’t about “just” having a backstop: it’s about operational excellence, security, and resilience. When issues do arise, they must be addressed quickly and precisely.
The survey shows 71% of organizations expect response times under 12 hours for critical OSS production issues, marking a shift from traditional community support to commercial-grade service-level agreements. In financial services and manufacturing, over 90% consider paid OSS support essential. This need for enterprise-grade support peaks in mission-critical workloads (54%), systems handling sensitive data (43%), and regulated sectors (38%).
There’s a perception that support is ‘too expensive’; on the contrary, paid commercial support does not diminish open source’s cost benefits, it enhances them. Just as OSS adoption saves on costs and licences, robust support services protect organizations against the potentially disastrous costs of downtime, compliance failures, or data breaches.
Canonical’s own experience confirms that long-term OSS support is an increasingly strategic investment, especially in markets with high regulatory demands and cloud migration complexities.[6]
Take, for example, the European Space Agency (ESA), which depends on Canonical’s distributions of Kubeflow and Spark running on Kubernetes for its mission operations. ESA highlights that Canonical’s support lets them “sleep soundly,” focusing on space missions while trusting infrastructure experts for uptime and reliability.[3]
The biggest IT cost benefits of open source software come when free software innovation is combined with investments in professional support. After all, these low-cost (or sometimes free) tools are highly accessible and often intuitive to build with, but they can take a lot of time, effort, and specialized skills to maintain and secure in the long term.
Canonical takes that time-consuming effort away from developers and lets them focus on building, through Ubuntu Pro + Support, our comprehensive security maintenance and support service.
Ubuntu Pro + Support gives users a wide range of benefits, including a decade or more of security maintenance covering packages in both the Ubuntu main and universe repositories, kernel Livepatch for rebootless security patching, compliance and hardening tooling such as FIPS and CIS profiles, and direct access to Canonical’s support engineers under defined service-level agreements.
Ubuntu Pro extends cost benefits beyond licensing into comprehensive lifecycle management, turning open source affordability into sustained business value.
In conclusion, the business benefits of open source are clear to see, and widely reflected in the business landscape, where record numbers of organizations and tech stacks have open source as a core part of their mission-critical systems. The permissive licences, lack of vendor lock-in, and flexibility of open source make it a clear cost optimizer; but the most significant IT cost savings emerge when organizations combine free software innovation with enterprise-grade support, governance, and active engagement. Those who treat open source as core infrastructure aren’t just saving money: they’re building competitive, secure, and innovative foundations for growth.
$8.8 trillion – that’s what open source is worth to the global economy. If you’re not building on it, you’re paying for it somewhere else. The organizations leading in innovation, efficiency, and resilience already know: open source is the foundation of competitive advantage.
24 November, 2025 09:18AM by xiaofei

What if you could work on real-world projects, shape cutting-edge technology, collaborate with developers across the world, make a meaningful impact with your design skills, and grow your portfolio… all without applying for an internship or waiting for graduation?
That’s what we aim to do with open design: an opportunity for universities and students of any design discipline.
Before we go further, let’s talk about what open design is. Many open source tools are built by developers, for developers, without design in mind. With open source software powering 90% of the digital world, that gap leaves everyday users feeling overwhelmed or left out. Open design aims to bridge that gap.
We aim to introduce human-centred thinking into open source development, enhancing these tools to be more intuitive, inclusive, and user-friendly. Most open source projects focus on code contributions, neglecting design contributions. That leaves a vast number of projects without a design system, accessibility audits, or onboarding documentation. That’s where designers come in, helping shape better user experiences and more welcoming communities.
Open design is about more than just aesthetics. Open design helps to make technology work for people; that’s exactly what open source needs. Learn more about open design on our webpage.
We want to raise awareness of these projects and the problems that currently exist, show how we can fix them together, and encourage universities and students to become advocates of open design.
We want universities to connect their students to real-world, meaningful design opportunities in a field that is currently lacking the creativity of designers. Our goal is to help and motivate students to bring their design skills into open source projects and become advocates, to make open design accessible, practical, and empowering!
We want to help universities give their students access to these opportunities in practice.
We have provided universities with talks and project briefs, enabling them to prepare students to utilise their expertise and design a brighter future for open source. If you’re a department leader, instructor, or coordinator, exploring open source and open design will help you to give your students unique access to industry-aligned experiences, while embedding values of collaboration, open contribution, and inclusive design.
If you’re a student in UX, UI, interaction, service, visual, HCI design, or any other field with design influence, you’ve been told how important it is to build your portfolio, gain hands-on experience, and collaborate with cross-functional teams. Open design is your opportunity to do so.
The best part is, you don’t have to write a single line of code to make a difference! Open source projects are actively looking for design contributions of every kind.
If you’re in a design course, you already have, or are developing, the skills that open-source projects need.
Open design is an opportunity to develop by collaborating across disciplines, navigating ambiguity, and advocating for users: skills employers value. With open design, you’ll gain confidence in presenting ideas, working with international teams, and handling feedback in a real-world setting, growing in ways that classroom projects and internships often don’t offer.
If you’re aiming for a tech-focused design career, open design is one of the most impactful and distinctive ways to stand out!
Getting started is easier than you think, even if GitHub looks scary at first. Here’s how:
We’ve made a video guide to understanding GitHub, and curated a list of other videos to get to grips with GitHub.
Next, visit contribute.design: it’s like a job board for design contributions. These projects are waiting for you.
Most projects on contribute.design list what they’re looking for in a .design file or DESIGN.md guidelines.
Navigate to the Issues tab of the project repo, where you can filter for issues labelled for design. You can also use this tab to propose any issues you discover in the project.
Start adding your ideas, questions, and solutions to issues. You’ll be collaborating, communicating, and making meaningful contributions.
You can explore more projects through the GitHub Explore page, but not every project will have a design process in place; that’s where your skills are especially valuable. If you don’t see design issues, treat the project as a blank canvas. Suggest checklists, organise a design system, or improve documentation. The power is in your hands!
Reach out to maintainers, join community discussions, and don’t hesitate to introduce design-focused thinking. Your initiative can spark meaningful change and help open source become more user-friendly, one project at a time.
View every project as an opportunity; you don’t need an invitation to contribute, just curiosity, creativity, and the willingness to collaborate.
We’re looking for universities and departments interested in introducing open design to their students. Whether that’s through a talk, module project briefs, or anything else you’d like to see, we’re excited to find ways to work together and bring open design to campus.
Are you a program director, a design department, a student group, or an interested student? Let’s talk!
Reach out at opendesign@canonical.com
21 November, 2025 08:33AM by xiaofei
Enhanced Android device simulation, smarter diagnostics, and OIDC-enforced authentication
The Anbox Cloud team has been working around the clock to release Anbox Cloud 1.28.0! We’re very proud of this release that adds robust authentication, improved diagnostic tools, and expanded simulation options, making Anbox Cloud even more secure, flexible, and developer-friendly for running large-scale Android workloads.
Let’s go over the most significant changes in this new version.
Our OpenID Connect (OIDC)-based authentication and authorization framework is now stable with Anbox Cloud 1.28.0. This new framework provides a standardized approach for controlling access across web and command-line clients. Operators can now assign permissions through entitlements with fine-grained control, define authorization groups, and create and manage identities.
The new guides that ship with this release thoroughly cover configuring user permissions, understanding identities and groups, and browsing the full list of available entitlements. This is a significant step toward a more uniform, standards-based access model for all Anbox Cloud deployments.
This is one of our most exciting new features: developers testing telephony-enabled applications in Anbox Cloud can now simulate incoming SMS messages using the Anbox runtime HTTP API.
This new functionality allows messages to trigger notifications the same way they would on a physical device, generating more realistic end-to-end scenarios. A new how-to guide in our documentation provides detailed instructions on how to enable and use this feature.
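As a sketch of what such a test script could look like, the snippet below builds the request for a simulated incoming SMS. The endpoint path, JSON field names, and runtime address are hypothetical placeholders, not the documented Anbox runtime API; consult the how-to guide for the real names:

```python
import json

# NOTE: the endpoint path and JSON field names are hypothetical
# placeholders -- see the Anbox Cloud how-to guide for the real API.
ANBOX_SMS_PATH = "/1.0/sms"

def build_sms_request(base_url: str, sender: str, text: str):
    """Build the URL and JSON body for simulating an incoming SMS."""
    url = base_url.rstrip("/") + ANBOX_SMS_PATH
    body = json.dumps({"number": sender, "text": text})
    return url, body

url, body = build_sms_request("http://127.0.0.1:8888", "+15550100",
                              "Your 2FA code is 123456")
# Sending it with the standard library would look like:
#   import urllib.request
#   req = urllib.request.Request(url, body.encode(),
#       headers={"Content-Type": "application/json"}, method="POST")
#   urllib.request.urlopen(req)
```

Driving the API this way lets CI jobs assert that the app under test raises the right notification when the message lands, end to end.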
Accidents happen, especially in production environments. To reduce operational risk, this release introduces the ability to protect instances from accidental deletion. This option can be enabled directly in the dashboard, either when creating a new instance or later from the Instance details page, under the Security section.
Once this protection option is turned on, the instance cannot be deleted, even during bulk delete operations, until the configuration is reset. This simple safeguard helps operators preserve important data and prevents costly mistakes in busy environments.
Working with ADB (the Android Debug Bridge) has also become more flexible. Anbox Cloud now allows up to five ADB shares to be managed directly from the dashboard. For those who prefer the command line, the new amc connect command provides an alternative to the existing anbox-connect tool. Together, these improvements make it easier for developers to manage and maintain multiple debugging or testing sessions at once.
With version 1.28.0, we’re introducing a new diagnostic facility in the dashboard. This tool is designed to simplify troubleshooting for both the instances and the streaming sessions themselves.
This feature helps collect relevant diagnostic data automatically, thereby reducing the work needed to identify and resolve issues. It also makes collaboration with our Canonical support teams more efficient, as users can now provide consistent and accurate diagnostic information in a structured, standard format.
Here’s another hotly anticipated feature: the Anbox Streaming SDK gains expanded sensor support in this release. Our SDK now includes gyroscope, accelerometer and orientation sensors, allowing developers to test applications more interactively.
Sensor support is disabled by default but can be easily enabled in the streaming client configuration. This addition opens up new possibilities for interactive use cases, such as gaming.
We think that Anbox Cloud 1.28.0 is our best release to date, and we are pleased to keep providing a feature-rich, scalable, and secure solution for managing Android workloads at scale.
This latest version makes it easier than ever for developers and operators to create and test Android apps by introducing more precise device simulation, improved troubleshooting tools, and stricter access controls, as we’ve explained above.
Try it now and stay tuned for further developments in our upcoming releases. For detailed instructions on how to upgrade your existing deployment, please refer to the official documentation.
Official documentation
Anbox Cloud Appliance
Learn more about Anbox Cloud or contact our team to discuss your use case
Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.
Well, you are right, you can’t. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.
But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc’s syscall – well, at least sometimes. Sometimes syscalls just bypass libc.
The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!
$ faketime '2008-12-24 08:15:42' qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...
With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.
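To make the two entry paths concrete, here is a small Python/ctypes sketch (Linux-only; the hard-coded syscall numbers are assumptions checked only for x86-64 and aarch64). The first call goes through the libc wrapper that an LD_PRELOAD library can override; the second goes through the generic syscall(2) entry, which models the raw path a static binary takes:

```python
import ctypes
import ctypes.util
import platform

# Linux-centric sketch; syscall numbers below cover only x86-64/aarch64.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None, use_errno=True)

class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

CLOCK_REALTIME = 0
ts = Timespec()

# Path 1: the libc wrapper. The symbol is resolved by the dynamic linker,
# so an LD_PRELOAD library (libfaketime, libfirebuild.so, ...) can
# interpose it.
libc.clock_gettime(CLOCK_REALTIME, ctypes.byref(ts))
via_libc = ts.tv_sec

# Path 2: the generic syscall(2) entry point. Syscall numbers are
# per-architecture, and a static binary trapping into the kernel like
# this is invisible to any clock_gettime() interposer.
nr = {"x86_64": 228, "aarch64": 113}.get(platform.machine())
via_syscall = via_libc
if nr is not None:
    libc.syscall(nr, CLOCK_REALTIME, ctypes.byref(ts))
    via_syscall = ts.tv_sec

# Both paths read the same kernel clock -- but only path 1 can be faked.
assert abs(via_libc - via_syscall) <= 1
```

Run under faketime, only the first path would report the fake date; the raw path would still see the real clock, which is exactly the gap the QEMU patch closes.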
There is one more problem though: why would the static binaries deep inside a build be run by QEMU at all? Firebuild also intercepts the `exec()` calls, and now it rewrites them on the fly whenever the executed binary would be statically linked!
$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
"[FBBCOMM_TAG]": "exec",
"file": "test_static",
"// fd": null,
"// dirfd": null,
"arg": [
"./test_static"
],
"env": [
"SHELL=/bin/bash",
...
"FB_SOCKET=/tmp/firebuild.cpMn75/socket",
"_=./test_static"
],
"with_p": false,
"// path": null,
"utime_u": 0,
"stime_u": 1017
}
FIREBUILD: -> proc_ic_msg() (message_processor.cc:782) proc={ExecedProcess 161077.1, running, "bash -c
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD: -> send_fbb() (utils.cc:292) conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
"[FBBCOMM_TAG]": "rewritten_args",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn() (firebuild.cc:139) listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
"[FBBCOMM_TAG]": "scproc_query",
"pid": 161077,
"ppid": 161073,
"cwd": "/home/rbalint/projects/firebuild/test",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"env_var": [
"CCACHE_DISABLE=1",
...
"SHELL=/bin/bash",
"SHLVL=0",
"_=./test_static"
],
"umask": "0002",
"jobserver_fds": [],
"// jobserver_fifo": null,
"executable": "/usr/bin/qemu-user-interposable",
"// executed_path": null,
"// original_executed_path": null,
"libs": [
"/lib/x86_64-linux-gnu/libatomic.so.1",
"/lib/x86_64-linux-gnu/libc.so.6",
"/lib/x86_64-linux-gnu/libglib-2.0.so.0",
"/lib/x86_64-linux-gnu/libm.so.6",
"/lib/x86_64-linux-gnu/libpcre2-8.so.0",
"/lib64/ld-linux-x86-64.so.2"
],
"version": "0.8.5.1"
}
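The rewrite decision is easy to sketch. The standard trick for telling static from dynamic ELF binaries is to look for a PT_INTERP program header, which names the dynamic linker and is absent from fully static executables. The Python below is a simplification of the idea, not Firebuild's actual implementation; the wrapper path and the -libc-syscalls flag are taken from the log above:

```python
import struct

PT_INTERP = 3
QEMU_WRAPPER = "/usr/bin/qemu-user-interposable"  # as seen in the log above

def is_static_elf(data: bytes) -> bool:
    """True if `data` is a 64-bit little-endian ELF with no PT_INTERP header.

    Dynamic executables carry a PT_INTERP segment naming their dynamic
    linker (e.g. /lib64/ld-linux-x86-64.so.2); static ones do not.
    Other inputs (scripts, 32-bit or big-endian ELF) are not handled.
    """
    if len(data) < 64 or data[:6] != b"\x7fELF\x02\x01":
        return False
    e_phoff, = struct.unpack_from("<Q", data, 0x20)      # program header table offset
    e_phentsize, = struct.unpack_from("<H", data, 0x36)  # size of one entry
    e_phnum, = struct.unpack_from("<H", data, 0x38)      # number of entries
    for i in range(e_phnum):
        p_type, = struct.unpack_from("<I", data, e_phoff + i * e_phentsize)
        if p_type == PT_INTERP:
            return False
    return True

def rewrite_exec(argv: list, image: bytes) -> list:
    """Prepend the QEMU wrapper when the exec'd image is statically linked."""
    if is_static_elf(image):
        return [QEMU_WRAPPER, "-libc-syscalls"] + argv
    return argv
```

Firebuild makes this decision at `exec()` interception time, so the wrapped process connects back to the supervisor exactly like a dynamically linked one.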
The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit — not just Firebuild.
For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously “invisible” steps in your builds? All now fair game for caching.
Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you’re using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.
Static binaries, welcome to the party!
20 November, 2025 08:45AM by Joseph Lee
19 November, 2025 12:52PM by t.lamprecht
A new Linux Foundation report reveals how organizations worldwide are adopting, using, and perceiving open source software.
The Linux Foundation’s latest report, The state of global open source, has just been released in collaboration with Canonical. It follows the Linux Foundation’s European spotlight report from earlier this year and confirms that many of the trends unveiled there hold on a global scale. In particular, it confirms the role of open source software as the foundation of business-critical systems worldwide and indicates a continued increase in adoption. However, organizations continue to lack the governance, security testing, and strategic maturity required to manage open source strategically and securely.
The report suggests that most organizations expect enterprise-grade performance from open source software, but under-invest in the required governance frameworks, security practices, and community engagement.
According to the report, the trend of increasing open source adoption in the enterprise is set to continue, as 83% of enterprises consider open source software adoption valuable to their future. Likewise, the report reveals the centrality of open source software to the modern enterprise. Globally, enterprises have adopted open source software throughout their technical stacks: 55% have adopted open source operating systems, whilst 49% have adopted open source cloud and container technologies, and 46% open source web and application development.
The widespread confidence that open source will play a pivotal role in many organizations’ futures is closely connected to a growing understanding of the benefits of open source software adoption.
This report confirms a shift in enterprises’ strategic mindset around open source: 82% of respondents considered open source as an asset that enables innovation. Historically, open source software was often reserved for specific projects or use cases, like setting up web servers – with wider organizational use being viewed with some scepticism.
Open source is now a “must-have.” Why is this the case? Respondents pointed to concrete, measurable benefits.

Compared with 2024, 46% of organizations reported increased business value from open source over the past year. The growing interest in and use of open source technologies is particularly clear for certain technologies, like AI.
The growing value of open source can partly be attributed to the influence of AI. Since 2024, adoption of open source AI and machine learning (ML) applications has risen from 35% to 40% – a rise of five percentage points. Globally, AI and ML were perceived to be the technology most benefiting from being open source. Code visibility ensures organizations can more easily audit their AI systems, which makes compliance simpler, provides more transparency into how the AI model functions, and enables companies to run the AI on their own infrastructure – ensuring sensitive data never leaves the organization’s control.
With growing adoption of AI and ML come new cybersecurity risks and requirements. However, the report indicates that organizations currently lack mature governance structures for their open source estates, creating additional complications to adopting AI and ML securely.
Despite increasing adoption of open source technologies, many organizations still lack a mature governance strategy for their open source software.
The number of organizations that have defined a clear open source strategy has grown by just 2 percentage points in the last year, to a total of 34%. That means that nearly two-thirds of organizations instead rely on informal governance of their open source estates, primarily due to budget constraints, shifting priorities and new strategic requirements. For example, when evaluating open source components for adoption:
With less than half of organizations taking these formal steps before adoption, the report indicates that this “creates significant risk exposure and limits organizations’ ability to capture the full strategic value of open source participation” – a concern that organizations must take seriously.
Similarly, organizations demonstrate a lack of consensus around which security features and assurances matter to them when adopting open source components, with no single certification or assurance mechanism being adopted for more than a quarter of open source solutions. More than a quarter of organizations (28%) don’t know which assurances would make them more likely to trust an open source solution. This opens them up to serious security risks, such as supply chain attacks.
As a result, enterprises are increasingly turning to paid support options for their open source estates.
More than half of respondents consider paid support for their open source software essential. As open source technologies have become critical to business infrastructure, expectations for open source software support are beginning to mirror those of commercial software:
Acquiring paid support for open source software makes this level of support achievable, which organizations broadly accept. At a granular level, the industries with the highest proportions considering paid support essential are those that process sensitive or valuable data: manufacturing (97%), followed by financial services (96%), government (92%) and IT (91%).
The Linux Foundation’s The state of global open source report reveals that enterprises rely on open source software and perceive its benefits. However, increasing engagement with open source communities, more structured governance of open source estates, and structured security evaluations of open source components before adoption will help organizations strengthen the resilience of their open source infrastructure.
18 November, 2025 01:19PM by Greenbone AG
Today, virtually every business depends on its online infrastructure. Companies manage multiple websites, internal applications, or e-commerce platforms that must remain available at all times.
As these environments grow, maintenance and stability become as critical as security — and every change must be applied without affecting the user experience. Maintaining uptime during configuration changes requires solutions that support reload configuration without downtime.
Even a minimal interruption — a restart, a reconnection, or a lost session — can translate into frustrated customers, disrupted operations, or lost revenue. And in an environment where reputation depends on availability, these incidents can have visible consequences.
Keeping services running while updating configurations or network policies is no longer just a technical challenge — it’s an operational requirement.
In daily practice, administrators plan maintenance and updates through a controlled service restart. They pick a low-traffic time, apply the changes, and assume a brief service interruption.
Although the process is fast and almost imperceptible to users, it still implies a short disruption. This method works well in small environments, where a few seconds of downtime go unnoticed.
The challenge arises when managing dozens or hundreds of sites simultaneously — as in the case of service providers, public administrations, or hosting environments.
In those scenarios, even a short interruption can have a direct impact:
That’s why, in modern web traffic and service management, the ability to apply changes without restarting has become an essential function.
This is where the concept of reload becomes relevant.
This feature allows changes to be applied without restarting active processes — reloading the configuration in the background while keeping all connections open.
In other words, reload replaces the classic “stop and restart” with a smarter process where the system updates its configuration, preserves active sessions, and ensures that traffic continues to flow without interruptions. Technically, reload forces a re-read of the service configuration in memory, synchronizing new rules, policies, or certificates without closing sockets or terminating active connections.
From a technical point of view, it may seem like a small change, but in practice it represents a qualitative leap for any infrastructure that requires true continuity.
The difference between restart and reload is simple: one stops and starts again, the other updates without stopping.
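The pattern behind in-place reloads can be illustrated with a toy daemon. This is a generic sketch of the SIGHUP reload convention many servers follow, not SKUDONET's actual implementation; the file paths and the `limit` setting are invented for the demo:

```shell
#!/bin/sh
# A toy "service" that re-reads its config on SIGHUP: the settings change,
# but the PID -- and therefore any open sockets -- survive the reload.
CONF="${TMPDIR:-/tmp}/reload-demo.conf"
LOG="${TMPDIR:-/tmp}/reload-demo.log"
echo "limit=10" > "$CONF"
: > "$LOG"

# The daemon: on SIGHUP it sources the config again and logs the new value.
cat > "${TMPDIR:-/tmp}/reload-demo.sh" <<'EOF'
#!/bin/sh
trap '. "$CONF"; echo "reloaded limit=$limit" >> "$LOG"' HUP
. "$CONF"                      # initial configuration load
while :; do sleep 1; done      # stand-in for serving traffic
EOF

CONF="$CONF" LOG="$LOG" sh "${TMPDIR:-/tmp}/reload-demo.sh" &
PID=$!

echo "limit=20" > "$CONF"      # edit the config...
sleep 1
kill -HUP "$PID"               # ...and reload in place: no restart
sleep 2
kill "$PID" 2>/dev/null
cat "$LOG"
```

A restart would have replaced the PID (and dropped every open connection); here the same process simply picks up the new configuration, which is the essence of reload.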
Not every business requires the same level of continuity, but there are scenarios where reload makes a clear difference:
In these cases, reload isn’t a technical luxury — it’s what allows an infrastructure to keep operating without visible interruptions while evolving internally.
At SKUDONET, we believe that availability should never be sacrificed each time a configuration is updated.
That’s why the system allows services and load-balancing policies to be reloaded without restarting, keeping traffic flowing smoothly even in high-demand environments.
This means that:
Thanks to this capability, SKUDONET helps organizations maintain full service availability, even in continuous activity environments or when managing dozens of simultaneous applications.
Everything is managed from a single visual interface, with unified metrics, logs, and events that simplify control without manual processes or scheduled restarts.
SKUDONET Enterprise Edition combines load balancing, security, and traffic inspection in a single platform — designed for business environments that can’t afford downtime but still need to evolve quickly.
Want to try SKUDONET Enterprise Edition with all its features, including reload configuration without downtime? Request your free 30-day trial here:
18 November, 2025 08:00AM by Nieves Álvarez
Dear Friends! It’s time for our annual fundraiser for our servers, so let’s get started! By January 15, 2026, we need to raise €510 to pay for the servers, plus a minimum of €1100 for our monthly payments: domains, internet, electricity, gas, water, fuel, rent, medications, and daily life, which is getting more expensive and more difficult each month. We also have non-monthly but equally important…
17 November, 2025 09:42PM by pavroo
FIPS 140 is a highly demanding security standard that’s mandatory for almost all high-security and federal environments. It can be hard to get right and may be a daunting part of the journey for those trying to meet compliance requirements like FedRAMP or CMMC. We get a lot of questions about FIPS 140-3, and so we decided to put together this comprehensive collection of video resources to answer the most burning ones we’ve had so far.
In this collection, you’ll be able to get answers to the most frequently asked FIPS questions, including:
We’ll start with the most common question: how do you enable FIPS on Ubuntu? The basic prerequisite is an Ubuntu Pro subscription, which is available either free for personal use or with a 30-day free trial for enterprise users. After subscribing, you’ll get access to a dashboard where you can find a token that you can attach to an Ubuntu instance and get access to the FIPS certified modules. All you need to do is open your terminal and enter the following commands:
sudo pro attach <token>
sudo pro enable fips-updates
sudo reboot
You should see output like the following, indicating that the FIPS packages have been installed:
Installing FIPS Updates packages
FIPS Updates enabled
A reboot is required to complete install.
Enabling FIPS should be performed during a system maintenance window since this operation makes changes to underlying SSL-related libraries and requires a reboot into the FIPS-certified kernel.
After enabling FIPS mode, it is good practice to verify that it is active. Luckily, this is straightforward: just run this command in the terminal:
cat /proc/sys/crypto/fips_enabled
An output of “1” indicates that FIPS mode is enabled.
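For automation, the same check can be wrapped in a small function. This is a minimal sketch: the `check_fips` helper name is our own, and the optional path argument exists only so the logic can be exercised on machines without a FIPS kernel:

```shell
# check_fips: succeeds only if the kernel reports FIPS mode enabled ("1").
# The optional argument overrides the flag path, purely for testing the
# logic on non-FIPS machines; in production, omit it.
check_fips() {
    flag="${1:-/proc/sys/crypto/fips_enabled}"
    [ "$(cat "$flag" 2>/dev/null)" = "1" ]
}

# Example: gate provisioning steps on the result.
if check_fips; then
    echo "FIPS mode: enabled"
else
    echo "FIPS mode: disabled (or not a FIPS kernel)"
fi
```

A missing or unreadable flag file is treated the same as “0”, so the check fails closed on kernels that are not FIPS-enabled.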
It is very easy to enable FIPS in public clouds. In contrast to on-prem usage, Ubuntu images for public clouds already have FIPS enabled. Decide on the Ubuntu version you’d like to run, visit the relevant marketplace for your public cloud provider (for example: AWS, Azure, or GCP), and search for the relevant image. Here is an example of how it would look:
Sometimes it can be tricky to figure out exactly which modules and hardware have been FIPS 140-3 certified. This video goes into extensive detail outlining the modules and components you’ll be able to make full use of with FIPS 140-3 certified Ubuntu.
To give a brief overview, the following certified cryptographic modules are available with Ubuntu 22.04 LTS:
These modules have been developed and tested on a range of hardware platforms:
Canonical’s container images are trusted and pre-approved for high-security use cases. Hardened Ubuntu images are already certified and available in the U.S. Department of Defense’s Iron Bank, the official repository of security-hardened containers for government systems. You can find the code to build your own image here, or get the actual container that passed all the automated compliance checks here. Note, you would need to first register to get access to the platform.
Canonical has also recently added FIPS and STIG-compliance to Canonical Kubernetes. Built on Ubuntu Pro hosts, Canonical Kubernetes now includes FIPS 140-3 validated crypto modules out of the box and can be hardened for DISA-STIG. This means you can deploy secure, compliant clusters built on Ubuntu, making it much easier to meet FedRAMP and other federal compliance requirements right from your Kubernetes base.
Compliance always comes with challenges, but when we know the issues, we can help. The video above explains how to solve the most common issues that teams run into when enabling FIPS 140-3, including:
If you’d like to raise a bug/issue with FIPS compliance on Ubuntu, you can do it on Launchpad. Here is an example of OpenSSL bugs.
We hope this blog has been useful for you to learn more about FIPS 140-3 on Ubuntu. You can easily get FIPS 140-3 compliance with an Ubuntu Pro subscription, which is free for personal use and offers a free trial for enterprise-focused projects. Additionally, an Ubuntu Pro subscription is not limited to only FIPS 140-3: the subscription also includes access to our hardening automation tools such as Ubuntu Security Guide, expanded security maintenance, Ubuntu fleet management, and more. And if you’re looking for assistance with more complex enterprise use cases, you can simply contact us.

Finnix is a Linux-based utility live distribution. Write it to a USB flash drive or burn it to a CD, boot it, and you’re seconds from a root prompt with hundreds of utilities available for recovery, maintenance, testing and more. Finnix 251 has been released today, including new official OCI / Docker images, and containing new packages, features and fixes.
Finnix 251 is the first release to distribute official OCI container images. The official Finnix container contains all the same software as the ISO release, and may be launched from Podman, Docker, Kubernetes, etc.
docker run -it --rm finnix/finnix
podman run -it --rm docker.io/finnix/finnix:latest
kubectl run finnix-$(uuidgen | cut -b -4 | tr A-Z a-z) --image=finnix/finnix --restart=Never -it --rm
This is particularly useful for Kubernetes users, giving you a quick utility shell in the namespace of your choice.
The finnix/finnix:latest container currently includes architecture support for amd64, arm64 and riscv64.
Otherwise, Finnix 251 is a regular semiannual utility release:
Please visit finnix.org to download Finnix 251 today!
Starting November 18, 2025, you can sign in to VyOS Support Portal, Community Forum, and Issue Tracker using a new, unified authentication method: Single Sign-On (SSO).
This change brings several benefits, and there are a few key things you’ll need to know — read on to ensure you’re prepared.
17 November, 2025 04:48PM by Taras Pudiak (taras@vyos.io)
17 November, 2025 12:28PM by Joseph Lee
Hello, Community!
VyOS Stream 2025.11 and its corresponding source tarball are now available for download. You can find them at the end of this post. This is the third VyOS Stream release on the way to the upcoming 1.5/Circinus LTS release and includes many of its features for you to test — most notably, a VPP-based accelerated dataplane.
17 November, 2025 11:24AM by Daniil Baturin (daniil@sentrium.io)
17 November, 2025 09:44AM by xiaofei
There is a new desktop available for Sparkers: Hyprland. What is Hyprland? Features:
– All of the eyecandy: gradient borders, blur, animations, shadows and much more
– A lot of customization
– 100% independent: no wlroots, no libweston, no kwin, no mutter
– Custom bezier curves for the best animations
– Powerful plugin support
– Built-in plugin manager
– Tearing support for better…
16 November, 2025 01:30PM by pavroo
The Qubes OS Project will have a stand at FOSDEM 2026, which will take place in Brussels, Belgium on January 31 and February 1, 2026. FOSDEM is a top annual meeting for free and open source software developers. Entry is free, and no registration is required. If you attend, stop by and say hello!
In September, we prioritized fixes for the out-of-box experience on PureOS Crimson. For the rest of the beta milestone, we are prioritizing work that benefits the most from user feedback. That includes functionality that must account for device-to-device variations and that is strongly influenced by users' individual preferences.
The post PureOS Crimson Development Report: October 2025 appeared first on Purism.
14 November, 2025 06:58PM by Purism
Radio Helsinki turns 30 and is celebrating with a packed music programme on Saturday, 22.11.2025 at Forum Stadtpark. 🥳 We will be there with the HTU-Bigband, playing from around 19:30 for about 1.5 hours. Come by and celebrate with us!
PS: if you really can't make it, at least switch on the radio or tune into the livestream! 🤓
Photo source / copyright: graz.social/@radiohelsinki/115428633169757433
Open source has come a long way. Recently I was watching a keynote address by our founder, Mark Shuttleworth, in which he discussed his vision for Ubuntu to provide quality support and security maintenance across the broad open source ecosystem, and it made me reflect on how far the open source software (OSS) community has come. Indeed, when looking at today’s interoperable open source landscape, the fragmented, disconnected landscape of the past seems like another planet.
But where is open source going next? What’s in store for open source in the coming years, particularly in relation to security? Here’s my reflection on the state of open source, and the trends that I expect to have an impact going into 2026.
Open source has become a ubiquitous part of software development – just look at the numbers. The average app today contains three times as many open source files as it did just four years ago, to the point where 97% of all applications contain OSS. At the same time, research commissioned by Canonical and IDC revealed that seven out of ten organizations consider open source to be “extremely important” for running their mission-critical workloads. In fact, a Harvard study found that if OSS didn’t exist, the global expenditure on software would be 3.5 times higher.
Put simply, the modern software landscape and market depends upon open source. Open source is popular because it is transformative for businesses, blending cost-effectiveness with access to sophisticated software. For instance, one company reduced their cloud total cost of ownership (TCO) by 76% – saving approximately $370,000 – simply by transitioning to open source cloud infrastructure from Canonical. Previously unthinkable deployments, such as carrier-grade private 5G mobile networks, are now entirely achievable with open source tools, as we demonstrated last year in the Netherlands.
It’s astonishing to think that just 20 years ago, most software companies explicitly forbade the use of any OSS in their contracts and terms of service. Open standards and interoperability – now synonymous with open source – were far from mainstream, and as a result, companies were forced into an uncomfortable choice: use the one expensive product that works, or spend months building and integrating everything from scratch.
That’s a lot of change in just 20 years: an entire ecosystem turned on its head. That’s why I think it’s important to pay critical attention to what’s going on around us, to spot trends that could be just as revolutionary (or disastrous), and to respond to them so that we can keep growing over the next 20 years.
Now that I’ve covered the current state of play, let’s dive into the potential tech revolutions – or challenges – in-waiting.
The decision to move away from the constraints of proprietary systems is grounded in promise: as research by Canonical and IDC shows, businesses everywhere are using open source to keep down costs, fully own their infrastructure, and open up their systems to innovation. Ensuring that this promise is reflected in reality requires a proactive, forward-thinking approach.
Your ability to adopt and adapt to the latest innovations in open source software will be vital. Two things are needed:
Without a clear plan, you don’t know where you’re going; and without a management system, you’ll be recreating the difficult, fragmented environment open source was 20 years ago. However, I believe that the ever wider adoption of open source will make interoperability and simplified supply chains the norm in the software landscape – essentially, that open source software will reshape the software landscape in line with its values. People want open source software that’s quick to install, easy to learn and use, and effortless to deploy and manage. If you’re looking for a place to begin exploring how to adopt open source in your project or organization, I highly recommend visiting our new dedicated webpage, built to help you do exactly that.
The tumultuous geopolitical and cyberthreat landscape of 2025 has sparked a new movement towards independence and ownership in mainstream circles. Long story short, companies don’t want to be left in the cold and dark if something happens with their overseas software or services provider – with the appeal of open source software being the control and freedom it offers to users.
Most notably, we’ve seen a major increase in interest, from businesses and governments, in open source, and repatriated products and infrastructure. For example, communal, municipal, and government authorities in nations like Germany and Denmark have expressed strong interest in moving away from proprietary systems in favour of open source alternatives.
This doesn’t mean proprietary tools will vanish. But it does mean that the pressure will increase on software providers to give peace of mind to end users and consumers that whatever system they use will remain online and functioning – even if new terms of service, sanctions, politics, or laws present unforeseen hurdles. Product features will still be important, but things like documentation, interoperability, system-agnostic design, training for users and system admins, and clear handover processes will be a new ‘normal’ in software offerings.
The developer landscape in 2025 has no shortage of tools, libraries, and solutions. If you want to build an app or service, you could build it from scratch, get portions of the solution from open source, or use already-built solutions. The challenge today is creating systems that give you a full view of these tools, and which allow them to be used securely and sustainably in the long term, without major costs.
Security is hard. There is no one-size-fits-all solution, which introduces the challenge of complexity. Developers have a lot of shiny toys to choose from, and keeping them all securely managed within a usable, minimally complex environment will be a real challenge. Indeed, securing your ever-growing stack won’t be easy. All this new tech doesn’t just mean maximized performance and efficiency: it also means a bigger attack surface and new attack vectors born of the intricate interdependencies between systems.
This is made harder by wider organizational habits. Canonical and IDC’s research shows that in general, organizations prize stability over constant updates: over 50% do not automatically upgrade to the newest versions of software when available. Instead, they wait until new features are needed or the program of free updates stops. They also draw these updates from various places: 57% draw from upstream source repositories, such as Github or Gitlab, and 51% draw from ecosystem packages, like pip or npm.
This approach presents clear problems: if you draw packages from multiple different sources and only apply them when you’re forced to update, it leads to more manual work and less certainty that you’re meeting increasingly strict cybersecurity standards in today’s market.
Organizations still have some work to do in order to meet the challenge of complexity. Our research with IDC shows that 70% of organizations mandate vulnerability patching within 24 hours of identification for “high” and “critical” container vulnerabilities – however, just 41% of respondents are “very confident” or “completely confident” in their organization’s ability to execute this policy.
Remember that innovation isn’t just driven by “the good guys”: bad actors are also working to develop new attack methods and techniques, as AI becomes increasingly powerful and AI tools become more connected and widespread.
We’ve all heard and read about vibe coding, in which a software engineer uses AI to generate and debug code. The AI hype wave has led to fast adoption of generative AI tools as remarkable productivity magnifiers. While the allure of faster go-to-market times and improved cost-effectiveness is undeniable, the widespread adoption of AI tools in primary codebases – especially in environments where busy developers are stretched across multiple projects – is creating significant security issues. The rapid, often less scrutinized generation of code through these tools can introduce vulnerabilities and amplify existing security challenges in complex software supply chains.
In the next few years, I predict the rise of a new category of cyberincidents stemming from vibecoded feature additions. Organizations everywhere will need a clear policy on use of these tools, and robust checks and quality assurance processes to ensure that the vibecoded additions don’t ignore instructions, or hallucinate package names and inadvertently execute malicious code inside production environments.
We’ve seen a wave of regulation sweep across the US, EU, and UK in the last 4 years. As open source is adopted at the biggest levels of software, it will inherit the steep, strict demands that come with operating in a prestigious global playing field.
Our research with IDC gives a clearer view into the challenges and frustrations that organizations are experiencing with regulations and compliance:
As more regulation is rolled out and tightened up, these challenges will only become harder. Simply ticking the compliance or hardening boxes needed for enterprise eligibility isn’t the point any more – security teams have their work cut out establishing a clear, transparent track record of their software’s trust lifecycle and embedding this transparency into their development practices.
This means more work for people like me whose job it is to keep open source robust and trusted. But it’s undeniably the right path. After all, we don’t just want solutions that work – we want solutions that reflect, support, and continue the legacy of openness and contribution that allowed them to exist in the first place. And we want Canonical to be a leader in transparency and accountability, and institute practices that demonstrate the trustworthiness and compliance-readiness of our software and services. We recently published our Trust Center – a web portal with all our certifications and compliance efforts – in order to demonstrate that when it comes to that hard work of regulations, we’re doing it right.
In short: the future will be characterized by even more adoption of open source, increased regulation, a surge in AI-driven attack vectors, and a critical need for organizations to implement robust security policies and practices. Businesses must prioritize security without compromising stability, manage their open source supply chains effectively, and adapt to a landscape where transparency and compliance are paramount.
No matter the upcoming challenges to security in open source, I’m truly excited about this next phase. The openness we spent decades building has created a truly remarkable landscape of interoperability, where you can combine and integrate almost any technology into a functioning model. I’ve seen first hand how open source has changed everything in the software landscape, and I know how much more revolutionary it could be in the coming years.
A sophisticated malware campaign, dubbed “EvilAI” by cybersecurity firm Trend Micro, is leveraging AI-enhanced software to infiltrate organizations across the globe. By posing as legitimate productivity tools, this malware evades traditional security measures and exploits user trust.
The post EvilAI Malware Exploits AI Targets Organizations Worldwide appeared first on Purism.
14 November, 2025 01:16AM by Purism
Sylva 1.5 becomes the first release to include Kubernetes 1.32, bringing the latest open source cloud-native capabilities to the European telecommunications industry
With the launch of Sylva 1.5, Canonical Kubernetes is now officially part of the project’s reference architecture. This follows its earlier availability as a technology preview in Sylva 1.4.
The Sylva project is backed by Europe’s largest telecom operators and vendors, including Nokia and Ericsson, and is designed to deliver an open, telco-friendly cloud-native framework. By focusing on interoperability, performance, and automation, Sylva addresses the unique requirements of telecommunications providers building Kubernetes telco platforms for their IT, 5G core, O-RAN, and edge services. Canonical is thrilled to be included as part of the Sylva project, supporting the important work of creating an open source cloud-native reference architecture capable of hosting the mission-critical workloads of the telco industry.
Canonical Kubernetes brings unique advantages to Sylva’s mission of reducing fragmentation and simplifying operations across telecom networks. One of its defining features is up to 12 years of long-term support (LTS). For operators running critical workloads, this ensures stability, ongoing security updates, and compliance with industry standards over a much longer lifecycle than other Kubernetes distributions.
Canonical Kubernetes also provides the flexibility needed for large-scale Kubernetes telco deployments, from core networks to the far edge. Operators benefit from a distribution designed to be both lightweight and maintained with security in mind, while remaining capable of handling advanced workloads such as 5G core, O-RAN, and AI-driven services.
Guillaume Nevicato, Sylva Technical Steering Committee co-chair and Orange Telco Cloud Product Manager, recognized the importance of this contribution:
Canonical is a major open-source player that has achieved the integration of their Canonical Kubernetes distribution into Sylva. They fully embrace Sylva’s full-stack automation, including cluster lifecycle management, storage, networking, observability, GreenDashboard, and security enhancements. This represents a significant step forward in Sylva’s adoption.
A critical part of Sylva’s role is validating network functions against its reference framework, ensuring that cloud-native network functions (CNFs) and virtualized network functions (VNFs) perform reliably across any Sylva-compliant infrastructure. Following its technical preview in the previous Sylva release, Canonical Kubernetes is now included with 1.32 LTS in Sylva 1.5. This allows it to enter the validation process with the Sylva Validation Workgroup, covering key telecom workloads such as 5G Core, O-RAN, and distributed edge services.
For operators, this means they can deploy Kubernetes telco workloads with confidence, knowing that interoperability and performance have already been tested. Vendors also benefit, since a single certification process ensures compatibility across multiple Sylva-aligned platforms, reducing time to market for new services.
Canonical is now exploring how its broader infrastructure portfolio, including technologies like Canonical OpenStack, a featureful, highly customizable cloud, and MAAS, bare-metal server automation software, could complement Sylva’s approach in the future. These solutions could help create a more unified environment for both virtualized and cloud-native network functions, enhancing the flexibility of Kubernetes telco deployments.
As Sylva evolves, Canonical will continue engaging with operators, vendors, and the wider community to identify opportunities where its open source software can add value.
The inclusion of Canonical Kubernetes in Sylva represents a milestone in the adoption of open source telco cloud infrastructure. Operators now have access to a validated, commercially supported Kubernetes telco distribution that combines long-term stability, security, and interoperability with the innovation of cloud-native technologies.
With Kubernetes at the foundation, operators can accelerate the rollout of next-generation network functions and services, while benefiting from the reliability and flexibility that only open-source collaboration can deliver.
Are you building your telco cloud strategy? Learn how Canonical Kubernetes can give you a stable, validated, and open foundation for 5G, O-RAN, and edge workloads.
Signaling System 7 (SS7) was designed in the 1970s to let telecom carriers route calls, deliver SMS, and enable roaming. It was never built with authentication or encryption in mind. Security wasn’t part of the plan. The assumption was simple: All carriers are trusted.
The post Librem PQC Encryptor: Future‑Proofing Against Both SS7 and Quantum appeared first on Purism.
13 November, 2025 11:15AM by Purism
13 November, 2025 03:36AM by xiaofei
Update Tor Browser to 15.0.1.
Tor Browser 15.0 is based on Firefox 140 and inherits from it several new features that are particularly useful if you use many tabs:
Update Thunderbird to 140.4.0.
Update the Linux kernel to 6.12.57.
Remove Root Console.
To open a root console, you can execute the following command in a Console.
sudo -i
Show “Don't ask again” notifications only after the clock has been synchronized.
For more details, read our changelog.
Automatic upgrades are available from Tails 7.0 or later to 7.2.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions:
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.2 directly:
Miguel is absolutely livid about Ubuntu Touch and kicked up a row on Telegram, but Diogo has good news: besides owning a proper “gamer” monitor for playing SuperTuxKart, he and Ruben Carneiro are heading to Porto to bravely fight the evil phone oligopolies! We reviewed excellent new features in Firefox 145 that help fend off privacy invasions; discussed Jon Seager's interview about Canonical's latest Flatpak row and what to expect from Ubuntu Core Desktop; fiercely debated mobile app stores; and, to wrap up, planned to kidnap people, invent Netflix series about Linux, and explain why Pipewire is not a pipe connected to a wire.
You know the drill: listen, subscribe and share!
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. The terrible-quality jingles were played live, without a net, by Miguel, so we apologize for any inconvenience caused. The sound effects have the following credits: [crowd booing by HowardV] (https://freesound.org/s/264378/) – License: Creative Commons 0; [Police Car Siren in Traffic by hyderpotter] (https://freesound.org/s/268809/) – License: Creative Commons 0; [patrons laughing.mp3 by pbrproductions] (https://freesound.org/s/418831/) – License: Attribution 3.0. This episode and the image used are licensed under the terms of the license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura – artist, illustrator and comic book author. You can get to know Shizamura better on Ciberlândia and on her website.
The first update for Sparky 8 – 8.1 – is now available. This is a quarterly update of the Sparky 8 “Seven Sisters” stable release. Sparky 8 is based on and fully compatible with Debian 13 “Trixie”. Main changes: – All packages updated from the stable Debian and Sparky repositories as of November 10, 2025. – Linux kernel PC: 6.12.48-LTS (6.17.7, 6.12.56 LTS, 6.6.115-LTS in sparky repositories) …
12 November, 2025 09:45AM by pavroo
When evaluating load balancers, teams often look at features, benchmarks, or latency claims. But the factor that usually determines how far a load balancer can scale is much simpler: where the traffic is processed inside the operating system.
In Linux, packets originate and are handled in the kernel, where the TCP/IP stack runs. User space — where most reverse proxies and L7 load balancers operate — is a separate execution context. When a load balancer is implemented in user space, every packet must travel back and forth between these two layers — which is fundamentally different from kernel-level load balancing, where forwarding happens inside the kernel.
This boundary crossing is subtle, but it has a real cost.
A typical user-space load balancer (like HAProxy in TCP mode, NGINX, Envoy, or Traefik) receives a packet in the kernel, copies it to user space for processing, then returns it to the kernel to send it out. This happens for every packet in the flow.
Each transition triggers:
Individually, these operations are insignificant. Under moderate or heavy load, they accumulate into two visible symptoms:
This is why user-space load balancers often reach a scaling ceiling long before hardware limits are reached. The system is not slow — it is simply doing more work than necessary to move each packet.
Figure 1. In user-space load balancers, forwarding requires repeated transitions between kernel and user space, increasing latency and CPU overhead.
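One way to make these transitions visible on a running user-space proxy is to count its syscalls and context switches. The sketch below is illustrative only: it assumes an HAProxy worker as the example process, and the sampling durations are arbitrary.

```shell
# Attach to a running user-space proxy for ~10 seconds and summarize syscalls.
# Expect large counts of read/write (or recvfrom/sendto) plus epoll_wait:
# every chunk of data serviced in user space crosses the kernel boundary twice.
sudo timeout 10 strace -c -f -p "$(pgrep -o haproxy)"

# Context-switch pressure for the same process (5 one-second samples):
pidstat -w -p "$(pgrep -o haproxy)" 1 5
```

Under load, the ratio of syscalls and voluntary context switches to requests served is a rough proxy for how much per-packet work is spent crossing the kernel/user boundary rather than forwarding traffic.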
Linux already provides a capable packet-processing engine in the kernel: netfilter for filtering and NAT, and conntrack for connection tracking. If forwarding decisions are made inside the kernel, packets do not need to move up into user space at all — they stay where they originate.
This is the core idea behind kernel-level load balancing. The forwarding path becomes:
Packet arrives → Kernel processes it → Packet leaves
This drastically reduces overhead and keeps CPU usage predictable as load increases.
Figure 2. When forwarding occurs in the kernel data plane, packets avoid user-space transitions entirely.
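As a deliberately minimal sketch of this idea, nftables can make the balancing decision entirely in the kernel's prerouting hook; conntrack then keeps established flows pinned to their backend. The backend addresses, port, and backend count below are hypothetical, and this is a bare illustration rather than a production configuration:

```shell
# Create a NAT table whose rules run entirely in the kernel data plane.
nft add table ip lb
nft 'add chain ip lb prerouting  { type nat hook prerouting  priority dnat;   }'
nft 'add chain ip lb postrouting { type nat hook postrouting priority srcnat; }'

# Round-robin new TCP connections on port 80 across two backends.
nft 'add rule ip lb prerouting tcp dport 80 dnat to numgen inc mod 2 map { 0 : 10.0.0.10, 1 : 10.0.0.11 }'

# SNAT so return traffic flows back through this host (SNAT mode).
nft 'add rule ip lb postrouting ip daddr { 10.0.0.10, 10.0.0.11 } masquerade'
```

No user-space process ever touches the forwarded packets; the decision is made once per connection and cached in the conntrack table.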
This approach is not theoretical. On standard mid-range hardware running SKUDONET L4xNAT, with no DPDK or kernel bypass optimizations:
This demonstrates that the improvement does not come from specialized hardware or experimental networking stacks — it comes from where the work is done.
Kernel-level forwarding supports multiple operational models:
DSR offers the lowest latency, while SNAT provides the most operational control.
The right choice depends on network topology, not performance capabilities.
Working directly with netfilter can be complex. Its chains, rule priorities, and packet classification logic require a detailed understanding of kernel-level networking.
SKUDONET addresses this by providing a control plane that defines services, backends, and policies at a higher level, while automatically generating and maintaining the underlying kernel configuration. In this model, the forwarding logic never leaves the kernel, but operators still retain full visibility and control over how traffic is handled.
This is the separation of concerns that makes the architecture both efficient and maintainable:
Whether a load balancer processes traffic in user space or in the kernel fundamentally affects:
User-space load balancing offers flexibility and extensibility, but that flexibility comes at the cost of additional data movement and processing overhead. Kernel-level forwarding avoids this by keeping packet handling in the layer where the traffic already resides, eliminating unnecessary copies and context switches while preserving visibility and control.
If you want to explore the architecture in depth — including packet flow breakdowns, performance measurements, and forwarding mode selection — you can read the full technical paper below.
Download the full technical whitepaper here:
12 November, 2025 09:30AM by Nieves Álvarez
12 November, 2025 09:07AM by xiaofei
Deploy a FedRAMP-ready Kubernetes cluster and application suite, with FIPS 140-3 crypto and DISA-STIG hardening,
Today at KubeCon North America, Canonical, the publisher of Ubuntu, released support for enabling FIPS mode in its Kubernetes distribution, providing everything needed to create and manage a scalable cluster suitable for high-security or Federal deployments. As of version 1.34, Canonical Kubernetes is available with a built-in FIPS 140-3 capability that uses certified cryptographic modules. When deployed as a snap package, a FIPS-enabled cluster can also be hardened to DISA-STIG standards by following Canonical's comprehensive documentation.
KubeCon attendees in Atlanta can learn more about FIPS-enabled Canonical Kubernetes at booth 821.
Canonical Kubernetes is a performant, lightweight, and securely designed CNCF-conformant distribution of Kubernetes. It provides everything needed for a fully functioning cluster, including a container runtime, a CNI, DNS services, an ingress gateway, metrics server, and more. New versions of Canonical Kubernetes ship within a week of the upstream release, and Long Term Support (LTS) versions (which are released every 2 years) are fully supported and security maintained by Canonical for up to 12 years. Long Term Support for Ubuntu and FIPS-enabled Canonical Kubernetes is offered through an Ubuntu Pro subscription. Canonical’s FIPS 140-3 compliant Kubernetes is also available as part of the NVIDIA AI Factory for Government reference design.
Canonical is the first software provider to offer 12 years of support for Kubernetes, which is far beyond the support window offered by upstream CNCF and other vendors. Upstream Kubernetes is typically maintained and supported for about 14 months by the Kubernetes community, with 3 releases per year. In comparison, Canonical maintains an LTS release every 2 years, in line with the Ubuntu LTS release cadence.
Traditionally, Kubernetes clusters must be upgraded one version at a time. However, Canonical’s “interim” versions will be supported for 1 year past the next LTS release, allowing customers to upgrade directly to that LTS within the year, without downtime, all while knowing their cluster is fully covered by security maintenance.
Each component of the Kubernetes stack is backed by Canonical’s CVE patching service. Our dedicated security team triages all relevant vulnerabilities and backports upstream fixes to the currently supported software versions, ensuring a completely stable base without breaking existing deployments.
Canonical has been publishing FIPS-certified cryptographic modules for Ubuntu since 2016. These modules are vital for customers across the Federal sector and for on-premises and public clouds, powering a wide range of FedRAMP deployments. With the availability of Canonical Kubernetes and its built-in FIPS 140-3 mode using certified cryptographic modules, customers will have a faster and more direct route to meet their FedRAMP requirements.
FIPS 140-3 functionality requires Kubernetes to be deployed on top of a FIPS-enabled Ubuntu LTS host Operating System. Canonical Kubernetes supports DISA-STIG hardening of Kubernetes itself, and allows you to deploy onto a host OS hardened to DISA-STIG guidelines using the Ubuntu Security Guide (USG) tool. What’s more, applicable STIG controls can be applied to enable hardened containers, along with embedded FIPS cryptographic libraries. Ubuntu STIG hardening has been extensively tested and deployed across the Federal landscape, making it a proven route to meeting FedRAMP security standards.
FIPS modules and STIG hardening are available with an Ubuntu Pro subscription. Ubuntu Pro subscriptions apply on a per-machine basis, which means that any containerized application running on a Pro-enabled host machine is also included within Pro when the Pro token is enabled.
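As an illustrative sketch of what this looks like on a host, the steps below attach a machine to Ubuntu Pro, enable FIPS modules, and install Canonical Kubernetes. The Pro token is a placeholder, and the snap channel name is an assumption based on the 1.34 release mentioned above; consult Canonical's documentation for the exact channel:

```shell
# Attach the machine to Ubuntu Pro and enable FIPS-certified crypto modules.
sudo pro attach <your-pro-token>
sudo pro enable fips-updates   # FIPS modules with ongoing security fixes
sudo reboot                    # boot into the FIPS-enabled kernel

# Verify the host is running in FIPS mode (should print 1).
cat /proc/sys/crypto/fips_enabled

# Install Canonical Kubernetes and bootstrap a cluster.
sudo snap install k8s --classic --channel=1.34-classic/stable
sudo k8s bootstrap
sudo k8s status
```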
Visit us at our booth 821 at KubeCon North America on November 11-13, 2025 for an in-person conversation about how Canonical Kubernetes powers FedRAMP compliant deployments.
Canonical, the publisher of Ubuntu, provides open source security, support, and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
Learn more at https://canonical.com/
This new release brings the stability and security of Ubuntu to Axion-based N4A virtual machines on Google Compute Engine.
November 6, 2025 – Today Canonical, the publishers of Ubuntu, and Google Cloud announced the immediate availability of optimized Ubuntu images for the new Axion-based N4A virtual machines (VMs) on Google Compute Engine. This collaboration brings the stability, security, and expansive ecosystem of Ubuntu, the world’s most popular cloud operating system, to Google Cloud’s most cost-effective N-series offering, enabling enterprises to maximize the total cost of ownership (TCO) for a wide range of general-purpose workloads.
The new N4A VMs are powered by Google’s custom-designed Axion ARM-based CPUs and offer up to 105% better price performance and 80% better performance-per-watt than comparable, current-generation x86-based VMs. By integrating optimized Ubuntu images at launch, Canonical helps ensure developers and operators can immediately take advantage of this breakthrough efficiency for demanding workloads.
Canonical has long supported ARM infrastructure, helping to ensure that Ubuntu provides a consistent, reliable, and secure experience across heterogeneous computing environments. Our deep experience in solving the challenges of mixed x86 and ARM deployments allows us to bring a robust and fully optimized operating system to the N4A series from day one.
The availability of optimized Ubuntu on N4A ensures developers can use the familiar packages and libraries of the latest Long-Term Support (LTS) releases, guaranteeing longevity and simplifying migration. This is crucial for businesses looking to adopt N4A’s cost savings without compromising on operational consistency across Google Cloud’s Compute Engine, Google Kubernetes Engine (GKE), and other services.
These optimized Ubuntu images are backed by rigorous testing to help ensure enterprise-grade stability and compatibility with Google Cloud’s core features.
Canonical and Google Cloud have executed thorough validation across the entire image lifecycle, confirming that Ubuntu on N4A performs exceptionally well with Google Cloud services and VMs. This extensive testing includes validation of:
This comprehensive testing suite allows customers to deploy Ubuntu on N4A with total confidence.
To get started, simply select the N4A machine type and choose your preferred Ubuntu image when creating a VM in Google Cloud Compute Engine, or when configuring node settings in GKE.
The optimized images are available now in the public preview regions for N4A (us-central1, us-east4, europe-west3, and europe-west4).
Today marks something special for us and for everyone who loves what Volumio does. After nearly a year of work, we’re releasing Volumio 4 for Raspberry Pi and PC platforms.
You might fire it up and think, “Wait, it looks the same.” And you’d be right. We didn’t redesign the interface or move buttons around. What we did was rebuild the foundation.
Think of Volumio 4 like replacing the engine in your favorite car. From the driver’s seat, everything feels familiar. But under the hood, we’ve swapped in something more powerful, more efficient, and ready to take you places the old engine simply couldn’t go.
We’ve moved to Debian Bookworm, which is the technical way of saying we’ve given Volumio a completely modern foundation. This isn’t just about what you’ll see today. It’s about what we can build tomorrow, and the year after that, and beyond.
This kind of work doesn’t happen alone. Andy, Marco, Pascal, Ash, Gé, Josh, and our entire inner circle of moderators and developers spent countless hours testing, troubleshooting, and pushing this forward. When you’re rebuilding something from the ground up while keeping it running, you need people who care as much about getting it right as we do. We couldn’t have done this without them.
Here’s where things get exciting. Volumio 4 works hand in hand with our new Volumio app, which just landed on the app stores. This is the first step in our vision: one seamless ecosystem for music playback, everywhere, for every kind of digital music.
The new app doesn’t just look better. It’s fundamentally more resilient. We’ve built a new connection method that stays rock solid even when your network gets cranky. The onboarding is now straightforward instead of confusing. You’ll spend less time troubleshooting and more time listening.
Your CDs will play quietly now. Most USB drives used to make noise during playback. That’s fixed. Silent CD playback with the vast majority of drives out there.
Bluetooth that actually works well. We completely rewrote the Bluetooth stack. Lower latency, better compatibility, and we didn’t compromise on sound quality to get there. There’s also a new plugin that lets you send audio from Volumio to Bluetooth speakers, and your Bluetooth remotes will now work with Volumio.
Everything feels snappier. Browsing your library is smoother. Things respond faster. It’s one of those improvements that’s hard to quantify but impossible to miss once you experience it.
NVME storage support. So many of you asked for this. Read performance with NVME devices is dramatically better now.
Better handling of big libraries. We’re running the latest version of MPD, which means if you have thousands of albums, Volumio handles them more gracefully and reliably.
More DACs just work. We’ve expanded USB quirks support, so if your DAC supports direct DSD, Volumio will recognize it.
Security updates built in. We’re on the latest kernel, which means better security as the world around us keeps changing.
Touchscreen displays. If you want to connect an HDMI touchscreen panel, Volumio 4 has you covered with improved display management.
Let’s talk about something important. The Volumio 3 launch was a success in many ways, but we heard you loud and clear about one thing: plugins weren’t ready at launch like they were for Volumio 2. That hurt the experience for many of you, and we get it. Plugins aren’t just nice to have, they’re part of what makes Volumio yours.
We listened. We learned. And we made sure not to repeat that mistake.
All the plugins you love are available right now for Volumio 4. From day one. We didn’t want anyone sitting around waiting for their favorite functionality to come back.
But we didn’t stop there. We’ve also added new capabilities to Volumio 4 that open up possibilities for plugin developers. This means the plugins you already use can get better, and new plugins can do things that weren’t possible before.
Volumio 4 is available today for Raspberry Pi and PC platforms. If you have one of our Volumio products or something from our OEM partners, you can expect to see the update roll out in Q1 2026.
First, you can’t update over the air from Volumio 3 to Volumio 4. You’ll need to reflash. It’s not ideal, but with a change this fundamental, it was the only way to ensure everything works correctly.
Second, we had to make a difficult call. Raspberry Pi 1 and Raspberry Pi Zero are no longer supported. The necessary binaries simply aren’t available for these older boards. We know this affects some of you, and we didn’t make this decision lightly.
Today is about giving Volumio a foundation that can support everything we want to build. The interface looks the same because we wanted this transition to feel seamless for you. But now, with Volumio 4 and the new app working together, we can start building the features and improvements that weren’t possible before.
We’ve spent a year on the foundation. Now comes the fun part.
Welcome to Volumio 4.
Download Volumio 4 and get the new app from your device’s app store. As always, if you run into any issues, our community is here to help.
The post Volumio 4 is Here: A Year in the Making, Built for What’s Next appeared first on Volumio.
10 November, 2025 01:38PM by Volumio Team
About 95% of my Debian contributions this month were sponsored by Freexian.
You can also support my work directly via Liberapay or GitHub Sponsors.
OpenSSH upstream released 10.1p1 this month, so I upgraded to that. In the process, I reverted a Debian patch that changed IP quality-of-service defaults, which made sense at the time but has since been reworked upstream anyway, so it makes sense to find out whether we still have similar problems. So far I haven’t heard anything bad in this area.
10.1p1 caused a regression in the ssh-agent-filter package’s tests, which I bisected and chased up with upstream.
10.1p1 also had a few other user-visible regressions (#1117574, #1117594, #1117638, #1117720); I upgraded to 10.2p1 which fixed some of these, and contributed some upstream debugging help to clear up the rest. While I was there, I also fixed ssh-session-cleanup: fails due to wrong $ssh_session_pattern in our packaging.
Finally, I got all this into trixie-backports, which I intend to keep up to date throughout the forky development cycle.
For some time, ansible-core has had occasional autopkgtest failures that usually go away before anyone has a chance to look into them properly. I ran into these via openssh recently and decided to track them down. It turns out that they only happened when the libpython3.13-stdlib package had different versions in testing and unstable, because an integration test setup script made a change that would be reverted if that package was ever upgraded in the testbed, and one of the integration tests accidentally failed to disable system apt sources comprehensively enough while testing the behaviour of the ansible.builtin.apt module. I fixed this in Debian and contributed the relevant part upstream.
We’ve started working on enabling Python 3.14 as a supported version in Debian. I fixed or helped to fix a number of packages for this:
I upgraded these packages to new upstream versions:
I packaged python-blockbuster and python-pytokens, needed as new dependencies of various other packages.
Santiago Vila filed a batch of bugs about packages that fail to build when using the nocheck build profile, and I fixed several of these (generally just a matter of adjusting build-dependencies):
I helped out with the scikit-learn 1.7 transition:
I fixed or helped to fix several other build/test failures:
I fixed some other bugs:
/usr/bin/env: 'python': No such file or directory

I investigated a python-py build failure, which turned out to have been fixed in Python 3.13.9.
I adopted zope.hookable and zope.location for the Python team.
Following an IRC question, I ported linux-gpib-user to pybuild-plugin-pyproject, and added tests to make sure the resulting binary package layout is correct.
Another Pydantic upgrade meant I had to upgrade a corresponding stack of Rust packages to new upstream versions:
I also upgraded rust-archery and rust-rpds.
I fixed a few bugs in other packages I maintain:
I investigated a malware report against tini, which I think we can prove to be a false positive (at least under the reasonable assumption that there isn’t malware hiding in libgcc or glibc). Yay for reproducible builds!
I noticed and fixed a small UI deficiency in debbugs, making the checkboxes under “Misc options” on package pages easier to hit. This is merged but we haven’t yet deployed it.
I noticed and fixed a typo in the Being kind to porters section of the Debian Developer’s Reference.
After over a year of work, I’m very excited to announce the general availability of IncusOS, our own immutable OS image designed from the ground up to run Incus!

IncusOS is designed for the modern world, actively relying on both UEFI Secure Boot and TPM 2.0 for boot security and for full disk encryption. It’s a very locked down environment, both for security and for general reliability. There is no local or remote shell, everything must be done through the (authenticated) Incus API.
Under the hood, it’s built on a minimal Debian 13 base, using the Zabbly builds of the Linux kernel, ZFS, and Incus, providing the latest stable versions of all of those. We rely a lot on the systemd tooling to handle image builds (mkosi), application installation (sysext), system updates (sysupdate) and a variety of other things from network configuration to partitioning.
I recorded a demo video of its installation and basic usage both in a virtual machine and on physical hardware:
Full release announcement: https://discuss.linuxcontainers.org/t/announcing-incusos/25139
In security discussions, the term DDoS is often used as if it referred to a single type of threat. In reality, today it covers two very different strategies that share the same goal but not the same execution: volumetric attacks at layers L3/L4 and application exhaustion attacks at layer 7.
Both aim to take a service offline, but they exploit different parts of the infrastructure — and therefore require different mitigation layers.
When some vendors claim that “modern DDoS attacks are stealthy and bypass traditional defences”, what they are actually describing is not classic volumetric DDoS, but L7 exhaustion: low-rate traffic, fully valid requests, almost indistinguishable from legitimate clients.
These attacks don’t flood the network — they drain the application from inside.
That doesn’t mean volumetric DDoS has disappeared. It remains cheap to launch, common in the wild, and extremely effective unless it is filtered before the kernel, firewall, or load balancer accepts the connections.
The threat has not changed — the point of mitigation has.
One of the most common misconceptions is assuming that a single protection layer is enough to stop any kind of attack. In practice, filtering only at layer 4 leaves the application exposed, while filtering only at layer 7 allows the kernel or load balancer to be overwhelmed before the WAF ever sees the request.
An L4 firewall can drop malformed packets or abnormal connection patterns before they consume resources, but it has no context to detect that a perfectly valid HTTP request is trying to exploit an SQLi pattern.
A WAF can detect that behaviour — but only after the connection has already been accepted, a socket has been created, and memory has been allocated.
Effective protection is not about choosing the right layer — it is about dropping as much as possible before the app, and reserving deep inspection only for what deserves to reach it.
When protection is applied solely at the application layer, the TCP connection has already been accepted before any evaluation occurs. In other words, the system has completed the handshake, allocated a socket, reserved memory and promoted the session to HTTP/S before deciding whether the request should be blocked.
That removes the attacker’s need to generate massive traffic: a few thousand seemingly valid, slow, or incomplete connections are enough to consume server resources without ever saturating the network.
The result is not an immediate outage, but a progressive exhaustion:
This is the usual pattern of L7 exhaustion attacks: they don’t bring the network down; they wear the application out from the inside. And it happens for a simple reason: the blocking decision is made too late. First the connection is accepted, then the request is inspected, and only at the end is it decided whether to discard it. By then, the damage is already done.
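The ordering problem described above can be addressed with kernel-side pre-filters that reject abusive traffic before any socket is accepted. The following is a generic, hedged sketch of such L3/L4 filters using sysctl and nftables; the port, thresholds, and table names are placeholders to be tuned for a real environment, not SkudoCloud's actual configuration:

```shell
# SYN cookies let the kernel survive SYN floods without allocating
# per-connection state for half-open handshakes.
sysctl -w net.ipv4.tcp_syncookies=1

nft add table inet edge
nft 'add chain inet edge input { type filter hook input priority filter; policy accept; }'

# Drop malformed TCP flag combinations outright (e.g. FIN+SYN scans).
nft 'add rule inet edge input tcp flags & (fin|syn) == fin|syn drop'

# Rate-limit new connections per source address before the handshake
# completes; only traffic that survives these checks ever reaches the
# L7 proxy or WAF for deep inspection.
nft 'add rule inet edge input tcp dport 443 ct state new meter synflood { ip saddr limit rate over 100/second } drop'
```

The key property is the ordering: these rules run in the kernel, before accept(), so an attacker must defeat them without ever being granted a socket, memory, or an HTTP session.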
Effective protection against DDoS and exhaustion attacks is not about choosing between filtering at L4 or L7, but about enforcing both defences in the right order. SkudoCloud implements this model natively inside the load-balancing engine itself, without relying on external scrubbing services or additional appliances.
This model ensures that high-volume traffic cannot saturate the system before being analysed, and that low-volume abusive requests cannot hide inside seemingly legitimate sessions. The result is an environment where the network does not collapse under load and the application does not degrade due to resource exhaustion.
Everything is managed from a single interface, with unified policies, metrics and event logging — without depending on multiple vendors, external mitigation layers or duplicated configurations.
To see how this model works in a real deployment, follow the step-by-step guide: Configure the First SkudoCloud Service
06 November, 2025 11:12AM by Nieves Álvarez
06 November, 2025 08:34AM by xiaofei
Ubuntu images on Microsoft Azure have recently started shipping with the open source package azure-vm-utils included by default. Azure VM utils is a package that provides essential utilities and udev rules to optimize the Linux experience on Azure. This change results in more reliable disks, smoother networking on accelerated setups, and fewer tweaks to get things running. Here’s what you need to know:
azure-nvme-id --version # tool present
find /dev/disk/azure -type l # predictable Azure disk links
05 November, 2025 10:09AM by xiaofei
Organizations everywhere are pushing AI and networks closer to the edge. With that expansion comes a challenge: how do you ensure reliable performance, efficiency, and security outside of the data center? Worker safety, healthcare automation, and the success of mobile private networks depend on a robust technology stack that can withstand real-world challenges and still deliver results. Canonical has partnered with Dell Technologies, Intel, Druid, Airspan and Ecrio to publish a new solution brief addressing this question. The brief highlights how a fully integrated, edge-ready platform can meet the growing demand for intelligent, secure, and real-time computing at the edge.
The brief showcases how to build a strong foundation for edge AI and networking by using a Dell PowerEdge XR8000 ruggedized edge network+compute platform consisting of two server sleds powered by Intel Xeon Scalable processors. Both sleds are running Canonical’s software infrastructure stack, which combines Ubuntu, MicroCloud, and Canonical Kubernetes. On the first sled, MicroCloud hosts two VMs: Airspan Control Platform (ACP) manages the 5G radio units, and Druid Raemis provides the cloud-native 5G core orchestrated by Canonical Kubernetes. The second sled hosts Ecrio’s iota-e platform, also managed by Canonical Kubernetes, which enables AI-powered real-time image-recognition, voice, video, and messaging services. These capabilities support critical business processes such as worker coordination in industrial settings, emergency response in healthcare, and secure team communications in remote or hazardous environments.
Download the solution brief to learn how this integrated platform supports advanced use cases, including AI-driven safety monitoring, smart factory operations, and 5G connectivity at the edge.
Download the full solution brief
For more information on how Canonical supports your edge and AI journey, visit our related content:
Fedora 41 is currently scheduled to reach end of life (EOL) on 2025-11-19 (approximately two weeks from the date of this announcement). Please upgrade all of your Fedora templates and standalones by that date. For more information, see Upgrading to avoid EOL.
There are two ways to upgrade a template to a new Fedora release:
Recommended: Install a new template to replace an existing one. This option is simpler for less experienced users, but it won’t preserve any modifications you’ve made to your template. After you install the new template, you’ll have to redo your desired template modifications (if any) and switch everything that was set to the old template to the new template. If you choose to modify your template, you may wish to write those modifications down so that you remember what to redo on each fresh install. To see a log of package manager actions, open a terminal in the old Fedora template and use the dnf history command.
Advanced: Perform an in-place upgrade of an existing Fedora template. This option will preserve any modifications you’ve made to the template, but it may be more complicated for less experienced users.
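For the recommended path, the dom0 commands typically look like the following. The fedora-42 template name is an assumption (it is the release that follows the EOL Fedora 41); check the Qubes documentation for the currently available template names:

```shell
# In dom0: install the new Fedora template alongside the old one.
qvm-template install fedora-42

# Point each qube that used the old template at the new one.
qvm-prefs <qube-name> template fedora-42

# Once nothing references it any more, remove the EOL template.
qvm-template remove fedora-41
```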
Please note that no user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).
Hello, Community! The October update is here and it's dominated by bug fixes, as we are preparing to release the next VyOS Stream image on the way to the future VyOS 1.5 and working on the new 1.4.4 maintenance release as well. Still, there are a few useful features, including support for DHCP option 82 (relay agent information) and option 26 (interface MTU), container health checks, and more.
04 November, 2025 01:04PM by Daniil Baturin (daniil@sentrium.io)
04 November, 2025 09:41AM by xiaofei
The Incus team is pleased to announce the release of Incus 6.18!
This is a reasonably busy release with quite a few smaller improvements in every corner of Incus, so there should be something for everyone!

The highlights for this release are:
The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:
You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects is also always appreciated, you can find me on Github Sponsors, Patreon and Ko-fi.
Enjoy!
The 10th monthly Sparky project and donation report of 2025: – Linux kernel updated up to 6.17.6, 6.12.56-LTS, 6.6.115-LTS – Sparky 8.1-RC1 ARM64 for Raspberry Pi released – added to repos: Mousam Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in November too, please. *
02 November, 2025 10:26AM by pavroo