New SparkyLinux 2025.09 ISO images of the semi-rolling line, codenamed “Tiamat,” are available. This release is based on Debian testing “Forky.” Changes: packages updated from the Debian and Sparky testing repositories as of September 14, 2025; Linux kernel 6.16.7 (with 6.16.7, 6.12.47-LTS & 6.6.105-LTS in the Sparky repos); Calamares installer updated to version 3.4.0…
The 2025 Vol.13 deepin Bi-weekly Technical Progress Report is now officially released. We will provide a detailed overview of the work achievements from various deepin technical teams over the past two weeks, along with a brief outline of the plans for the next two weeks. This report aims to provide detailed information for all friends who love and follow deepin, enabling everyone to better understand deepin's development progress and actively participate, working together to promote deepin's continuous advancement. If you have suggestions or feedback, please feel free to visit the deepin community forum for discussion and interaction. DDE (deepin Desktop Environment)…
Today Canonical, the publisher of Ubuntu, announced support for the NVIDIA CUDA toolkit and the distribution of CUDA within Ubuntu’s repositories.
CUDA is a parallel computing platform and programming model that lets developers use NVIDIA GPUs for general-purpose processing. It exposes the GPU’s Single-Instruction, Multiple-Thread (SIMT) architecture, enabling fine-grained control over threads, memory hierarchies, and kernels to accelerate large-scale numerical and tensor operations.
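As a rough illustration of that model (not part of the announcement), the sketch below uses the Numba library's CUDA bindings in Python: a kernel body is written once, and a grid of GPU threads executes it in parallel, each thread computing one element of the output array. The library choice, array sizes, and launch configuration are assumptions for the example only.

```python
# Illustrative sketch only: assumes a CUDA-capable GPU, the NVIDIA driver,
# and the third-party Numba package; not from Canonical's announcement.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index in the SIMT grid
    if i < out.size:          # guard threads that fall past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # kernel launch: blocks x threads
print(out[:3])
```

The same structure – a kernel body plus an explicit grid of threads – is what the CUDA toolkit exposes to C and C++ developers as well.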
Thanks to the long-time collaboration between Canonical and NVIDIA, the CUDA runtime has always been tested on Ubuntu, which is used in thousands of data centers. Distributing CUDA within Ubuntu will make it even easier for developers to build, iterate and deploy their AI apps using CUDA.
A new, easier way to install NVIDIA CUDA
Historically, developers would download the CUDA Toolkit directly from NVIDIA’s website. Today, Canonical is making it even easier for developers to access CUDA natively through their development environment. The CUDA toolkit and runtime will be directly distributed within Ubuntu. Developers using this new distribution channel will be able to use CUDA on their hardware with a native Ubuntu experience. Once CUDA redistribution is fully integrated into Ubuntu, application developers and system administrators can expect the current multi-step CUDA installation process to become a single command.
For application developers targeting Ubuntu systems, this new distribution model means they can simply declare the CUDA runtime, while Ubuntu manages its installation and compatibility across a wide range of supported NVIDIA hardware. This ensures that CUDA will be more accessible and integrated into a widely-used and trusted Linux distribution.
Ubuntu’s software ecosystem
Besides supporting compatibility with runtimes like CUDA, which are widely used for AI development, Ubuntu offers a securely designed platform for developers using open source.
For over 20 years, Canonical has been committed to a consistent release schedule for Ubuntu, maintaining its software supply chain and establishing it as a reliable platform for running workloads across the cloud, desktop, and edge devices. Ubuntu uses the Advanced Package Tool (APT) to manage software from hundreds of decentralized contributors, enabling projects to scale seamlessly to millions of users. By streamlining software distribution at scale, Ubuntu makes it possible for software to reach a massive user base while preserving the security and integrity of the open source software supply chain. Users of Ubuntu can get access to a number of options that help maintain their work, including:
Long-Term Support (LTS) Releases: Ubuntu offers LTS releases, which are guaranteed to receive security updates for at least five years. This is critical for enterprise customers who need stability and security for long-term deployments.
Trusted Repositories: Canonical maintains a vast network of software repositories, ensuring that all packages are tested, validated, and compatible with each other.
Ubuntu Pro: With an Ubuntu Pro subscription, users get expanded security maintenance for a wide range of open source packages, offering a complete and trustworthy software supply chain. With Ubuntu Pro, organizations can also expand the security maintenance period for every LTS for an additional 7 years, bringing coverage to a total of 12 years. Ubuntu Pro also includes access to Landscape, which facilitates systems management for Ubuntu machines.
Ubuntu Pro is free for personal use on up to 5 machines, and enterprises can try it for free for 30 days.
Canonical, the publisher of Ubuntu, provides open source security, support, and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
The August 2025 Threat Report underscores how quickly high-risk vulnerabilities can shift from disclosure to active exploitation. Citrix, Fortinet, N-able, and Trend Micro flaws were weaponized within days. Other critical flaws in highly targeted software, such as Microsoft Exchange, emerged. Mainstream enterprise applications, such as Docker Desktop, Git, and Zoom, were also exposed to new […]
A federal jury in San Francisco found Meta liable under the California Invasion of Privacy Act for improperly collecting and using sensitive reproductive health data from users of the Flo Health app.
The case revealed that Meta’s embedded SDKs harvested intimate details—like menstrual cycles and pregnancy intentions—without informed consent, violating state privacy laws.
It was the turn of the millennium when I got my first computer, fresh at university. Windows seemed uninteresting; it was impossible to work out how it worked or to write programs for it. SuSE Linux 6.2 was much more interesting to try and opened up a world of understanding how computers worked and wanting to code on them. These were the days of the .com boom, and I went to big expos in London where they showered you with freebies and IBM competed with SuSE and Red Hat for the biggest stall. IBM said that Linux had made it on the server and was now going to take over the desktop, so I realised that working with KDE would be a good idea. And as a novice coder it was perfect for learning Qt and how open development worked, and I loved the free software ideals. Going to the pre-Akademy conference (it was called Kastle then) in Nove Hrady was a great intro to the community in person, and in some ways I learnt more about software development in a week there than in my years at uni.
So clearly this was a good way to make a career. I dossed around for a year until the Quaker geek collective heard tell of an African Spaceman who was funding a new Linux distro called SSDS (Shuttleworth’s Super Secret Debian Startup), so I got into Debian packaging and made a point that KDE should be involved. Before long they came knocking and I went to the first Ubuntu conference in Australia. I spent about ten amazing years bringing KDE to Ubuntu, or bringing Ubuntu to KDE, for what was already called Kubuntu (not my name choice), a successful community project I’m really proud of. At one point Nokia wanted to use it alongside Plasma Active to sell on a tablet thing along with phones; this could well have taken over the world but, y’know, the iPhone happened, and Kubuntu never found a commercial use after that, although it still gets used in big places like Google, the City of Munich and Weta Digital (watch those Hobbit DVD extras). I loved being invited out to Nigeria or India to give talks and spread the word of open software. Looking back there are probably a million business cases that would have been possible, but I’m not the best at being a future visionary. Eventually Canonical decided to stop funding it, which is fair enough.
But then Blue Systems came along, another nice guy with deep pockets wanting to help, and we carried on. When Canonical decided to kill off lots of community projects we came up with the idea of moving directly into KDE to make KDE neon. It has always been crazy how open source communities like KDE are reliant on separate companies to take their software out to the world, so we wanted to change that, and I like to think we succeeded. Using CI systems we could create a much more manageable setup. Still, the system was never as resilient as it should have been, and several times KDE neon ended up shipping a duff update, which will have been very painful for users. We had three people working full time on it at the start, but before long it was just me and a volunteer, and the quality suffered as a result.
Last winter I drove to the Blue Systems Schloss for a routine conference and was organising people to give talks when the guy who pays us started off by saying he was dying and the company would be shutting down. Which was very sad, but it makes sense to end it on a high. After years of having no business model and not knowing what the aims of the company were, which caused several people to genuinely go mad, we finally had a business model of sorts, with Valve paying us to bring Plasma up to the standards needed to ship it as Desktop Scope on the Valve Steam Deck games console. Nate had been given advance notice of the company shutting down and had already started another company, Tech Paladin, to take on the business. Shouldn’t this be run as a cooperative, we wondered? No, that was too complex, he said. The next day I ended up at a funeral for some German accountants, and when I came back there had been some more discussion and we watched a video about Igalia, who make the other operating system for Valve. They are a cooperative socialist paradise, and Nate said he’d look into doing that instead of the setup where he had full control and all the profit. It was clear there was to be no other discussion on the matter of our future.
A few weeks later we had an online meeting where I proposed a useful agenda but was ignored; instead Nate gave his updated plan for a business, which was to give Dave a slice of the profit while otherwise keeping all the profit and all the control. So I gave the proposal I’d been working on, for a company with equal ownership, equal profit, a management structure and workers’ rights. A couple of weeks later we had another video call, but Nate called me first and told me I’d be excluded from it. No explanation was given beyond that I had “made some comments and would not be happy”. If someone is telling you what your emotions are, that is when controlling behaviour starts to become abusive. And thus ended my 25 years with KDE.
And what of my colleagues? Surely they wouldn’t want a setup where they have no control over their professional life and all the profit goes to one person? Well, dunno, they’ve stopped speaking to me. Nothing. Silence. Nil. Not so much as a “cheerio”, nor a “sorry we chose the option where you got excluded”, and certainly no explanation. From people I have worked with for, in some cases, some twenty years, that hurts. I don’t know why they stopped talking to me; I can only speculate, and I don’t want to do that.
We never had workers’ rights at Blue Systems; we were all on self-employment contracts. This will continue at Tech Paladin. It is illegal, but unenforceable when done in an international setup. But employment rights are not a luxury you can choose to do without just because you enjoy your job and want some more flexibility in your work day. They are fundamental, life-altering rights, as I discovered when my adopted children were taken away from me. Nobody should be doing business with or taking money from Tech Paladin, or else they are party to illegal workers’ rights abuses.
Then I started to get sad; being cut off from the last 25 years of my life was too much for me. All things come to an end, and I’ve seen plenty of people have to leave KDE because the money ran out or because they had a disagreement with someone in the project, but never over a profiteering control struggle like this. I struggled to get out of bed on some days. I’ve given my life to KDE. I’ve seen it go from a sure-fire project to take over the world, to being one open desktop project in a world of many, to the revival in recent years where we can honestly say we make some of the best software out there. I like to think I’ve been part of keeping it alive, progressing, relevant and at the forefront of commercial, government and community usage. It’s been an amazing ride, full of opportunities and adventures the likes of which I’m sure my peers from my university course have never had.
But in the end I lost my friends, my colleagues, my job, my career and my family. What’s a spod who just tried to do the right thing for society to do? Dunno. For now, if you want me, you can find me surfing the endless wave whenever the sun sets over my digital nomad coliving paddleshack at the end of the world.
On September 8th, Check Point Research confirmed what many of us in the privacy and security trenches have been warning about for years: the weakest link in your security posture may not be your systems at all — it may be the invisible web of third-party integrations you’ve tacitly approved, often without full visibility into their downstream dependencies.
After months of hard work and collaboration with our community, we are thrilled to release the beta version of Proxmox Datacenter Manager. This version is based on the great Debian 13 "Trixie" and comes with kernel 6.14.11 as the stable default and ZFS 2.3.4.
Note: The current release of Proxmox Datacenter Manager 0.9 is a beta version.
Here are some of the highlights of the Proxmox Datacenter Manager 0.9 Beta version:
EVPN configuration for Software-Defined Networking between...
How Vaultara achieved FedRAMP compliance with Ubuntu Pro
Ubuntu Pro helps businesses worldwide to innovate and shape the future. In this edition of What our users make with Pro, we talk to Dave Monk, CTO of Vaultara, a FedRAMP-approved data-sharing platform trusted by the US government. Dave shares how Ubuntu Pro became a cornerstone of Vaultara’s security, compliance, and open-source ethos.
Tell us about your project or business.
I’m the CTO of Vaultara. We’ve built a secure data-sharing platform that helps clients and their partners exchange information across separate networks. Our solution is currently used by the US government, and we’ve achieved FedRAMP compliance – which is a major milestone for any technology company.
Are you a part of the Ubuntu or broader open-source community?
I’ve been using Ubuntu for close to 20 years. I started in a Windows environment but quickly gravitated toward Linux because of the openness and flexibility. I wouldn’t call myself a programmer – more of a technologist – but I’ve built much of my career and our business around open source.
We contribute by giving feedback to projects we use, pushing PRs for open-source libraries, and supporting the community ethos. For me, giving back is essential, whether through technical contributions or simply sharing our story of how open source is used in real-world solutions.
What challenges were you facing before engaging with Ubuntu Pro?
Security and compliance were the big ones. When pitching to government clients, we needed FIPS-certified cryptography and timely vulnerability patching to meet strict requirements. Ubuntu Pro provided exactly that – certified components, regular security updates, and the assurances we needed to meet regulatory standards.
Equally important was the support. When we raise an issue, we don’t just get an automated response, we get timely, knowledgeable help. In fact, we often receive a response within 20 to 30 minutes. In our industry, where every delay matters, that kind of responsiveness is a game changer.
That combination of compliance, security, and rapid support gave us the confidence to stick with Ubuntu. Today, all our systems, from desktops in the office to the servers hosting our applications, run on Ubuntu, backed by Pro.
Why did you choose Ubuntu Pro?
I’ve always believed in doing things differently rather than just following the crowd. Ubuntu was the operating system I trusted and was comfortable with. Ubuntu Pro gave me the additional security, compliance, and support I needed to make it viable for highly regulated markets.
It also helped us prove to clients that Ubuntu is every bit as robust and secure as any other option – and in many ways, more streamlined for our needs.
We haven’t had one issue with our customer base turning around and saying, “You’re not dealing with this in a timely manner.” And I think that’s important. We’re reasonably small, we’re very nimble, and as a result, I like working with partners that are just as nimble. That’s a big thing to me because, at the end of the day, it’s how the customer responds to any pros or cons regarding a solution. And if you can respond in a timely manner that exceeds their expectations, you’re always going to win in my opinion.
What’s the most important feature for your business?
Timely security updates. Because we operate under FedRAMP, we’re required to resolve vulnerabilities within strict timeframes. Ubuntu Pro ensures we can patch quickly and reliably, while also giving us direct access to Canonical’s support team when something needs extra attention.
That responsiveness is critical. We’ve even had cases where our feedback on a vulnerability assessment influenced how quickly a patch was released. That kind of partnership keeps our solution secure and compliant.
What business processes does Ubuntu Pro enhance? Does it save you time or resources?
Simply put, we couldn’t operate in our market without it. Ubuntu Pro gives us the Linux security, consistency, and compliance we need to meet customer expectations. It allows us to reassure clients that vulnerabilities are being addressed as soon as they’re identified and that fixes will always be delivered on time.
About 90% of my focus is on ensuring our solution remains secure and stable. Ubuntu Pro makes that possible – and by extension, it helps us maintain the trust of our customers year after year.
Have there been moments when Canonical went above and beyond?
One small but memorable example: I had suggested improvements to the support ticketing system, like better text formatting, to make communication clearer. Not long after, those improvements were rolled out. It might seem minor, but it shows that feedback is taken seriously – and it makes a big difference when you’re providing detailed security reports.
Ubuntu Pro is always free for personal use – whether you’re experimenting, building, or running your own projects. Join a global community of Linux enthusiasts and builders who rely on Ubuntu every day.
Missed the last installment of What our users make with Ubuntu Pro? Read the blog for an insight into how Marc Grondin secured homelabs for himself and his family.
Do you have an Ubuntu Pro story you want to share with us? Fill out this form and we will get in touch with you!
When the FBI, CISA, and NSA release joint advisories, it’s not theory — it’s reality. The latest warnings about Salt Typhoon and Volt Typhoon should be read as a weather alert: the storm isn’t “out there,” it’s already in progress. These campaigns target infrastructure, communications, and supply chains with patience and persistence.
The migration of the Qubes documentation to Read the Docs is now complete! As of today, the new documentation hosted by Read the Docs has officially replaced the old documentation, and all old documentation URLs will automatically redirect to their new counterparts at https://doc.qubes-os.org from now on. We’re incredibly grateful to the Qubes community for making this migration a success. Special thanks go to parulin and tokideveloper for reviewing the new documentation during our 20-day community testing period, to maiska for managing the migration, and to Solène Rapenne for helping to push the merge through the final stretch.
This time we were visited by Malmen (live from São Miguel), André Barbosa (Wikimedia) and Joana Simões (OSgeo) to talk a little about their technological adventures and what they expect from the Festa do Software Livre in Porto. Diogo and Miguel visited the interior of the country and brought back plenty of good news; along the way we discussed the week's global catastrophes brought by the game Alentejo Tinto’s Law, community dramas, and even Tamagotchis on Ubuntu Touch.
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast.
The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.
The terrible-quality jingles were played live and without a safety net by Miguel (with an out-of-tune cavaquinho from Barbosa), and we apologize for any inconvenience caused. The sound effects have the following credits: Crowd with torches and pitchforks: rowdy crowd.wav by xtrgamr – https://freesound.org/s/243798/ – License: Attribution 4.0; Sad Trombone: wah wah sad trombone.wav by kirbydx – https://freesound.org/s/175409/ – License: Creative Commons 0.
This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.
The episode art was commissioned from Shizamura – artist, illustrator and comics author. You can get to know Shizamura better on Ciberlândia and on her website.
Yesterday, Apple unveiled the latest iPhone lineup adding an iPhone Air — described by them as their thinnest, lightest yet. The headlines were predictable fanfare while missing the larger points.
New research suggests 86% of European organizations believe open source is valuable for the future of their industry – but only 34% have a clear and visible open source strategy
The Linux Foundation’s latest report, Open source as Europe’s strategic advantage: trends, barriers, and priorities for the European open source community amid regulatory and geopolitical shifts, provides key insights into how European enterprises are using open source software (OSS), as well as the barriers towards further development of open source in the continent. Released in collaboration with Canonical, the report surveyed 316 enterprises and interviewed experts to give a clear picture of the state of open source in Europe.
This blog gives a breakdown of the key findings of the report, answering how European enterprises are using open source, what the biggest opportunities and challenges are, and how organizations can benefit most from OSS adoption.
Incorporating OSS into workflows and infrastructure has become increasingly common in organizations throughout Europe. The report indicates that 64% use OSS for their operating systems, 55% for cloud and container technologies, and 54% for web and application development.
[Figure 1]
The rationale behind the popularity of OSS is clear, as the report suggests: by using OSS, respondents reported that they benefited from higher productivity (63%), higher quality software (75%), reduced vendor lock-in (62%) and lower software ownership costs (58%). In addition, OSS was considered a gateway to greater transparency (49%), innovation (58%), and collaboration across organizational boundaries (48%). Overall, 56% of those surveyed consider that the benefits of OSS exceed – or greatly exceed – the costs.
[Figure 3]
Many organizations have adopted open source to improve their cost-effectiveness. For example, Nova, one of the leading telecommunications providers in Greece, adopted open source with Canonical. Nikolaos Dasyras, Senior Manager of Cloud Infrastructure & Services at Nova, emphasizes how Canonical’s straightforward planning and pricing has benefited Nova’s ability to control costs:
“From a capital expenditure (CAPEX) perspective, Canonical’s roadmap is fully iterable …. This makes it simple for us to estimate the time and cost needed when embarking on new projects. From an operational expenditure (OPEX) perspective, the support from Canonical is directly related to the footprint and number of servers – with no additional cost for management software, as Juju and MAAS are free and open source tools. Without licensing fees, we’re paying for real expertise that enriches our team, rather than paying for access.”
Why is open source development a priority in Europe?
55% consider open source alternatives a priority for European sovereignty
Beyond its numerous benefits, recent geopolitical turbulence has elevated the importance of OSS for business leaders and policymakers alike. Concerns have arisen about trade disputes and political changes causing a sudden and disruptive loss of services and technology, leading to widespread trends such as geopatriation. Open source offers a way of developing alternatives, reducing dependence on proprietary technologies and a small set of vendors. Awareness of this has led to investment in and development of OSS being recognized as a critical element in achieving digital sovereignty.
Canonical has been at the forefront of enabling digital sovereignty through open source software. For instance, we have partnered with UNICC to build a sovereign cloud based on Canonical OpenStack, providing the UN with the data sovereignty they sought. Find out more in the announcement.
What are the most important opportunities developing in open source?
41% use OSS for AI and Machine Learning
In light of DeepSeek’s release in January 2025, open source AI in Europe – and public support for it – has blossomed. The report highlights this as the trigger for a growing interest in building European-based open source AI models. This trend links back to the growing interest in achieving digital sovereignty: 89% of respondents considered OSS the most important approach to sovereign AI.
[Figure 16]
Europe has an abundance of AI/ML talent and projects. Mistral AI is one of the leading examples. But there’s also rising public sector interest in AI Factories and similar initiatives, such as the Barcelona AI Factory and IT4LIA AI Factory.
Organizations like the European Space Agency are also making headway into the use of AI. The European Space Operations Centre (ESOC), one of ESA’s premises in Germany, hosts the engineering teams that control spacecraft in orbit, manage the global tracking station network, and design and build the systems on the ground which support their missions in space. By 2030, ESA aims to significantly expand the number of satellites it currently launches. ESA relies on a number of AI-supported software tools during their missions. However, these tools require a platform on which to run.
ESA chose Canonical’s distributions of Kubeflow and Spark on top of Canonical Kubernetes, helping ESA to focus on optimizing their missions, rather than troubleshooting the supporting infrastructure. As Michael Hawkshaw, ESA Mission Operations Infrastructure IT Service Manager at ESOC put it, “we can sleep soundly knowing Canonical is supporting us and we won’t get a call at night telling us that something’s gone wrong.”
Find out the full story in the case study, or check out our webpages for more information about Canonical’s AI and ML solutions.
Whilst AI and ML present significant opportunities in Europe, developing and adopting these technologies is not without challenges. Complex new legislation and regulations, like the AI Act, are not broadly understood by organizations. Developing a formal OSS strategy, with robust security controls and a clear compliance roadmap, is critical.
What are the key challenges faced by enterprises using OSS?
66% of organizations lack a formal open source strategy
Despite growing awareness of OSS benefits and adoption of OSS, over 50% of organizations either contributed to OSS projects only in limited ways, did not contribute, or weren’t sure whether they contributed. Of those who did engage significantly with OSS projects, by employing full-time contributors or maintainers, 81% saw high, or very high, value in their investment. Why, then, is contribution limited?
[Figure 14]
The report suggests that the barriers preventing OSS contribution include legal and licensing concerns (31%) and fear of leaking IP (24%). These issues are linked to a broad lack of maturity in OSS strategy: 66% of organizations lack a formal open source strategy, and 78% have not implemented an open source program office (OSPO).
[Figure 7]
OSPOs are internal teams that oversee an organization’s open source engagement. OSPOs help to ensure compliance and avoid legal risks, whilst optimizing the organization’s use of and contributions towards OSS projects. To find out more about OSPOs, as well as how to create and successfully run an OSPO in your own organization, read our guide.
The report suggests a concerning lack of understanding of how best to comply with cybersecurity regulations and standards. However, the introduction of new legislation like the EU Cyber Resilience Act (CRA) will require organizations to develop roadmaps for compliance, especially if they commercialize products with digital elements. For a comprehensive breakdown of the CRA from our CISO, read our blog, or find a more concise overview on our webpage.
The state of open source in Europe
Europe’s open source community is flourishing, with enterprises incorporating open source across the technical stack and becoming increasingly aware of the benefits it provides. In particular, the opportunities to achieve digital sovereignty and establish European open source AI are drawing interest and attention.
However, receiving the full benefits of open source requires engagement and a structured approach to OSS within the organization. Similarly, understanding regulatory and legislative requirements, and achieving compliance can be challenging; however, working with trusted partners like Canonical, and establishing internal open source program offices can help organizations to overcome these barriers to success.
Open source software is everywhere. Research shows that around 97% of codebases contain open source software, and it’s clear to see why. It’s always magical to realize that there are thousands of free-to-use, ready-built programs and code repositories that solve problems you’d otherwise spend weeks building solutions for from scratch.
However, like with all software, you still need to ensure that your software supply chain is secure and safe to consume. This means managing two things: dependencies, and (consequently) security risks. In this blog, we’ll explore what dependencies are, why they matter, and why they cause so many problems in vulnerability management.
What are dependencies?
In software engineering, dependencies refer to external components, libraries, modules, or services that a software application or system relies on to function correctly. These external elements are not part of the core codebase but are integrated into the software’s build or runtime environment. All of these dependencies come in two forms: direct dependencies and indirect dependencies (also known as transitive dependencies).
What are direct dependencies and what are transitive dependencies?
Direct dependencies are all the libraries that you explicitly declare, include, or use in your software project or code. These direct dependencies are typically essential or mandatory for your project to work properly. Direct dependencies are, by definition, directly declared by the project team.
In contrast, indirect dependencies are the dependencies of your direct dependencies: libraries, modules, or components that your direct dependencies rely on to function. These indirect dependencies are not explicitly declared, but are pulled in through the direct dependencies.
An example of an indirect dependency might be seen in a Node.js application that relies on Express, which in turn needs the debug package to function. Through this transitive relationship, your application requires ‘debug’ to function properly – meaning that if your project uses Express, it is indirectly dependent on the ‘debug’ package.
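The same distinction can be made visible from code. The sketch below is a Python analogue of the Node.js example (the package names are an illustration of ours, with ‘requests’ standing in for a direct dependency like Express): it prints the direct dependency your project declares and everything that dependency pulls in transitively.

```python
# Hypothetical illustration: 'requests' stands in for a direct dependency
# your project declares; whatever it requires is pulled in transitively.
from importlib.metadata import requires, version

direct = "requests"                         # declared by *your* project
print(f"{direct} {version(direct)} (direct)")
for req in requires(direct) or []:          # what 'requests' itself depends on
    print(f"  transitive: {req}")
```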
Now, dependencies are clearly a good thing, given the DRY principle of software engineering (‘Don’t Repeat Yourself’): you want to avoid duplicating code and repeating logic as much as possible, and dependencies are a great way to reuse useful functionality without stuffing your codebase with duplicated components, packages, and libraries.
However, dependencies still need to be taken care of. Given the large number of direct and indirect dependencies that software can have, projects can quickly develop large, exceedingly complex dependency trees that make it very difficult and time-consuming to manage their security, patch them or upgrade them to new versions where necessary, and resolve any errors that arise over time from their use.
Why are dependencies a risk in software development?
Dependencies can be a problem in your software security for a number of reasons. Let’s take a look at a few.
Dependencies can mean vulnerabilities
If your application relies on a particular package, framework, or library that becomes affected by known vulnerabilities, your app could be compromised or at risk.
Generally speaking, dependencies can contain flaws that introduce risk into your programs or supply chain.
For example, attackers can exploit security flaws present in many libraries and packages. These flaws can range from common vulnerabilities like SQL injection, command injection, authentication bypass, and buffer overflows, to more complex logic errors or insecure configurations. Exploiting such weaknesses can allow attackers to gain unauthorized access, manipulate data, or disrupt services.
Sometimes, malicious actors can introduce malicious code into legitimate packages, meaning that anyone who uses this otherwise ‘trustworthy’ project will have their software supply chain compromised.
Here are some ways malicious actors can attack your systems through dependencies:
Account compromises
This happens when a developer’s credentials are compromised, giving attackers a legitimate vector to upload malicious code into seemingly safe repositories or libraries. This is one of the biggest threats in software supply chain management, given how many cyber attacks are aimed at gaining access to accounts that have privileges in a system.
Dependency confusion attacks, or namespace attacks
A dependency confusion attack happens when one package or library is confused for another. This can happen when an attacker uploads a malicious package with the same name as a legitimate, internal, or private package into a different namespace, but perhaps the most common approach is to give a package a name very similar to that of the legitimate project.
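One common mitigation, sketched below under our own assumptions (the trusted package list and the 0.8 similarity cutoff are illustrative, not values from any standard), is to flag requested names that are close to, but not exactly, names you already trust before anything gets installed.

```python
# Illustrative typosquat/confusion check; the trusted set and cutoff are
# assumptions for this sketch, not a real policy.
import difflib

TRUSTED = {"requests", "numpy", "internal-billing-client"}

def check_name(name: str) -> str:
    if name in TRUSTED:
        return f"ok: '{name}' matches a trusted package exactly"
    close = difflib.get_close_matches(name, TRUSTED, n=1, cutoff=0.8)
    if close:
        return f"warning: '{name}' looks like '{close[0]}' - possible confusion/typosquat"
    return f"unknown: '{name}' is not on the trusted list - review before installing"

print(check_name("requests"))             # exact match
print(check_name("reqeusts"))             # near-miss, flagged
print(check_name("internal-billing-cli")) # near-miss of an internal name, flagged
```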
Hallucination attacks
This newer method of attack has gained traction with the rapid rise of GenAI. Quite often, a GenAI tool will hallucinate when generating code for user projects, referencing packages or libraries that don’t exist. Attackers work out which names are consistently and recurrently hallucinated by LLMs and register those names. They may even upload functioning code at first, but eventually they upload malicious code, waiting for the day a developer drops GenAI-created code that ‘looks fine’ into their project.
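A simple sanity check against this, sketched below with made-up package names, is to look a suggested dependency up on the public registry before adding it: a name that doesn’t exist at all was probably hallucinated, and one that exists with almost no release history deserves extra scrutiny.

```python
# Illustrative check against PyPI's public JSON API; the package names are
# made up for the example.
import json
import urllib.error
import urllib.request

def pypi_metadata(name: str):
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None     # e.g. 404: the package does not exist on PyPI

for suggested in ["requests", "totally-made-up-helper-lib"]:
    meta = pypi_metadata(suggested)
    if meta is None:
        print(f"{suggested}: not on PyPI - likely hallucinated, do not add it")
    else:
        releases = len(meta.get("releases", {}))
        print(f"{suggested}: exists with {releases} releases - a very short history warrants review")
```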
Dependencies can increase your attack surface
Many libraries or packages in your codebase contain code for your project to execute or reference. That means that every package is a potential pathway for malicious actors or code to make its way into your apps and systems, or is a place where bugs, out-of-date code, or other issues can cause your project to fail.
Dependencies can make compliance hard to achieve
Many cybersecurity regulations and security standards have strict requirements around how you use, update, and secure your dependencies. For example, FedRAMP security controls have many security requirements for software, including that you fix high-risk vulnerabilities within 30 days – and this implicitly applies to any installed dependencies.
Not properly securing or updating your dependencies could lead to a loss of your compliance status.
How to patch dependencies and transitive dependencies
There are two ways to deal with dependencies.
The first is to not have any dependencies at all in your project. That’s problematic for a very obvious reason: there is so much useful code out there that you can use in your project instead of writing it all yourself from scratch. Saying “well, let’s just avoid dependencies” is unrealistic, and creates far more hurdles and resource demands than it is worth.
The second strategy is to ensure that you manage these dependencies, have visibility over your entire software supply chain, and take a risk-based approach to mitigating threats to your supply chain, including ensuring that they are patched against known vulnerabilities.
This second strategy is the far more reasonable one, and it’s the industry standard when dealing with direct and indirect dependencies. On top of this, you should also ensure that you’re only using the smallest number of mandatory packages, in order to minimize your total number of dependencies. Try to avoid ‘nice to have’ packages that aren’t required for core functionality.
Unfortunately, patching and managing dependencies is not as simple as just getting updates from public repositories. Open source software and public package repositories, such as the npm registry or PyPI (the Python Package Index), do give you access to a huge amount of high-quality software – but as we’ve pointed out above, these public repositories are not immune to attacks. And that means risks in your software supply chain.
The common approach to dealing with these risks is scanning for outdated or at-risk packages and updating them to the latest fixed versions. In some cases, there might not be an upstream update available, meaning you might have to fix specific flaws yourself in the version of the package you depend on.
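As a bare-bones illustration of that scanning approach (the package names and “first fixed” versions below are placeholders, not a real advisory feed), you can compare what is installed against the minimum versions known to carry a fix:

```python
# Placeholder advisory data for illustration only; a real workflow would
# consume a vulnerability feed or use a dedicated scanner.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version   # third-party 'packaging' library

MINIMUM_FIXED = {
    "urllib3": "2.2.2",   # hypothetical "first version with the fix"
    "jinja2": "3.1.4",
}

for pkg, fixed in MINIMUM_FIXED.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        continue                          # not installed, nothing to check
    status = "OK" if Version(installed) >= Version(fixed) else "NEEDS UPDATE"
    print(f"{pkg}: installed {installed}, fixed in {fixed} -> {status}")
```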
However, both of these approaches can be time-consuming and manual, or slow to catch malicious components or actively-exploited vulnerabilities before a security incident occurs. In fact, according to IDC research, only 41% of organizations are confident that they can patch vulnerabilities within desired timeframes.
The best way to patch and deal with dependencies
Organizations need two things to address the root issue that leaves them vulnerable:
A trusted source that vets and monitors software packages
Timely, automatic security updates that do not break functionality and maintain software stability
Let’s explore each of them.
A trusted source for your software supply chain
This is needed for the entire software supply chain, including your ever-changing list of transitive dependencies. The great news is that you can get all of this from your OS. Ubuntu is a great foundation for even the most complex of deployments, and it’s trusted by thousands of developers around the world. In fact, Ubuntu is the most popular Linux distribution in the world.
An Ubuntu Pro subscription takes that securely designed baseline even further, giving you access to comprehensive security maintenance for thousands of open source packages on top of the operating system, as well as hardening and patching automation tools that enable you to apply patches at scale. It’s free for personal use on up to five machines – and there are free trials available for enterprises.
This has advantages over the approach of constantly following the latest upstream version. It’s simpler, faster, involves less manual effort, and allows you to consume updates for your dependencies in one single place, rather than an app-by-app or project-by-project basis – making for easier monitoring and compliance work.
Timely, automatic security updates
You need a trusted source for your patches and packages, but equally critical is having some way of applying these vital updates at regular intervals. The best way to do this on systems using Ubuntu is through Ubuntu Pro, a service that automatically applies available fixes for all your software. Ubuntu Pro is far more efficient and effective than simply following the many upstream versions of the packages and libraries you use.
This approach also has several advantages over manual or alternative methods. First, it’s less time consuming, as all your packages are updated without needing your direct input. Second, it usually delivers security updates and maintenance across a much wider range of packages and libraries (in the case of Ubuntu Pro, you’ll get updates to all packages available in both the Main and Universe repositories, so you’ll get both community-maintained OSS and the Canonical-supported OSS). Finally, these fixes are typically backported to older or legacy versions of software or Ubuntu. Doing this work yourself, especially in software environments with frequent version changes (like PHP or Node.js) is skilled, time-consuming work. And why do that work yourself when we do it already, thousands of times a year, for millions of devices and end users?
Our approach to vulnerability management has been proven to help companies secure their operations, systems, and services, and simplify the pathway to meeting demanding regulatory compliance. For example, we’ve helped large data analytics companies, and SaaS providers like Lucid meet FedRAMP requirements, through our cryptographic modules and vulnerability management processes.
In summary, managing software dependencies is crucial for maintaining a secure and stable software supply chain. While dependencies offer significant benefits in terms of code reuse and efficiency, they also introduce potential vulnerabilities and increase your overall attack surface. Effective dependency management involves understanding the difference between direct and transitive dependencies, recognizing the risks they pose, and adopting proactive strategies for securing and maintaining them. Relying on trusted sources for software packages and implementing timely, automatic security updates, like those provided by Ubuntu Pro, can significantly enhance your organization’s security posture and simplify regulatory compliance. By taking a strategic approach to dependency management, you can harness the power of open source software while minimizing your overall risk profile.
If you want to know more about how Ubuntu Pro helps you with comprehensive vulnerability management for your open source dependencies, visit ubuntu.com/security or ubuntu.com/pro.
Purism set extremely ambitious goals starting with its founding as a Social Purpose Company: putting the protection of digital rights above profit, putting individual freedom above controlling users, inventing and innovating GNU/Linux for mobile, creating the first truly convergent OS (PureOS), and shipping the first phone with Made in USA Electronics. The constant throughout the entire company is a belief that people should be in control of their technology. This belief is also extremely valuable to organizations and agencies who need the security provided by fully controlling their devices.
Mousai is a simple application that can recognize songs, similar to Shazam. Just click the listen button, then wait a few seconds. It will magically return the title and artist of that song!
This is Part 2 in the series about our website Design sprint. View Part 1 if you haven’t already!
At Canonical, our design team is dedicated to providing valuable resources and insights into designing for open source, especially when it comes to complex products. Recently, we started a project to refresh our online presence, organizing a sprint to redesign our website. This article is the second part of a two-part series that takes you through the methods and tools we used to streamline and redefine our design team’s digital footprint.
In part 2, you will read about:
Executing the sprint – Day-by-day insights into the methods and tools used during the sprint, including how the team collaborated to implement changes.
Post-sprint actions and delivery – A reflection on what we achieved, how the team communicated updates, and the next steps for maintaining a cohesive online presence.
For each section, you will find reflections on how our team experienced each exercise, the biggest obstacles, and lessons learned for next time. I hope you find valuable insights and can learn from our experience to organize your own sprint.
Note: Be sure to check out part 1, where we dive into the preparation phase!
Executing the sprint
Day 1 – Information architecture, acceptance criteria, user needs, page outlines
The goals
Day one was all about setting the tone for the rest of the week. We kicked it off with a presentation on our motivation for this project and our mission statement, followed by an overview of the agenda for the week and the main goals we wanted to hit:
Set the tone for the week with a kick-off covering:
Goals for the week:
Create content and wireframes for priority pages
Create a plan for remaining deliverables (spreadsheet with assignees, deliverables, and timelines to hand over to project management)
Mission statement
Future website information architecture
Ownership of website sections to maintain and create content for
Present sprint structure
Walk through resources and where to find them
The journey
For day 1, we wanted our participants to understand the big-picture idea for the website.
The kick-off meeting focused on a couple of points to set the tone for the week:
The outcomes of the information architecture workshop, to show how the different areas of the website would relate to each other.
The mission statement, to give an idea of the tone we wanted to see throughout the pages.
Insights from our Head of Design, to give the team more information on the type of people he was hoping we would reach with our website, and what he was excited to see at the end of the week.
Once we established the big picture, it was time to focus on individual sections. We divided the team into groups of 2 or 3 and assigned a section of the website to work on. Each group was tasked with understanding the type of people we wanted to reach with our content and the questions they might have. We also shared links to existing content for participants to reference, reuse, or mark as outdated to avoid duplicating work when possible.
To ensure consistency across all groups, we provided a template for each group to complete, which could later be used as the foundation for the page’s meta description.
TEMPLATE
Page title (60 characters max)
Page description (160 characters max)
Suggested page URL
Suggested navigation
Sample: Home > About the team > Mission & Vision
Target audience(s)
Top 3 visitor questions to answer on the page
Headlines
Page owner
Link to your images and logos in the asset manager or elsewhere
Link to Figma file
Most groups did some version of affinity mapping to work through their ideas.
Open design working group: what questions should we address with our content?
The outcome
By the end of day one, all teams had completed their template documents, outlining their target audience, content needs, and a few headline ideas.
Day 2 – Headlines, connections, content
The goals
Refine headlines and add content bullet points under each
Work out connections to other pages
Include basis for visuals in content planning
The journey
With clear outlines for everyone, we moved on to refining headlines, expanding on key content, and working out the connections between the different pages to facilitate future navigation. The goal was to enhance usability while laying the groundwork for future growth.
We also wanted to spend time finding media to support key points. Our visual design team gave a refresher on brand guidelines and shared examples of successful pages. This helped spark ideas and inspired ways to make pages more impactful and engaging.
Visuals and content designers work together to create engaging and easy-to-digest pages, so it was crucial to plan ahead so that the visual team had time to develop high-quality assets. Crafting great visuals takes time, and identifying where visuals can enhance the narrative or break up the page improves the outcome for everyone.
Following the brand presentation, the attendees audited existing content to find what could be linked, reused, or repurposed. Additionally, we asked participants to include external resources that would add depth to their content. There is a lot of great information out there, and we want to amplify valuable resources and voices so our audience can find them more easily.
The rest of the day was spent writing more detailed bullet points under each headline, ensuring the content addressed key topics concisely and effectively. Teams were also asked to provide ideas on the types of visual assets (images, infographics, and videos), their place on the page, and a short description for them.
The outcome
By the end of the day, we had skeletons for all priority pages ready to be expanded into fully fleshed-out content and some ideas for illustrations or pictures to add to the pages.
Overall, this second day highlighted the benefits of layout templates for structuring pages and the motivational power of pair work, though the need for breaks and a more evenly distributed workload became clear.
Day 3 – Content, assets
The goals
Refine bullet points
Start writing full sentences
Format your document to be a proper copy doc
The journey
We focused on refining content and ensuring it was structured for effective web use. Our content expert gave the team a presentation on our copy style guide and our tone of voice to help set the stage. Writing for the web requires an emphasis on clarity and formatting, making this day essential for resolving any content-related issues before moving into the visual design phase.
We also provided templates to help teams format their text files according to the web team guidelines. This was key to keep things consistent and make implementation faster.
Organizing the pages according to internal text formatting practices provided valuable insights into discrepancies between groups. It highlighted areas where some pages were too dense and needed to be streamlined or supplemented with visuals to improve clarity, and areas where additional depth was needed for the content to be valuable. This structured approach also made it easier to review content in context, offering a clear sense of how it would ultimately appear on the site without visual distractions. It allowed teams to refine phrasing and style effectively while keeping in mind the overall page layout and flow.
The outcome
By the end of day 3, all groups had finalized the content for their pages and developed a clear vision of the overall layout.
At that stage, keeping motivation high through recognition of achievements was crucial, as the intense focus had left many participants feeling drained. Concerns about consistency and standards highlighted the need for clearer expectations, particularly regarding the level of detail required in the content. To maintain momentum, we addressed these concerns in the next day’s kick-off session.
Using the open design page as an example of how we use Google Docs formatting to experiment with layouts at the copy stage
Day 4 – Layout, wireframes, final content touches
The goals
Finalize content writing
Create low-fidelity wireframes
The journey
With the content and basic layout in mind, we started this day with a presentation from our visual team on how to effectively balance layouts using our grid systems and examples of best practices when designing layouts in Figma. The idea was to get the teams in a creative mood to experiment with visuals and focus on the overall flow of their pages.
From there, we guided the teams on how to transfer their work from Docs to Figma using components from our low-fi webpage library, encouraging them to experiment with different layout options to find the best structure for each page. This process gave us a clearer sense of the final product and allowed us to refine the visuals and overall flow. The Round Robin review exercise was particularly valuable at that stage. It allowed the different teams to examine other groups’ work, offering inspiration and identifying areas for improvement.
The outcome
At the end of that day, we had low-fi wireframes for all the priority pages. Some teams also started experimenting with higher fidelity wireframes, using pictures they took during our in-person sprints to liven up the pages.
Extract of the low-fi prototype for the open design page
Day 5 – Wrap-up, finalizing planning sheet
The goals
Refine wireframes from the previous day
Fill out the planning spreadsheet
The journey
The final day was planning-heavy. Teams focused on refining the wireframes created earlier and compiling all necessary information for the web team into the planning spreadsheet. The goal was to ensure that the person responsible for delivering the website could access all the details needed to build the pages without being familiar with each one. To streamline communication, groups assigned a point of contact for each page, ensuring the project owner knew who to ask for additional information or clarification. If further work was required on a page, the designated contact person would also be responsible for tracking progress and keeping the project owner updated.
The outcome
By the end of the week, most of the pages were close to a final, high-fidelity version. A few pages still required additional visual refinement, and these were flagged for completion after the sprint. The planning spreadsheet was helpful to clarify responsibilities and served as a final progress checklist to wrap up the sprint effectively.
Seeing all the work coming together was incredibly rewarding and showed the significant progress made throughout the process.
Extract of one of the high-fidelity wireframe options for the open design page
Post-sprint actions and delivery
A snapshot of what our pages looked like on the final day
The goals
Finalize leftover work
Arrange a handover to the web team
Gather feedback
Communicate success to other teams
The journey
Handing over the pages
After completing the sprint, we reviewed the high-fidelity wireframes and copy documents to ensure they were aligned. One key detail we focused on was verifying links and their intended destinations. Once we finalized each page, we cross-referenced the planning sheet with the Figma files and coordinated with the Web team to hand over the work. This structure helped make sure we didn’t forget anything and expedited the briefing process.
Gathering feedback
At the end of the week, we sent a survey to the participants to gather feedback on the sprint’s organization and outcomes. The responses showed that despite an initial adjustment period, participants had a great time collaborating on a project that benefited the entire team. Many noted that the results exceeded their expectations, with the deliverables alone justifying the time commitment. Interestingly though, participants noted that the most meaningful outcome was the strengthened connections among team members.
Communicating success
Given the significant time investment required to execute this sprint, it was important to show a clear return on that investment.
We organized a presentation for the entire Design team to showcase the work attendees completed throughout the week, with special thanks to our moderators and in-house experts. Additionally, we shared the feedback gathered during the week, highlighting the broader impact of the exercise: strengthening team cohesion and fostering a renewed appreciation for the incredible work our content team does every day.
We also decided to write this very series! The effort that went into planning and executing this sprint is worth sharing and celebrating, and hopefully, if you’ve made it this far, you’ll find it valuable too.
The takeaways
I was glad to see the positive reactions about connections, as this was an intentional focus from the start. As the team grows, building strong relationships becomes more challenging. We naturally tend to orbit around the same groups of people and have limited opportunities to collaborate with designers outside our product teams. With this in mind, I carefully paired people with different skill sets and seniority levels to encourage fresh perspectives and stronger bonds. This was tricky, as we didn’t receive the final attendee list until the sprint began, but we did our best to organize ahead of time. Seeing it pay off was one of my personal highlights.
On the flip side, we received a lot of feedback about the intensity of the week. In hindsight, we could have been even more cautious about how much we scheduled. Despite factoring in breaks, our agenda was still too ambitious, and midway through, we realized we should have scaled back further. Content creation demands deep focus and creativity, which can be exhausting without proper downtime. It’s tempting to maximize every available moment, but in the long run, this approach can be counterproductive. Breaks are essential for recharging and sparking new ideas, and without enough breathing room, engagement is more likely to drop.
Key takeaways and final thoughts
The sprint week was an intense but rewarding experience, filled with valuable lessons and achievements. Overall, the experience highlighted the importance of proactive communication, realistic pacing, and collaboration to create a successful and sustainable workflow.
Here are some of my takeaways from the experience:
Spotlight your experts: Providing a platform for team members to share their expertise with the rest of the team was a great way to recognize our internal talents and for everyone to build empathy with one another.
Round-robin exercises are the best: The round-robin reviews were instrumental in accelerating the review process, offering multiple perspectives that helped identify inconsistencies and actionable improvements more quickly.
Keep it real: Recognizing progress and setbacks was important to avoid people becoming overwhelmed or disengaged. The recap session at the end of each day helped keep the energy up as the week went on. It provided a space for the teams to chat and share advice or ideas, or just appreciate one another.
Under-plan and over-communicate: Clearer and more proactive communication ahead of the sprint would have helped us gather more participants. Even with a conservative approach, our goals were ambitious. Having more hands on deck would have allowed us to lighten the workload and ensure a more balanced experience for all participants.
Plan twice as many breaks as you think you may need: Building in breaks is essential to avoid burnout and maintain productivity. It is especially important for brain-intensive work like copywriting, which requires a lot of focus.
This article is the spiritual successor of this great article published years ago. I found it helpful when I worked on setting up this whole thing. Check it out next for more planning inspiration!
About two years ago I wrote a Rust crate to fulfill this promise:
If a crash occurs while updating a file, the file either contains the old
contents, or the new contents, nothing in between.
This crate essentially solves the following problem: when you update a file,
generally you open it, truncate it, write to it block-by-block, and eventually
close it. The problem with this process is that if a crash occurs at any point
during this process (the program segfaults, the kernel panics, the machine
loses power, …), your file will be left in an intermediate state where the
old contents are completely lost, and the new contents are only partially
written (if at all). My crate solves this problem by making file updates atomic.
This crate is publicly available as atomic-write-file and you can read its
description for more details (sorry, I’m very bad at naming things).
The way the crate works is simple:
1. Instead of opening the target file directly, the crate opens a temporary
file.
2. You write to the temporary file.
3. Once all changes are in, the temporary file contents are synced to the
storage, and it is atomically renamed so that it replaces the target file.
This is by no means a new technique that I invented; it's a very
common strategy for solving this problem. This technique guarantees that if a crash
occurs at or before step 3, then the file will have the old contents,
unchanged. If a crash occurs after step 3, then the file will have the new
contents.
There’s a small caveat with this technique: if a crash occurs between steps 1
and 3, then the crate will leave behind a temporary file that occupies some
space on the storage device with no purpose.
This is where Linux anonymous temporary files come into play: this
Linux-specific feature allows you to create temporary files that are written on
the filesystem, but are not given a path in the filesystem. If a crash occurs
between step 1 and 3, the temporary file will simply be forgotten into
oblivion.
Meet the opponent: btrfs
Using Linux anonymous temporary files seemed very appealing for
atomic-write-file to avoid leaving leftovers behind, although I knew that
they’re not the perfect solution: there’s limited support (they’re supported
only on Linux, and only on some filesystems), they require the use of Linux
features that may not be available all the time (the /proc filesystem), and
they also have the problem that, sooner or later, they will need to be given a
name if we want to be able to replace the target file, and this leaves a small
time window during which we could leave some cruft behind.
But none of these are huge problems, and in fact atomic-write-file is able to
do a best effort attempt at using anonymous temporary files, and reverting to
regular files if that doesn’t work.
This problem was specific to btrfs, and I would argue that it was caused by a
btrfs bug rather than a bug in the crate itself, but nonetheless the issue
revealed a critical flaw in my project: I did not have any tests to check
whether my crate was fulfilling its promise or not!
Therefore I decided to create a small test suite for Linux to simulate crashes
and inspect the contents of files after those crashes occurred on a variety of
filesystems, which is what this blog post is about. This test suite now runs on
my laptop and on GitHub Actions, and potentially any
other CI environment.
Testing strategy
Here’s the idea I came up with: I can create a virtual machine, with a virtual
storage device attached to it, and a filesystem initialized with a file. The
virtual machine updates the file using atomic-write-file, and then triggers a
kernel panic. After that, the file is inspected for consistency.
So here is how it works in practice: there are 3 main pieces:
1. A test binary that is responsible for updating the test file using
atomic-write-file.
2. An init.sh script that runs inside the virtual machine.
3. The run-tests.sh script that puts everything together and starts up the
virtual machine.
Nothing special to see here: this is the minimal code required to use atomic-write-file.
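For reference, here is a minimal sketch of what such a test binary could look like. It assumes the crate exposes an AtomicWriteFile::open constructor and a commit() method (names recalled from the crate's documentation and possibly inexact), and it reuses the /test/file path that appears later in this post.

use std::io::Write;
use atomic_write_file::AtomicWriteFile;

fn main() -> std::io::Result<()> {
    // Open a temporary (possibly anonymous) file next to the target file.
    let mut file = AtomicWriteFile::open("/test/file")?;
    // Write the new contents to the temporary file.
    file.write_all(b"new contents written by the test binary\n")?;
    // Sync the data and atomically rename it over /test/file.
    file.commit()?;
    Ok(())
}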
The init.sh script
The init.sh script is more interesting: this script in fact is meant to run
twice. The first time it runs, init.sh will run the test binary above and
trigger the kernel panic. The second time, it will read the test file contents
and report them back to the run-tests.sh script for inspection.
Let’s take a closer look at what it does: first, it mounts the /proc and
/sys virtual filesystems. These are needed for some basic system functions,
and also to enable the Linux anonymous temporary file feature.
mount -t proc none /proc
mount -t sysfs none /sys
Then it mounts the filesystem containing the test file. Before it can do that,
however, it needs to load the correct kernel module for the filesystem. Nothing
too fancy here: the filesystem type is simply specified on the kernel command
line (read through /proc/cmdline):
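Something along these lines, as a sketch (the exact parsing in init.sh may differ; the test.fs= option name is taken from the description further down, and /dev/sdb and /test are the paths used elsewhere in this post):

# Extract the filesystem type from the kernel command line (test.fs=...)
fs="$(sed -n 's/.*test\.fs=\([^ ]*\).*/\1/p' /proc/cmdline)"
# Load the matching kernel module, then mount the test storage
modprobe "$fs"
mount -t "$fs" /dev/sdb /test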
Then, if this script is run for the first time, it will run the test executable and quit:
echo 'Running test binary'
atomic-write-file-test
init.sh (as the name suggests) is run as the init script of the virtual
machine. In Linux, if the init script quits without invoking the proper
shutdown sequence, the kernel will panic. So we don’t actually need to do
anything special to trigger a panic. I could have inserted an echo c >
/proc/sysrq-trigger line to make it more explicit, but then I figured that
having that line wouldn’t be nice for people who want to test the script on
their system.
You might ask: how does init.sh know if it’s the first time it gets called or
the second? Again, nothing fancy here: run-tests.sh gives that hint using a
kernel command line argument:
if grep -q test.verify /proc/cmdline; then
    # ...
fi
If test.verify is specified on /proc/cmdline, then it means this is the
second time this script is run. In that case, the file contents are just
printed on the console:
if grep -q test.verify /proc/cmdline; then
    echo 'Verifying test file contents'
    echo '-----'
    xxd /test/file
    echo '-----'
    poweroff -f
fi
This uses xxd so in case there are garbled characters on screen, we can
comfortably analyze the output and figure out what happened. At the end, the
system is more gracefully shut down using poweroff to avoid another kernel
panic (this is not really necessary).
The first thing run-tests.sh does is compile the test binary above. The virtual
machine runs in a very bare-bones environment, so I used static linking to avoid
having to copy additional libraries like libc:
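Presumably something like the following (an assumed invocation; the real script may select a different target, or use RUSTFLAGS to force static linking instead):

# Build the test binary as a fully static executable (musl target assumed)
cargo build --release --target x86_64-unknown-linux-musl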
Then run-tests.sh creates a minimal initramfs for the virtual machine. The
initramfs is a filesystem that gets mounted by Linux early at boot. It’s
generally used by Linux distributions to load the main operating system, but in
my case the initramfs contains everything needed for the test. In particular,
it contains:
The test binary.
The init.sh script.
BusyBox, to get the needed utilities like xxd and poweroff.
The kernel modules to mount filesystems and their dependencies.
The kernel modules are a bit more complicated. First of all: where can we get
them from? Ideally, I would have liked to download them from some minimal Linux
distribution, but this turned out to be more complicated than expected. In the
end, I decided to just steal them from the host operating system:
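Roughly like this, as a sketch (it assumes the initramfs is staged in a local initramfs/ directory, that $fs holds the filesystem type under test, and that modules are compressed with zstd or xz; the real script may differ in the details):

kver="$(uname -r)"
# Copy the host's kernel module tree into the initramfs staging directory
mkdir -p initramfs/lib/modules
cp -a "/lib/modules/$kver" initramfs/lib/modules/
# Ask modprobe for the filesystem module and its dependencies...
modprobe --show-depends "$fs" | awk '/^insmod/ { print $2 }' |
while read -r mod; do
    # ...and decompress only those modules, if they are compressed
    case "$mod" in
        *.zst) zstd -d --rm "initramfs$mod" ;;
        *.xz)  unxz "initramfs$mod" ;;
    esac
done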
What this does is copying all the module files from the host, getting all the
dependencies for the filesystem module using modprobe, and then uncompressing
those modules if they’re compressed (I found this much easier than adding
compression support in the virtual machine). An alternative approach would be
to simply decompress all the modules, but that’s much slower.
Once everything has been copied over and uncompressed, the initramfs image is
created. The virtual machine is then booted with QEMU. Again, we're stealing the
kernel from the host operating system, which is also
where the kernel modules come from. The drive with index=0 is the initramfs
and will be visible as /dev/sda in the guest; the other drive with index=1
is our test storage and will be visible as /dev/sdb.
root=/dev/sda tells the kernel to boot from our initramfs image. panic=-1
tells the kernel not to reboot in case of a panic (panic=N specifies the
number of seconds to wait before rebooting in case of kernel panic; specifying
a negative value blocks that behavior). console=ttyS0 allows us to capture
the output printed by the init script. test.fs=... is the option that
init.sh is going to look for to understand what filesystem type to use.
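Putting the description above together, the QEMU invocation presumably looks something like this (reconstructed from the options discussed; the image file names are placeholders rather than the real script's, and $fs is the filesystem type under test):

qemu-system-x86_64 \
    -kernel "/boot/vmlinuz-$(uname -r)" \
    -append "root=/dev/sda panic=-1 console=ttyS0 test.fs=$fs" \
    -drive file=initramfs.img,index=0,media=disk,format=raw \
    -drive file=test-disk.img,index=1,media=disk,format=raw \
    -nographic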
Once that runs, the virtual machine is expected to do its job and crash with a
kernel panic. We then need to restart it to verify the file contents. The QEMU
command is exactly the same except for the additional test.verify option:
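In other words, only the kernel command line in the sketch above changes, roughly:

    -append "root=/dev/sda panic=-1 console=ttyS0 test.fs=$fs test.verify" \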
The output from this command is captured in a file that can be analyzed. If you
remember how init.sh is written, you might have noticed that it writes the
test file contents between two markers:
echo '-----'
xxd /test/file
echo '-----'
So what we have to do to verify the contents of the file is simply look for
those two ----- markers, get the content in between, and parse it through
xxd -r:
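For example (a sketch; the captured output file and the expected-contents file are placeholder names):

# Extract the hexdump between the two ----- markers, turn it back into
# raw bytes, and compare against the expected file contents
sed -n '/^-----$/,/^-----$/p' qemu-output.log | sed '1d;$d' | xxd -r > actual
cmp actual expected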
The next step for me was to set up some automation to make sure that my test
script was run whenever I made any change to the crate. Because my project was
already hosted on GitHub, I decided to go with GitHub Actions. I was a bit
worried that I would have to struggle to make it work because my script
works by stealing the kernel from the host, and I thought that the GitHub
runners could have some protections in place to prevent me from reading the
kernel. To my surprise, there were no such restrictions. In fact, I did not
have to modify a single line of code to make my tests work in GitHub Actions:
check out the workflow file if you’re curious.
Result
In the end, the test suite was able to reproduce the issue with btrfs and
anonymous temporary files, as well as show that the fix was working as
intended. This is what it looks like (output in the screenshot was trimmed down
a bit):
Output from the run-test.sh script before applying the fix, showing a failure.
Output from the run-test.sh script after applying the fix, showing a success.
Future
I’m pretty satisfied with the general approach of running in a virtual machine
and simulating a crash, but the implementation has a huge limitation: because
it steals the kernel from the host, it cannot simulate crashes on other
kernels, operating systems, or platforms.
I think in the future I’m going to take a look at cross-rs, a cross-platform
tool for Rust crates. Cross-rs works in a somewhat similar way to my test suite,
in that it uses QEMU emulation to run tests. Maybe I can reuse their QEMU
images and tweak them to run my crash tests. If that is feasible, then I will
be able to extend my test suite to all major platforms and operating systems.
Due to a bug, Viridian extensions are currently not enabled in Qubes OS. Although Viridian extensions are enabled in our libvirt config, this setting is mostly ignored by libvirt. While it is used when libvirt converts the XML config to the xl config format, it is not used when actually creating a VM. Advanced users who wish to confirm this on their own systems may do so with the command sudo xl list -l <NAME_OF_HVM> in dom0.
This XSA affects only XAPI, which is an alternative toolstack. Qubes OS uses libxl instead of XAPI.
About this announcement
Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.
In recent years, digital transformation has emerged as a key driver of business growth. Customers expect fast, secure, and always-available experiences, while IT teams must ensure continuity in an increasingly complex environment.
The cloud has been the natural response: it enables instant scalability, reduces physical infrastructure, and allows services to be deployed in minutes. But moving to the cloud is not enough. As applications become the core of digital operations, companies face new challenges:
User volumes can spike suddenly at key moments such as campaigns or product launches.
Digital experience is decisive: every second of delay can translate into lost revenue.
The pressure to optimise costs coexists with the need to keep teams lean, without oversizing resources.
It’s not just about being in the cloud — it’s about doing so with the reliability to absorb traffic peaks, the flexibility of a managed service, and without adding the burden of constant maintenance.
This is where SkudoCloud comes in: a managed cloud load balancer with zero operational maintenance and built-in security, designed to deliver performance without complexity.
The challenge of managing web traffic in the cloud
Managing traffic in cloud environments can be more challenging than it appears. What starts as a simple solution to distribute requests can easily turn into a fragmented ecosystem that is difficult to maintain.
In practice, many companies realise they have simply moved the same old complexity to a new environment — one that is more dispersed and often more expensive:
Partial solutions: different services for load balancing, firewalls, WAF, or monitoring, contracted separately and not always working well together.
Constant technical maintenance: updates, manual configurations, and adjustments that require specialised time and oversized teams.
Hidden costs: not just licenses or usage fees, but also the hours of staff needed to keep systems running.
Risk of failure: the more dispersed the components, the more points of failure there are. And any outage, no matter how brief, has a direct impact on revenue and reputation.
Anyone who has experienced downtime during a traffic spike or had to troubleshoot issues in the middle of a campaign knows it’s not a minor detail. The cloud promised simplification — but for many teams, the result has been the opposite: increased complexity and a heavier reliance on internal maintenance.
SkudoCloud: secure and balance your applications in minutes
After years of fragmented architectures, businesses are looking for a more straightforward way to manage cloud traffic. That’s where SkudoCloud comes in: a managed SaaS solution that combines advanced load balancing and built-in web security in a single platform.
SkudoCloud is a cloud-based load balancer that ensures your services are always available and secure. It distributes traffic efficiently and provides visibility into what’s happening across your applications — all without requiring your team to worry about configurations or maintenance.
With SkudoCloud, your team gets:
Zero Ops: no maintenance for your team. Updates are automatic, and SkudoCloud engineers keep the service always available.
Integrated security: SSL/TLS, web application firewall (WAF), and DDoS protection included by default. No extra configurations or hidden costs.
High availability: services are always online, with no additional hardware or ongoing monitoring required.
Simplified management: a single platform that unifies everything, reduces administration, and frees your team to focus on real value projects.
SkudoCloud is already available and designed to get you started in just a few minutes. There are no long deployments or the need for a dedicated technical team: instead of spending weeks configuring and fine-tuning services, you gain direct access to a secure cloud load balancer.
A managed, clear, and simple solution that gives you high availability and application protection — without the burden of maintenance.
California’s New Privacy Rules Are a National Signal: The California Privacy Protection Agency’s latest regulations redraw the playing field. This is the first set of rules in the U.S. to require: annual, independent cybersecurity audits for high-risk businesses; comprehensive risk assessments for key data processing activities; and bias and privacy impact reviews for automated decision-making systems […]
There is a minor update of live/install ISO images of Sparky of the stable line 8.0.1. The new ISO images fix a Calamares installer issue that prevented it from setting up a new system correctly on an encrypted disk. There is no need to reinstall Sparky if you have it already installed; simply keep Sparky up to date. New ISO images can be downloaded from the Download SparkyLinux stable page.
The telecommunications industry is undergoing its most significant transformation in decades. The move from vertically integrated, proprietary systems to disaggregated, cloud-native infrastructure has unlocked enormous potential for agility and innovation. Yet, for many operators, the challenge has been how to realize that potential while meeting the stringent performance, security, and interoperability requirements that telecom networks demand.
In Europe, this challenge is being addressed head-on by the Sylva project, an open source initiative driven by the continent’s largest telecom operators and network equipment vendors, including Nokia and Ericsson. Canonical joined the Sylva project in 2023. Sylva’s mission is to create a telco-friendly, cloud-native infrastructure stack that not only meets the technical requirements of next-generation telecom workloads, but also aligns with Europe’s priorities for technology sovereignty, security, and regulatory compliance.
We have been working on supporting Sylva by integrating Canonical Kubernetes into the Sylva reference architecture. At the end of the first quarter of 2025, we introduced a Cluster API bootstrap provider for Canonical Kubernetes, and integrated it as a technology preview in Sylva 1.4. The next release, Sylva 1.5, will be the first Sylva version to support Kubernetes 1.32 including Canonical Kubernetes 1.32 LTS, launched in February 2025, with full commercial support. This represents a significant alignment between Sylva’s telco-driven framework and Canonical’s long-term supported Kubernetes distribution.
Why Sylva exists: addressing telco fragmentation
The Sylva whitepaper describes a reality that many in the industry know all too well: fragmentation at the cloud layer. Operators often deploy proprietary container as a service (CaaS) platforms, tailored for specific network functions, which leads to multiple “islands” of infrastructure. This creates complexity for both operators and network function vendors. Operators must manage and skill teams for different stacks, while vendors must certify their software against multiple platforms.
The result is higher costs, slower rollouts, and difficulty in achieving the agility that cloud-native technologies promise.
Sylva’s approach is to converge the cloud layer around open source, interoperable, and production-grade components, with Kubernetes at the core. The project’s Integration and Validation programme ensures that network functions tested against the Sylva framework will run across any compliant implementation, enabling a true “write once, deploy anywhere” model for telecom workloads.
Canonical Kubernetes: a natural fit for Sylva’s mission
Canonical Kubernetes already delivers on many of the core requirements that Sylva has defined. It supports multi-cluster deployments on bare metal, enabling automation across hundreds or even thousands of distributed edge sites. Canonical Kubernetes aligns with GitOps principles and Infrastructure-as-Code workflows, allowing operators to manage fleets of clusters declaratively. It also provides deep support for performance-optimized networking and enhanced platform awareness (EPA), including DPDK, SR-IOV, GPU acceleration, and PTP synchronisation, all essential for 5G RAN, O-RAN, and other latency-sensitive workloads.
Perhaps most significantly, Canonical offers 12 years of long-term support (LTS) for Kubernetes. This is unprecedented in the Kubernetes ecosystem, where typical support timelines are measured in months. For operators managing critical infrastructure, the ability to standardize on a Kubernetes release for more than a decade is invaluable. They receive guaranteed security updates and fixes throughout this period. This long-term stability reduces operational risk, and helps ensure compliance with evolving standards and regulations.
Validated for telco workloads
Sylva’s Validation Workgroup plays a central role in ensuring that infrastructure components can support the wide range of workloads telcos require. Canonical is working directly with this workgroup to validate Canonical Kubernetes for all major telecom network functions, from the 5G core to RAN to far-edge services.
This validation covers not only compatibility but also performance benchmarks, security hardening, and resource efficiency. In doing so, it ensures that when an operator deploys a network function on Canonical Kubernetes within a Sylva-compliant environment, they can have confidence it will perform to specification, whether in a central data centre or at a remote edge site serving a rural community.
A step toward a fully integrated open source telco cloud
The Canonical Cluster API bootstrap provider introduces Canonical Kubernetes as a core component of the Sylva stack. Together with Canonical OpenStack, MAAS, and other infrastructure technologies, this creates the foundation for a unified, open source telco cloud.
Such an environment can host both virtualised network functions (VNFs) and cloud-native network functions (CNFs) on the same automated platform. Operators are able to manage workloads consistently across private data centres, central offices, and distributed edge sites, simplifying operations and reducing the cost of integrating diverse technologies.
European technology sovereignty and sustainability
Beyond technical alignment, Canonical’s involvement in Sylva speaks to a shared commitment to Europe’s strategic objectives. By basing the infrastructure on open source technology and open APIs, Sylva avoids the lock-in risks associated with hyperscale public cloud providers. It also gives operators and governments full visibility into and control over their network infrastructure – a crucial factor for security, privacy, and regulatory compliance.
Sylva also places a strong emphasis on energy efficiency, an area where Canonical’s automation and optimization capabilities can make a tangible impact. By reducing unnecessary compute resource usage and optimizing workloads across distributed edge sites, operators can cut both operational costs and carbon footprint.
Sylva 1.5 and Canonical Kubernetes 1.32 LTS
As Sylva 1.5 is the first release to support Kubernetes 1.32, it unlocks the use of Canonical’s latest long-term supported Kubernetes version, announced in early 2025. Now operators can deploy Canonical Kubernetes 1.32 LTS directly within a Sylva-compliant environment. This combination provides operators with a validated, production-grade platform that brings together Sylva’s telco requirements, Canonical’s automation and lifecycle management, and a 12-year security and maintenance guarantee.
As more network functions are validated on Canonical Kubernetes within the Sylva framework, operators will have a growing catalog of ready-to-deploy workloads, from mobile core to edge AI applications, all interoperable by design.
Conclusion: building the future of telco clouds, together
Sylva represents a new model for how the telecom industry can build its next-generation infrastructure: collaboratively, openly, and with a focus on interoperability and long-term sustainability. Canonical’s contribution of Kubernetes, and its commitment to long-term support, validation, and integration, strengthens this model and accelerates its adoption.
The journey is just beginning. As Sylva evolves, so too will Canonical’s involvement, with potential to integrate more of its infrastructure portfolio and to work closely with operators and vendors in shaping the platform’s future. Together, we can deliver a telco cloud that is not only technically advanced, but also open, secure, and built to last.
Utrecht will be the central meeting point for the cybersecurity community on September 10 – 11, 2025. OPENVAS B.V. will make its first appearance at Cybersec Netherlands, marking an important milestone in expanding our local presence after establishing the Benelux office and fostering direct connections with customers and partners. The area is one of Europe’s […]
Here at Canonical we are excited to announce that we have shipped the first release of our solution for enterprise-ready data lakehouses, built on the combination of Apache Spark and Apache Kyuubi. Using our Charmed Apache Kyuubi in integration with Spark, you can deliver a robust, production-level, and open source data lakehouse. Our Apache Kyuubi charm integrates tightly as part of the Charmed Apache Spark bundle, providing a single and simpler-to-use SQL interface to big data analytics enthusiasts.
Data lakehouse: an architecture overview
The lakehouse architecture for data processing and analytics is an enterprise data management paradigm shift. Historically, organizations have been forced to make a trade-off between the raw, scalable storage of data lakes and the fast-performing queryability of structured data warehouses. The lakehouse approach is able to bridge the gap, enabling enterprises to store large quantities of structured and unstructured data in a single platform, perform data streaming, batch processing, and rapid analytics all bundled up in a wrapper of transactional integrity and governance. Canonical’s approach to data lakehousing relies on the integration of Apache Spark and Apache Kyuubi, creating a platform where batch and streaming data can coexist, be processed at scale, and be made available for advanced analytics and AI/ML in an instant.
At the heart of this lakehouse blueprint is Apache Spark, the industry’s standard distributed data processing engine. Spark’s in-memory, fault-tolerant architecture allows a user to run high-throughput ETL, data transformation, and iterative machine learning workloads. Canonical’s approach leverages Spark OCI images with Kubernetes as a cluster manager, targeting a modernized approach to standard Spark jobs, optimizing for cost and performance. The integration supports many data sources like any S3 compliant storage and Azure Blob storage for data ingestion, as well as other databases as metastores for processing.
One of the greatest challenges in the deployment of enterprise Spark has always been to provide secure, multi-user, and easy-to-use SQL access to business users, analysts, and data scientists. That is where Apache Kyuubi shines. Kyuubi is a high-throughput, multi-tenant SQL gateway for Spark that provides a single JDBC and ODBC endpoint well-suited to integrate with data explorers like Tableau, Power BI, and Apache Superset. Unlike Spark’s own Thrift Server, Kyuubi provides true session isolation so that each application or user runs its own secure Spark context. This not only provides an additional layer of security but also enables fine-grained resource allocation, workload prioritization, and strict auditing which are critical capabilities for compliance and governance within regulated industries.
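As a simple illustration of what that single endpoint looks like in practice, a SQL client such as beeline can connect over the standard HiveServer2 JDBC protocol; the hostname below is a placeholder and the port assumes Kyuubi's default frontend port (10009):

# Connect to the Kyuubi SQL gateway over JDBC (hostname is a placeholder)
beeline -u 'jdbc:hive2://kyuubi.example.com:10009/default' -n analyst
# Each user session is backed by its own isolated Spark context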
A charming lakehouse, fit for an enterprise
Canonical’s Spark and Kyuubi lakehouse stack is built for speed and reliability. In fact the deployment is automated end-to-end using Canonical’s charmed operators, which oversee the lifecycle of Spark, Kyuubi, and supporting components. This includes automated cluster provisioning, rolling upgrades, fault-tolerance, security patching, and cloud-native elastic scaling across Kubernetes environments.
Security is built into every layer of the bundle. The release of the Charmed Apache Spark/Kyuubi bundle includes end-to-end encryption, native integration with the Canonical Observability Stack, and security hardening with improved documentation. In addition, we have patched several critical and high-severity CVEs for this launch, enhancing the product’s overall security posture. The bundle now includes backup and restore for Kyuubi, improving reliability and business continuity, and adds in-place upgrades to minimize downtime and complexity. High-availability support allows servers running Kyuubi to be scaled reliably for mission-critical workloads.
The spark-kyuubi bundle is platform agnostic, supporting hybrid and multi-cloud, as well as on-premises deployments. This is done with the goal of avoiding vendor lock-in, empowering organizations to optimize cost, performance, and compliance on the infrastructure of their choice. Whether greenfielding a new analytics platform or refactoring a legacy Hadoop deployment, Canonical’s solution provides an easy way forward with expert support every step of the way.
Alongside new features and security patches, the release brings improved usability and documentation. The deployment process is fully explained, and the solution is made available via the standard Canonical channels, so we’d encourage you to go look at the documentation and the release notes and ultimately to give it a try.
We have also recently delivered a webinar “Open source data lakehouse architecture with Spark and Kyuubi – an engineering deep dive” that you can follow for a guided deployment experience. It all comes down to a more secure and innovative big data analytics stack that enterprises can deploy on-premises or in the cloud. With the new launch, organizations can move forward with confidence that they are benefiting from the latest developments in open source big data analysis.
In summary, Canonical’s Kyuubi and Spark-based data lakehouse enables organizations to unify data architecture, accelerate analytics, and future-proof data strategy. By combining open source innovation with enterprise-grade support, Canonical empowers businesses to unlock the true potential of their data – reliably, efficiently, and at scale. We invite data engineers, architects, and IT enthusiasts to test the solution and find out more about how Canonical can help you build the next generation of data-driven applications and insights.
forky is open!
As a result I’m starting to think about the upcoming Python
3.14. At some point we’ll doubtless do
a full test rebuild, but in advance of that I concluded that one of the most
useful things I could do would be to work on our very long list of packages
with new upstream
versions.
Of course there’s no real chance of this ever becoming empty since upstream
maintainers aren’t going to stop work for that long, but there are a lot of
packages there where we’re quite a long way out of date, and many of those
include fixes that we’ll need for 3.14, either directly or by fixing
interactions with new versions of other packages that in turn will need to
be fixed. We can backport changes when we need to, but more often than not
the most efficient way to do things is just to keep up to date.
So, I upgraded these packages to new upstream versions (deep breath):
setuptools upstream will be removing the setup.py install
command on 31
October. While this may not trickle down immediately into Debian, it does
mean that in the near future nearly all Python packages will have to use
pybuild-plugin-pyproject (note that this does not mean that they
necessarily have to use pyproject.toml; this is just a question of how the
packaging runs the build system). We talked about this a bit at DebConf,
and I said that I’d noticed a number of packages where this isn’t
straightforward and promised to write up some notes. I wrote the
Python/PybuildPluginPyproject
wiki page for this; I expect to add more bits and pieces to it as I find them.
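For the straightforward cases, the conversion is usually just a matter of declaring the plugin in a package's debian/control Build-Depends so that pybuild picks it up, along these lines (an illustrative fragment, not taken from any particular package; the trickier cases are what the wiki page covers):

Build-Depends: debhelper-compat (= 13),
               dh-sequence-python3,
               pybuild-plugin-pyproject,
               python3-all,
               python3-setuptools,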
On that note, I converted several packages to pybuild-plugin-pyproject:
Upgrading Pydantic was complicated, and required a rust-pyo3 transition
(which Jelmer Vernooij started and Peter Michael Green has mostly been
driving, thankfully), packaging rust-malloc-size-of (including an upstream
portability fix), and
upgrading several packages to new upstream versions:
Europe’s open source ecosystem is at a turning point. The Linux Foundation’s Open Source as Europe’s Strategic Advantage: Trends, Barriers, and Priorities for the European Open Source Community amid Regulatory and Geopolitical Shifts report shows organizations across the continent are broadly adopting open source software (OSS). But adoption alone doesn’t guarantee resilience, innovation, or security. As OSS becomes central to digital sovereignty and critical infrastructure, the gaps in investment and contribution – and, most crucially, the risks of managing OSS without expert support – are increasingly unsustainable.
In today’s enterprise environments – where uptime, compliance, and rapid response are critical – organizations need support models that complement the strengths of open source communities.
Meeting these demands requires bridging the gap between open collaboration and production-grade resilience. In this blog, we’ll highlight some of the key findings and insights from the Linux Foundation’s 2025 Open Source as Europe’s Strategic Advantage report, sharing Canonical’s perspective on the challenges and opportunities they reveal, as well as how our solutions aim to fulfil such requirements.
The open source support maturity gap
The survey reveals a telling maturity gap. While 64% of European organizations rely on OSS for operating systems and 55% use it for cloud infrastructure, only 34% have formal open source strategies, and just 22% have established Open Source Program Offices (OSPOs) – both of which are essential for scaling contribution and aligning open source work with business goals.
Respondents that view OSS as valuable to the future of their organization
Most significantly, only 62% of C-level executives recognize open source as strategically important, compared to 86% of non-C-level employees. Many organizations are leveraging OSS tactically, but have not yet made the strategic shift needed to invest in its long-term sustainability and security.
A more mature approach to OSS adoption involves not just using open source, but investing in the processes and expertise that sustain it. Our goal at Canonical is to help our customers outline a clear path to mature OSS adoption. For example, BT Group set out to deliver a UK-wide 5G network and sought greater control, flexibility, and community support than proprietary solutions could offer. By partnering with Canonical to deploy an open source, virtualised network infrastructure based on OpenStack, BT Group was able to successfully virtualize network functions, automate operations, and accelerate service innovation. Through Ubuntu Pro + Support, Canonical provided not only the technology, but also expert guidance and training so BT’s teams could fully operate the platform internally. This collaborative approach meant BT Group gained the agility to launch new 5G services rapidly, maintain the highest levels of reliability, and support over 75% of the UK population, all while strengthening their internal open source expertise and overall network resilience.
Ubuntu Pro + Support is Canonical’s all-in-one subscription for enterprise-grade open source support and security. It provides long-term stability, enhanced security coverage, and direct access to deep technical expertise, helping enterprises unlock the full value of open source.
Delivering technical support
One of the biggest barriers to broader OSS adoption and contribution is the lack of enterprise-grade support. According to the survey:
40% cite lack of technical support as a key barrier to using OSS
26% say improved security support would increase their OSS use
53% say they need paid support for mission-critical workloads
54% expect long-term support guarantees from paid vendors
53% prioritize rapid security patching in their support needs
These numbers reflect a core reality: the OSS enterprises depend on is not always backed by the lifecycle guarantees, patch responsiveness, or compliance assurances that production environments demand, especially in regulated sectors.
Our customers are facing these challenges too. For example, a customer reported a device that wouldn’t boot, a critical issue for their production line. Our support engineer reviewed a screenshot of the device in its failed state, immediately spotted a corrupted file system, and provided a solution that restored service on the spot.
Many organizations encounter gaps in long-term security, compliance, and expert support as they scale open source adoption. Experiences like these illustrate how bridging support gaps isn’t just about access to tools or updates, but about solving complex operational challenges as they arise. Recognizing this need, Ubuntu Pro + Support delivers 12 years of security maintenance for Ubuntu LTS releases across thousands of open source packages, along with around-the-clock access to Canonical’s experienced support team. This combination helps organizations reduce downtime, manage compliance more effectively, and address even the most nuanced technical issues, ultimately enhancing operational resilience and confidence.
Focusing on security and compliance
Security is no longer a bolt-on concern. The EU’s Cyber Resilience Act (CRA) and AI Act are raising the bar for transparency and resilience, pushing OSS maintainers, vendors, and adopters toward more rigorous software supply chain hygiene. Yet 62% of respondents said they are unfamiliar with the CRA, underlining a readiness gap that must be closed – quickly. Find out more about the CRA, or how Canonical can help you to meet your CRA compliance requirements.
Security and compliance concerns are among the biggest barriers to open source adoption in enterprise environments, with 37% of organizations citing these as primary challenges. Our customers face these challenges every day. For example, Grundium, a medical device manufacturer, needed to comply with strict security and data protection regulations across global markets. By adopting Ubuntu Pro for Devices, Grundium ensured that their digital microscope scanners receive regular security patches and over-the-air updates, maintaining compliance with major security frameworks and keeping sensitive patient data safe. This enabled them to confidently innovate with open source technologies while meeting the rigorous standards required in the medical industry.
That’s one of the reasons we introduced Ubuntu Pro: to help organizations prepare for new regulations. From vulnerability tracking to FIPS-certified cryptographic modules and CIS hardening, it delivers a proactive security foundation in line with both current and emerging compliance needs. And as regulatory requirements evolve, Ubuntu Pro + Support doesn’t just provide tooling: it provides context, expertise, and a responsive engineering team that has helped customers navigate everything from CRA readiness to multi-cloud compliance and hardware security posture.
Our support team delivers break and bug fixes, incident response, rapid vulnerability remediation, and expert guidance on regulatory requirements, ensuring customers’ open source stack is secure and audit-ready.
Bridging skills gaps
Skills gaps (38%) and operational complexity are major obstacles to successful open source adoption and cloud-native transformation. Many organizations lack in-house expertise to manage, troubleshoot, and optimize complex open source infrastructure.
Our customers are experiencing these operational challenges firsthand. One customer running a large Ceph cluster faced persistent latency issues. Canonical’s support and sustaining engineering teams led a multi-day investigation, provided step-by-step remediation, and ultimately delivered a sustainable fix. The customer renewed their contract, citing the depth and persistence of Canonical’s support as a key reason.
That’s one of the main goals of Ubuntu Pro + Support: to act as a force multiplier for your team, providing direct access to the engineers who build and maintain the software you run.
From cloud to edge, our experts guide you through upgrades, troubleshoot performance issues, and offer best practices, reducing operational risk and accelerating time to value. Whether you’re managing a cloud environment, running mission-critical workloads, or deploying embedded devices, Ubuntu Pro + Support extends your team’s capabilities in both reactive and proactive ways with dedicated support resources, ensuring environments remain compliant, secure, and efficient at scale.
Addressing operational complexity often means supplementing internal skills with external expertise, ensuring issues are not just fixed, but understood and prevented in the future. Ubuntu Pro + Support transforms technical assistance into a strategic advantage, bolstering internal capacity and building operational confidence in open source.
Enterprise-grade open source support for mission-critical workloads
94% of organizations see open source foundation support as important or extremely important, and 53% say paid support is essential for mission-critical workloads. The business value of OSS is high, but enterprises require lifecycle guarantees, rapid patching, and expert escalation paths to deploy with confidence.
Environments in which respondents consider paid support for OSS essential
Our customers rely on this level of support for their most critical workloads. After a major OpenStack release introduced performance regressions for a customer, Canonical’s sustaining engineering team worked for months to identify the root cause, delivered an upstream patch, and provided customers with immediate workarounds. This not only restored stability but improved the open source ecosystem for all users.
Such outcomes demonstrate the value of having escalation paths and sustained engineering involvement, moving beyond incident response to ecosystem improvement. Our 24/7/365 team doesn’t just resolve incidents: they contribute upstream fixes, influence product roadmaps, and ensure your environment benefits from the latest open source advancements.
Why open source support matters now
The 2025 survey makes one thing clear: Europe is not just consuming open source – it’s starting to see it as a lever for sovereignty, innovation, and economic competitiveness. But open source won’t scale strategically without investment in its maintainers, contributions, and security infrastructure.
Open source support isn’t just about having someone to call when something breaks. It’s about ensuring continuity, compliance, and confidence. Ubuntu Pro + Support offers precisely that: a way to embrace OSS with the rigour and reliability enterprises require.
In practice, this means:
Minimised downtime through guaranteed SLAs and 24/7 support – even when facing global-scale outages or device boot failures.
Expert guidance from engineers who’ve resolved complex issues, like Ceph cluster latency, offline snapd crashes, and API failures.
Contextualised, real-world experience that enables rapid problem diagnosis and sustainable fixes – often in a single support exchange.
Proactive, preventative case handling around upgrade planning, compliance assessments, and network changes.
Scalable, high-touch support for device fleets and infrastructure at scale – whether cloud-native or bare-metal.
As Europe looks to build alternatives to monopolistic technologies and chart its own digital future, sustained open source investment – with support at its core – is not optional. It’s foundational.
Ubuntu Pro + Support turns that foundation into a platform for operational excellence, equipping enterprises to adopt, secure, and scale open source with the confidence only production-grade backing can provide.
Ready to unlock the full value of open source with enterprise-grade support? Ubuntu Pro + Support is your bridge to resilient, future-proof infrastructure. Get in touch.
How ISO/SAE 21434 helps you get ready for the Cyber Resilience Act
If you work in automotive, you’ve probably already heard of the CRA – the EU’s Cyber Resilience Act. It’s one of the most ambitious pieces of cybersecurity regulation in years. And while it wasn’t written specifically for cars, it’s going to impact a huge part of how software gets built, updated, and maintained across the automotive stack.
So here’s the question: how do you prepare for something like the CRA? And more importantly, how do you prove that your software is secure across its lifecycle, from design to development, to maintenance? That’s where ISO/SAE 21434 comes in.
Two regulations, one goal
Let’s be clear: ISO/SAE 21434 and CRA aren’t the same thing. The first is an automotive-specific cybersecurity standard. ISO/SAE 21434 specifies the cybersecurity requirements across the entire vehicle lifecycle: from concept and design to decommissioning. Whether it’s for risk management or threat analysis, it’s used by OEMs and suppliers to demonstrate that cybersecurity is always part of the process. The second is a broad EU regulation that applies to most software and hardware products on the market.
But they’re aiming at the same target. And ISO/SAE 21434 practices overlap with many of the best practices demanded by the CRA.
Both demand the same kind of discipline: secure coding practices, clear vulnerability handling, a structured risk assessment, and actual proof (via documentation) that you’re doing what the regulations demand. Not plans or intentions. Evidence.
So if you’re already ISO/SAE 21434-compliant, you’re not starting from scratch when CRA enforcement begins. You’ve got documented processes. You’ve got traceability. You’ve got a culture of treating security as part of the product.
Why it matters for open source
One of the big challenges with CRA is showing due diligence when you’re using open source software. If the upstream project isn’t CRA-ready, then you’re on the hook for filling the gap. That means more audits, more documentation, and more internal overhead.
But thanks to Canonical’s ISO/SAE 21434 certification, that load gets lighter. We’re already following the right processes, including handling CVEs, tracking patches, and maintaining traceability for the software we build and deliver. So when you use Ubuntu or snaps in an automotive system, you’re not inheriting unknowns. You’re building on a stack that’s already aligned with security expectations.
It also makes the compliance story easier to tell. When regulators or procurement teams ask, “how do you know this component is safe?”, you’ve got a clear answer with documentation to back it up.
What this looks like in practice
Say you’re building an SDV platform and using Ubuntu as the base. At some point, someone’s going to ask: is this secure? How fast do you get patches? What happens if there’s a critical CVE? Who’s responsible for monitoring and fixing it?
If your OS vendor is already ISO/SAE 21434-certified, you don’t have to scramble for answers. The processes are already documented. The responsibilities are assigned. And you’ve got an audit trail that shows your supply chain is under control.
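As one small, concrete illustration (not taken from the original post), part of the patching question can already be answered from the command line with the Ubuntu Pro client; the CVE number below is only a placeholder:
pro security-status            # summary of the machine's patch coverage
sudo pro fix CVE-2024-0001     # check whether a given CVE affects this system and apply a fix if one is available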
Not just for cars: how ISO 21434 helps beyond automotive
You are affected by the CRA even if your business doesn’t revolve around building cars. Construction, agriculture, and other industries using software in vehicles will still need to comply, even if ISO/SAE 21434 isn’t applicable to them directly. That’s why at Canonical, we treat the CRA as a baseline for all products we build and support, regardless of the sector.
Canonical already provides tooling for stringent security standards like FIPS and NIS2, and we are committed to full CRA alignment as it evolves. So whether you’re building software for a car, a tractor, or a crane, we’ve got you covered.
When will the CRA come into play?
In March 2024, the Cyber Resilience Act was approved by the European Parliament. The real deadline to focus on is December 2027: that’s when the enforcement of compliance requirements kicks in. Any product sold in the EU after that date, including automotive software-defined features, has to meet CRA obligations.
That might sound far off, but we’re already halfway through the year. 2025 and 2026 are your window to prepare. And not just in theory: you’ll need working processes, proper documentation, clear traceability, and secure development practices in place and running before then. Put simply, 2027 isn’t when you can start planning compliance – it’s when you need to actually be compliant.
If you’re already following ISO/SAE 21434, you’re not starting from zero. The structure’s there. To meet the requirements of the CRA, you should be mapping that structure you already have to the new rules and making sure it holds up under scrutiny.
What about RED?
The CRA isn’t the only regulation you need to think about. The Radio Equipment Directive (RED) adds cybersecurity requirements to anything that communicates over radio. In modern vehicles, that includes a growing number of components, from Wi-Fi and Bluetooth to 4G and 5G connectivity.
The good news is that CRA and RED don’t pull you in different directions. They overlap in both scope and approach. If you’re already applying ISO/SAE 21434 and aligning with CRA expectations, you’re on solid ground for RED compliance too. Canonical’s processes around patching, CVE management, and documentation support both.
Looking ahead
The CRA won’t replace ISO/SAE 21434, and ISO/SAE 21434 won’t give you a free pass under the CRA. But they reinforce each other. The more you invest in ISO/SAE 21434 today, the less painful CRA enforcement will be tomorrow.
Canonical’s position is simple: open source is ready for regulated industries if it’s built right. Our ISO/SAE 21434 certification is one piece of that commitment. We are already ensuring that our processes and products are designed with the CRA in mind, as it is merely the next step in raising the bar. We think security should be taken into consideration from the beginning, rather than being added at the last minute.
If you’re building automotive products and need to show your stack is secure, without slowing everything down, we should talk. At Canonical, we’re experienced at helping companies harness open source in the automotive space.
A note from the editor: This post was written by Zhanglei Mao.
This blog explores the successful integration of Huawei OceanStor hybrid flash storage systems with Canonical Kubernetes, demonstrating how enterprises can unlock resilient, enterprise-grade storage in cloud-native environments. The integration benefits companies across different industries seeking reliable, scalable containerized infrastructure. This collaboration enables dynamic volume provisioning, snapshots, and seamless storage management, helping businesses accelerate their digital transformation with a flexible and future-proof Kubernetes platform.
Project overview
Project background
Huawei OceanStor hybrid flash storage systems are designed to carry the mission-critical and general-purpose services of enterprises, financial institutions, and data centers (DCs). They meet the requirements of all NAS scenarios and suit small- and medium-sized enterprises as well as hybrid applications without high performance requirements, such as database applications on container platforms. They help finance, manufacturing, telecommunications carriers, and other industries evolve smoothly to all-flash storage, build virtualized, cloud-native, and intelligent IT infrastructure, and effectively support the transformation to Industry 4.0.
Canonical Kubernetes is a new Kubernetes distribution that builds on upstream Kubernetes, offering hassle-free maintenance, a single-line installation process, enhanced security by default, and up to 12 years of support. Ubuntu is the reference operating system for Kubernetes deployments, making it an easy base on which to build Kubernetes clusters.
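To give a feel for that single-line installation, a test cluster like the one used in this verification can be bootstrapped from the k8s snap roughly as follows (the channel name is an assumption – check the Canonical Kubernetes documentation for the track matching your release):
sudo snap install k8s --classic --channel=1.33-classic/stable
sudo k8s bootstrap            # initialize the first (control plane) node
sudo k8s status --wait-ready  # wait for the cluster to report ready
sudo k8s kubectl get nodes    # built-in kubectl for a quick sanity check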
Testing scope
Canonical Kubernetes cluster setup
Huawei CSI (eSDK) integration for OceanStor
Using persistent volumes for containers
Advanced features: volume expansion, clone, and snapshot
Testing environment
Hardware resources and software components used for verification are presented in the tables below.
Hardware resources
Component type | Description | Count
Storage | OceanStor 5120 | 1
Virtual Machines | VM (x86_64, 8 vCPU, 16 GB memory, 32 GB disk) | 3
Software components
Component type | Description | Version
Operating System | Base OS on all nodes of the cluster | Ubuntu 24.04.2 (GA kernel v6.8.0)
Canonical Kubernetes | Container orchestration platform | v1.32 and v1.33
Huawei eSDK (huawei-csi) | CSI driver and secret tools | v4.7.0
Testing network architecture and service mapping
The architecture for integration and testing is illustrated in the following diagram.
Verifying the status of Huawei CSI and Canonical Kubernetes
The tested cases and results are shown in the table below.
Testing item | Result | Note
4.1.5 Checking the Host Multipathing Configuration | Pass | Multipath is installed by default
4.1.6 Checking the Status of Host-Dependent Software | Pass | iSCSI client is installed by default; the NFS client needs to be installed
4.2.2 Manually Installing Huawei CSI | Pass |
5.1 Managing Storage Backends | Pass | SAN (iSCSI) backend create/delete; NAS (NFS) backend create/delete; configuring multiple backends; creating a PVC for a specified backend; creating a PVC for a specified storage pool
6.1.2 Managing Volume Provisioning (StorageClass) | Pass | StorageClass create/delete; NFS uses the fs volume type, iSCSI uses lun; configuring QoS
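For reference, the host-dependency checks in items 4.1.5 and 4.1.6 map onto stock Ubuntu 24.04 packages; the commands below are a sketch based on standard Ubuntu package names rather than on the Huawei documentation:
systemctl status multipathd iscsid --no-pager   # multipath and the iSCSI initiator ship with Ubuntu Server
sudo apt install -y nfs-common                  # NFS client, required for NAS (NFS) backends
sudo apt install -y multipath-tools open-iscsi  # only needed if either service above is missing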
Huawei OceanStor works with Canonical Kubernetes via the Kubernetes CSI plugin developed by Huawei, over the iSCSI or NFS protocol. It provides persistent data on enterprise-grade, resilient storage and allows users to dynamically provision volumes, take snapshots, and attach volumes to the containers running in the Kubernetes cluster. This solution enables organizations to take their digital transformation further with a flexible and reliable PaaS or CaaS platform built on Canonical Kubernetes.
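As a minimal sketch of what this dynamic provisioning looks like from the cluster side, the manifest below creates a StorageClass backed by the Huawei CSI driver and a PVC that uses it; the provisioner name and parameters follow the Huawei eSDK examples, but the backend and pool names are placeholders that must match what you registered with the driver:
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: huawei-iscsi
provisioner: csi.huawei.com
parameters:
  backend: oceanstor-iscsi   # placeholder: backend name registered with huawei-csi
  pool: StoragePool001       # placeholder: storage pool on the array
  volumeType: lun            # lun for SAN (iSCSI), fs for NAS (NFS)
  allocType: thin
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: huawei-iscsi
  resources:
    requests:
      storage: 10Gi
EOF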
Although the tested model is a hybrid flash array, the guidance and integration apply to all models in the Huawei OceanStor Dorado V700 all-flash and OceanStor Dorado V700 hybrid flash storage portfolio.
For the second time, and in less than one month, Canonical is coming to East Africa! Three weeks ago, we had the first-ever UbuCon Africa, which was co-located with DjangoCon Africa 2025, and on September 6, Canonical will be coming to Kenya to support OpenInfra Days Kenya 2025. This event is set to be the first OpenInfra Days event in Africa, and the first in-person event ever hosted by the OpenInfra User Group Kenya.
This will be a one-day event organized by the OpenInfra User Group Kenya, with support from its parent foundation, the OpenInfra Foundation. Angani Ltd. will be our gracious host, at their offices located at West Point Building, 1st Floor, Mpaka Road, Nairobi.
OpenInfra Days
OpenInfra Days are annual one- or two-day events typically organized and hosted by local OpenInfra user groups and communities in conjunction with the OpenInfra Foundation and other companies within the open source infrastructure ecosystem. These events bring together open source technologists and enthusiasts, developers, infrastructure professionals, and end-user companies, making them an excellent place to meet leaders, learn from users and contributors, hear their stories, network with individuals and companies in the ecosystem, and find your open source community.
OpenInfra Days include talks, keynotes, workshops, and breakout sessions about open source software infrastructure and projects like OpenStack, Ceph, Kubernetes, and more.
OpenInfra Days Kenya 2025
The organizers are already hard at work on preparations. The schedule is out, with exciting talks from excellent speakers – including Canonical’s very own Tytus Kurek, the Product Manager for Canonical OpenStack and a board member of the OpenInfra Foundation. Denson Ngumo, Angani’s CTO, and Tatiana Ovchinnikova, an OpenStack Horizon developer from Red Hat, will also be speaking.
To wrap it all up, we will be joining other user groups and communities all around the world in celebrating 15 years of OpenStack taking the cloud by storm.
OpenInfra, sponsored by Canonical
Canonical will be the main sponsor for the day, and will be represented in full force by our Kenyan colleagues and Tytus. Canonical strives to amplify the global impact of open source by making it secure, reliable, and available to all at scale. It is the company behind Ubuntu, an open source Linux operating system. Beyond Ubuntu, Canonical supports and develops various open source projects across the entire spectrum of technology, including infrastructure, where products such as OpenStack, Ceph, and Kubernetes shine.
Meet our local team at our OpenInfra Days booth to chat and learn more about our open source portfolio. It’ll be easy to spot them: find the cool people in orange polos and/or lanyards, in an orange booth, with funky, mostly orange merch!
Canonical is hiring
Canonical is strongly committed to the idea that software talent is to be found in all corners of the world. We demonstrate that commitment not only by attending events like OpenInfra Days Kenya, but also by hiring people from everywhere and anywhere to work remotely on open source software. We have 40 Canonical employees from Africa, 6 of whom are from Kenya, and we’d like to increase that number.
Working remotely can get complicated if you have never done it before – find our Kenyan colleagues at our booth to hear about their experiences and how they found success in the global workplace.
Starting August 2025, businesses and administrative bodies must implement initial provisions of the EU AI Act – a new era of responsibility in dealing with artificial intelligence begins. Since the AI Act not only demands technical adjustments, but a fundamental rethinking, AI will prospectively be evaluated in a more nuanced way, taking risk and use […]
We have some big news for you all in August — the count of changes that VyOS maintainers and community members made this month is small, but their impact makes up for the low number. They include VRF support for DHCP and DHCPv6 servers, steady progress in the legacy configuration backend replacement, multiple bug fixes, and an experimental privilege separation feature that allows limiting users to specific sets of operational commands.
Setting up a proof of concept (PoC) environment is often one of the first steps in any IT project. It helps organizations to get to grips with the technology, validate the idea, and identify any potential risks. However, setting up an OpenStack PoC has always been a challenge due to the overall complexity of the project. Fortunately, both Sunbeam and Canonical OpenStack (based on Sunbeam) have now finally solved this long-standing challenge.
In this blog, I’ll cover Canonical’s proven path to setting up an OpenStack PoC. I will demonstrate how using the right set of tools is critical to the overall success of your project. By taking a closer look at the leading platforms, we’ll see how to get your OpenStack PoC up and running with ease.
But there is someone out there who knows that a better solution exists: someone who has been monitoring various trends in the IT industry for years and who perfectly understands that the future lies in cloud computing and open source. Someone who knows that, out of all the various cloud platforms, OpenStack stands out for its maturity, stability, and comprehensiveness.
That someone is you, isn’t it? Otherwise you wouldn’t be reading this blog, correct?
So you talk to your executive team one more time, as they are struggling with all those challenges. You pitch OpenStack, and suddenly they start listening to you, and your idea gets approved.
“All right!” they say. “Bring us a working PoC and then we’ll talk.”
You get a green light, a budget, and a timeline – everything you need for a successful project completion. You finally have a unique opportunity to fully transform the IT infrastructure in your organization.
But now it’s your turn. You have only one shot. You had better get your OpenStack PoC right.
Use the right tools
This is where our journey begins. By now, you’ve likely already done some research on your own: you’ve read about OpenStack, watched some YouTube videos, or even tried installing it on your own. And you’ve probably already seen why OpenStack is hard to get right the first time.
Let’s get straight to the point: OpenStack is complex. Many organizations that have tried it on their own in the past have failed. At the same time, projects that claimed to tame OpenStack’s complexity were not able to fulfill their promises. This is exactly why it’s critical to do the right research and use the right set of tools.
At Canonical, we understand this challenge very well. Over the years, we’ve helped hundreds of customers build large production OpenStack clouds. We have seen an endless evolution of OpenStack deployment and operation tools, including our own. We have also seen thousands of organizations like yours struggling to take their first steps. This is exactly why we decided to launch the Sunbeam project.
Sunbeam aims to drastically lower the barrier to entry for OpenStack. It provides a smooth installation experience, helping users with no previous experience get it up and running in less than an hour. Unlike non-production solutions such as DevStack or Packstack, Sunbeam’s architecture remains the same regardless of scale. All of that makes Sunbeam an ideal candidate for your OpenStack PoC project, because you get exactly the same experience as you would when deploying it in production.
Don’t believe it? Let’s try it together in four simple steps!
OpenStack PoC with Canonical OpenStack (based on Sunbeam)
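As a minimal sketch, the four steps on a fresh single node look like this (the snap channel shown is taken from the current quickstart documentation and is an assumption – pick the channel matching the release you want to evaluate):
sudo snap install openstack --channel 2024.1/stable
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sunbeam cluster bootstrap --accept-defaults
sunbeam configure --accept-defaults --openrc demo-openrc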
And that’s it! Your PoC environment is now up and running.
You can launch your first VM on top of the cloud by running the following command:
sunbeam launch ubuntu --name test
Sample OpenStack dashboard
Get help, if needed
So that was easy, but we both know that reality is always slightly different. This is because your executive team might have certain expectations of your OpenStack PoC. Common requirements include multi-node, highly available clusters or more complex network setups. This is hardly surprising: at the end of the day, the PoC has to resemble the behavior of your existing infrastructure.
Luckily, the Canonical OpenStack documentation provides a comprehensive overview of all those more advanced scenarios, so that you can build your required features and functionality yourself. If you’re missing some info, or need help figuring something out (like with all PoCs), the official community forum is a fantastic resource to consult.
But if you’re looking for some extra help to get your OpenStack PoC not just right but absolutely perfect, then you’re even luckier, because you can get an OpenStack PoC delivered exactly to spec by cloud experts.
We have recently launched an OpenStack Pilot package. This package provides Canonical OpenStack (based on Sunbeam) delivery on three to six physical machines. By relying on Canonical’s expertise and putting your trust in the hands of a partner who has been on the OpenStack market since Day One, you can rest assured that your project will be under full control from the very beginning.
Since a working PoC is essential for the overall success of your OpenStack deployment project, I encourage you to take a deep breath now and carefully consider next steps on your journey. Depending on where you stand with your project, information found in this blog might have an impact on both your schedule and your budget.
While evaluating this impact, do also consider visiting the following resources:
We have published Qubes Canary 044. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.
Qubes Canary 044
---===[ Qubes Canary 044 ]===---
Statements
-----------
The Qubes security team members who have digitally signed this file [1]
state the following:
1. The date of issue of this canary is September 02, 2025.
2. There have been 109 Qubes security bulletins published so far.
3. The Qubes Master Signing Key fingerprint is:
427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors).
5. We plan to publish the next of these canary statements in the first
fourteen days of December 2025. Special note should be taken if no new
canary is published by that time or if the list of statements changes
without plausible explanation.
Special announcements
----------------------
None.
Disclaimers and notes
----------------------
We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.
This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.
The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.
This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.
Proof of freshness
-------------------
Tue, 02 Sep 2025 10:14:02 +0000
Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Rugby: "What Is Going On in My Head?!" Ex Professional Alix Popham Opens Up about His Damaged Brain
Photographing the Suffering in Gaza: A Palestinian Photojournalist Finds Himself at the Center of Controversy
Secret Documents from the 1990s: How Close Was Russia to NATO Membership?
"Have We Done It?": How Is Germany Doing 10 Years after the Refugee Crisis?
Merz Under Fire: German Conservatives Roiled by Chancellor's Decision on Arms for Israel
Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Putin Finds a Growing Embrace on the Global Stage
Xi’s Parade to Showcase China’s Military Might and Circle of Autocrats
Ukraine Pursues a Weapons Buildup More Potent Than Any Security Guarantee
How Jair Bolsonaro Tried, and Failed, to Stage a Coup in Brazil
Landslide Kills More Than 1,000 People in Sudan, Rebel Group Says
Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
Russia's Putin hails 'unprecedented' ties with China at talks in Beijing
Beijing tightens control ahead of Xi's big moment on world stage
Aerial search for survivors after Afghan quake kills 800 people
Hundreds killed in Sudan landslide, UN says
Body seen in secret mortuary could solve 50-year mystery of vanished religious leader
Source: Blockchain.info
00000000000000000001850c13d0c4fbfa4c41b6cfaa598c032820713209a633
Footnotes
----------
[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]
[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/
--
The Qubes Security Team
https://www.qubes-os.org/security/
The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.
What is a Qubes canary?
A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong.
The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)
Why should I care about canaries?
Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.
What are some signs of an unhealthy canary?
Here is a non-exhaustive list of examples:
Dead canary. In each canary, we state a window of time during which you should expect the next canary to be published. If no canary is published within that window of time and no good explanation is provided for missing the deadline, then the canary has died.
Missing statement(s). Canaries include a set of numbered statements at the top. These statements are generally the same across canaries, except for specific numbers and dates that have changed since the previous canary. If an important statement was present in older canaries but suddenly goes missing from new canaries with no correction or explanation, then this may be an indication that the signers can no longer truthfully make that statement.
Missing signature(s). Qubes canaries are signed by the members of the Qubes security team (see below). If one of them has been signing all canaries but suddenly and permanently stops signing new canaries without any explanation, then this may indicate that this person is under duress or can no longer truthfully sign the statements contained in the canary.
Does every unexpected or unusual occurrence related to a canary indicate something bad?
No, there are many canary-related possibilities that should not worry you. Here is a non-exhaustive list of examples:
Unusual reposts. The only canaries that matter are the ones that are validly signed in the Qubes security pack (qubes-secpack). Reposts of canaries (like the one in this announcement) do not have any authority (except insofar as they reproduce validly-signed text from the qubes-secpack). If the actual canary in the qubes-secpack is healthy, but reposts are late, absent, or modified on the website, mailing lists, forum, or social media platforms, you should not be concerned about the canary.
Last-minute signature(s). If the canary is signed at the last minute but before the deadline, that’s okay. (People get busy and procrastinate sometimes.)
Signatures at different times. If one signature is earlier or later than the other, but both are present within a reasonable period of time, that’s okay. (For example, sometimes one signer is out of town, but we try to plan the deadlines around this.)
Permitted changes. If something about a canary changes without violating any of the statements in prior canaries, that’s okay. (For example, canaries are usually scheduled for the first fourteen days of a given month, but there’s no rule that says they have to be.)
Unusual but planned changes. If something unusual happens, but it was announced in advance, and the appropriate statements are signed, that’s okay (e.g., when Joanna left the security team and Simon joined it).
In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something usual happens with a canary, and we say it was a mistake and correct it (with valid signatures), you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.
What are the PGP signatures that accompany canaries?
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.
Why should I care whether a canary is authentic?
If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced.
Alternatively, an adversary could fabricate a canary in an attempt to deceive the public. Such a canary would not be validly signed, but users who neglect to check the signatures on the fake canary would not be aware of this, so they may mistakenly believe it to be genuine, especially if it closely mimics the language of authentic canaries. Such falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
How do I verify the PGP signatures on a canary?
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27;Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg>fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.
gpg>trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg>q
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$cd qubes-secpack/
$git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$cd QSBs/
$gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$cd ../canaries/
$gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
For this announcement (Qubes Canary 044), the commands are:
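(Assuming the usual qubes-secpack file naming for this canary:)
$cd canaries/
$gpg --verify canary-044-2025.txt.sig.marmarek canary-044-2025.txt
$gpg --verify canary-044-2025.txt.sig.simon canary-044-2025.txt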
You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 044 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
The 8th monthly Sparky project and donate report of 2025: – Linux kernel updated up to 6.16.4, 6.12.44-LTS, 6.6.103-LTS – Sparky 8.0 “Seven Sisters” based on Debian 13 “Trixie” released -> https://sparkylinux.org/sparky-8-0/ – new testing repo of Sparky 9 based on Debian “Forky” is done -> https://sparkylinux.org/sparky-9-code-name-and-repos/ – virtualbox-7.2 moved to sparkylinux repos…
The Incus team is pleased to announce the release of Incus 6.16!
This release brings in a new storage driver, the ability to install Windows VMs without having to rely on a repacked ISO, and support for temporary storage in containers.
The highlights for this release are:
TrueNAS storage driver
USB CD-ROM handling for VMs
tmpfs and tmpfs-overlay disks for containers
Configurable console behavior in the CLI
The full announcement and changelog can be found here. And for those who prefer videos, here’s the release overview video:
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon, and Ko-fi.
Nearly a third of IT leaders say lack of expertise is the biggest obstacle they face in managing their open source environments. It’s a problem that slows progress, increases risk, and makes it tough for organizations to unlock the full value of technologies like OpenStack, Ceph, or LXD.
When every minute counts, especially in critical industries, technical skill alone isn’t enough. While typical 24/7 or weekday support models provide valuable reactive ticket handling and issue resolution, some organizations require a named expert who owns your case end-to-end and works closely with you until full resolution. Canonical’s named support resources deliver exactly that: continuous, proactive guidance from experienced Technical Account Managers and Support Engineers who deeply understand your systems and integrate closely with your teams.
In this blog, we’ll explore how named support can strengthen your team and help you achieve more with your open source infrastructure.
Beyond break-fix: embedded expertise at scale
Open source infrastructure in production is rarely simple. Whether you’re upgrading a distributed Ceph cluster, troubleshooting low-level live migration issues, or scaling OpenStack across multiple datacenters, there’s little room for trial and error – because downtime or misconfiguration can impact millions of users or critical services.
For example, the Unique Identification Authority of India (UIDAI), which manages Aadhaar – the world’s largest biometric ID system, serving over a billion people – relies heavily on Ubuntu and open source infrastructure. There, continuous availability and data integrity are paramount, demanding near-zero tolerance for disruption.
Canonical’s Dedicated Support Engineers worked closely with UIDAI’s teams during key upgrades and scaling efforts, providing hands-on, real-time technical expertise. This embedded partnership allowed them to:
Anticipate and mitigate risks in complex environments spanning thousands of nodes
Provide tailored architectural guidance ensuring high availability and compliance
Quickly resolve critical incidents without service interruptions
Perform seamless upgrades backed by deep knowledge of Ubuntu and related technologies
For UIDAI, traditional reactive support falls short – there’s no margin for error when your infrastructure supports essential national services. Organizations with unique platforms or compliance demands – especially those needing instant response, ongoing collaboration, and a single point of contact – face equally high expectations and risks. Canonical’s embedded experts become a true extension of your team, combining deep technical knowledge with intimate awareness of your unique environment to prevent downtime before it happens.
Far from transactional, this support becomes a true expansion of your team – built on trust, context, and technical depth.
TAM vs DSE: choosing the right level of dedicated Linux support
While both Technical Account Managers (TAMs) and Dedicated Support Engineers (DSEs) provide personalized, expert Linux support, the most important difference lies in the depth of engagement and the nature of support they provide. Understanding these distinctions will help you select the best fit for your organization’s unique needs and operational style.
Technical Account Manager (TAM)
A TAM is your named, strategic advisor and single point of contact during business hours, focused on amplifying your team’s effectiveness through proactive, high-level guidance. Beyond simply managing support tickets, TAMs partner with you to plan major upgrades, anticipate risks, and align your Linux environment with industry best practices, ensuring your infrastructure evolves smoothly and securely.
Strategic planning in action
For instance, when a customer prepared to upgrade their Ceph clusters, their TAM designed a clear, risk-aware plan that highlighted dependencies and potential pitfalls well in advance. This proactive approach helped avoid downtime and ensured the upgrade aligned seamlessly with their long-term business goals.
What you get with a TAM:
Regular strategic reviews, risk management, and compliance checks
Prioritized support for faster case resolution when issues arise
Proactive advice by tracking ongoing changes and participating in planning calls
Access to Canonical’s broader technical experts when deeper help is needed
Periodic onsite visits for strengthened collaboration and richer context understanding
Choose a TAM if you want a trusted advisor who drives long-term success through strategic support, without requiring hands-on, daily technical involvement.
Dedicated Support Engineer (DSE)
A DSE is fully dedicated to your account, working embedded alongside your team every day to provide responsive, hands-on technical support. They do more than offer guidance: DSEs actively roll up their sleeves to tackle complex issues directly with your in-house engineers, ensuring swift problem resolution and sustained operational stability. As Drew Dunn, Virtualisation Engineer at BT Group describes it, Canonical offers “professional help for other professionals, people who don’t leave you behind, but instead take you on the journey and educate.”
For example, when a customer faced a critical issue during a live Ceph migration, their DSE diagnosed the root cause within the context of the customer’s unique environment, developed a custom tool to enable a safe migration, and collaborated directly with the customer’s engineers to resolve the blocker in days, transforming what could have been a prolonged problem into a quick win.
What you get with a DSE:
Immediate and direct involvement in troubleshooting complex production issues by working closely with your logs and configurations
Active participation in executing upgrades and migrations, with live troubleshooting and remediation support
Daily integration with your team to assist with scaling and configuration, helping avoid common pitfalls
Continuous technical mentorship and regular workshops to build your team’s expertise and confidence
Proactive risk identification and prevention, spotting issues before they impact operations
Onsite availability for critical projects and incidents, enabling face-to-face collaboration when it matters most
Choose a DSE if your team needs a trusted, embedded technical expert providing direct leadership, hands-on problem solving, and continuous partnership and education through daily infrastructure operations and challenges.
A spectrum of deep, sustained collaboration that makes a difference
TAMs and DSEs both provide proactive guidance, own escalations, and lead strategic planning, acting as part advisor, part engineer, part communicator, and part advocate for your organisation within Canonical. They are generalists, but they also dive deep into your unique environment, spotting patterns, anticipating risks, and delivering solutions tailored to your infrastructure and priorities.
As one of our TAMs puts it: “Since we work face-to-face a lot, big, long-term projects depend on our work. Therefore, expectations can be high to deliver good results and to stay informed about the customer environments and projects in a much deeper way than our typical support relationships.”
With this level of commitment, customers experience faster problem resolution, smooth and confident upgrades, expert advice on scaling, and thorough performance troubleshooting, all while catching issues early to avoid downtime. In short:
TAMs provide strategic, proactive guidance and planning to help you make confident, long-term decisions.
DSEs embed with your team daily, actively guiding hands-on issue resolution and upgrade execution to maintain stability and momentum.
Real-world impact: how our dedicated support teams make a difference
Migration blockers, resolved live
Running mixed x86 hardware environments is possible but presents unique requirements, even with standard technologies like live migration. One enterprise, for example, wished to migrate virtual machines dynamically and seamlessly between AMD EPYC and Intel Xeon hosts. Subtle differences in CPU instruction set support made live Linux Kernel-based Virtual Machine (KVM) hypervisor migrations unreliable, impacting project deployment. Canonical’s Dedicated Support Engineer (DSE) assigned to that enterprise stepped in, working side-by-side with the enterprise and AMD’s engineering team to define a processor configuration consistent between EPYC and Xeon nodes, validate safe configurations, and design a repeatable migration process. What started as a pressing customer issue grew into a broader solution: the collaboration matured into a jointly authored public guide with AMD, ‘Preparing a cluster of mixed server processors for live KVM migrations’. Today, this approach enables any IT organization running mixed CPU environments to build in flexibility with qualified applications.
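For teams facing a similar mixed-fleet requirement, the general approach can be sketched with standard libvirt tooling; the host and file names below are purely illustrative, and the jointly authored guide remains the reference:
virsh capabilities > epyc-host-caps.xml   # run on an AMD EPYC host
virsh capabilities > xeon-host-caps.xml   # run on an Intel Xeon host
cat epyc-host-caps.xml xeon-host-caps.xml > all-hosts.xml
virsh cpu-baseline all-hosts.xml          # baseline <cpu> definition supported by both hosts
# Use the resulting CPU model (cpu mode "custom") in the guest definitions so that
# live migration only depends on instructions available on both vendors' CPUs.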
This case shows how DSEs don’t just solve local problems: they connect customers, vendors, and Canonical expertise to create solutions that are widely applicable in the open source community.
Scaling safely with confidence
A financial services provider needed to expand their OpenStack and Ceph deployment across three regional datacenters. Instead of duplicating data and adding operational complexity, we worked with their team to design a site-local storage layout that aligned with their compliance and performance goals – without downtime.
Building what’s not yet documented
Another customer was experimenting with NVIDIA BlueField SmartNICs and LXD integration – functionality not officially supported or documented. Working alongside them in a lab setup, we validated drivers, tested deployment patterns, and collaborated with Canonical’s LXD team to improve product support. Their proof of concept now helps define new best practices.
A strategic investment in platform reliability
Not all Ubuntu Pro customers need a TAM or DSE – but for enterprises running critical infrastructure, dedicated resources are a strategic multiplier.
We recommend them for:
Organizations with large or highly customized OpenStack, Ceph, LXD, or Kubernetes environments
Teams seeking proactive, forward-looking support
Companies that want a single, trusted technical contact – not a ticketing queue
Your Canonical TAM or DSE provides tailored guidance based on your platform, people, and priorities. They help translate Canonical’s engineering expertise into operational success – for everything from tuning disk I/O throughput to planning a transition to Juju 3.x or MicroK8s.
Summary: support as a force multiplier for your team
Built around our customers’ unique environments and operational reality, Canonical’s relational support centers on collaboration, trust, and tailored expertise. Your named support engineer becomes an embedded team member who lives and breathes your infrastructure – joining planning calls, tracking changes, and offering proactive advice based on deep knowledge of your infrastructure to help you operate confidently at scale.
Ubuntu Pro is backed by a world-class support organization. With a TAM or DSE, that support becomes personal and proactive, a partner intimately familiar with your platform and business needs who accelerates your Linux support journey.
Ready to expand your team’s expertise with dedicated Linux support? Contact Canonical to learn how a TAM or DSE can help you accelerate, scale, and secure your open source infrastructure.
Nearly a third of IT leaders say lack of expertise is the biggest obstacle they face in managing their open source environments. It’s a problem that slows progress, increases risk, and makes it tough for organizations to unlock the full value of technologies like OpenStack, Ceph, or LXD.
When every minute counts, especially in critical industries, technical skill alone isn’t enough. While typical 24/7 or weekday support models provide valuable reactive ticket handling and issue resolution, some organizations require a named expert who owns your case end-to-end and works closely with you until full resolution. Canonical’s named support resources deliver exactly that: continuous, proactive guidance from experienced Technical Account Managers and Support Engineers who deeply understand your systems and integrate closely with your teams.
In this blog, we’ll explore how named support can strengthen your team and help you achieve more with your open source infrastructure.
Beyond break-fix: embedded expertise at scale
Open source infrastructure in production is rarely simple. Whether you’re upgrading a distributed Ceph cluster, troubleshooting low-level live migration issues, or scaling OpenStack across multiple datacenters, there’s little room for trial and error – because downtime or misconfiguration can impact millions of users or critical services.
For example, the Unique Identification Authority of India (UIDAI), which manages Aadhaar – the world’s largest biometric ID system, serving over a billion people – relies heavily on Ubuntu and open source infrastructure. There, continuous availability and data integrity are paramount, demanding near-zero tolerance for disruption.
Canonical’s Dedicated Support Engineers worked closely with UIDAI’s teams during key upgrades and scaling efforts, providing hands-on, real-time technical expertise. This embedded partnership allowed them to:
Anticipate and mitigate risks in complex environments spanning thousands of nodes
Provide tailored architectural guidance ensuring high availability and compliance
Quickly resolve critical incidents without service interruptions
Perform seamless upgrades backed by deep knowledge of Ubuntu and related technologies
For UIDAI traditional reactive support falls short – there’s no margin for error when your infrastructure supports essential national services. Organizations with unique platforms or compliance demands – especially those needing instant response, ongoing collaboration, and a single point of contact – face equally high expectations and risks. Canonical’s embedded experts become a true extension of your team, combining deep technical knowledge with intimate awareness of your unique environment to prevent downtime before it happens.
Far from transactional, this support becomes a true expansion of your team – built on trust, context, and technical depth.
TAM vs DSE: choosing the right level of dedicated Linux support
While both Technical Account Managers (TAMs) and Dedicated Support Engineers (DSEs) provide personalized, expert Linux support, the most important difference lies in the depth of engagement and the nature of support they provide. Understanding these distinctions will help you select the best fit for your organization’s unique needs and operational style.
Technical Account Manager (TAM)
A TAM is your named, strategic advisor and single point of contact during business hours, focused on amplifying your team’s effectiveness through proactive, high-level guidance. Beyond simply managing support tickets, TAMs partner with you to plan major upgrades, anticipate risks, and align your Linux environment with industry best practices, ensuring your infrastructure evolves smoothly and securely.
Strategic planning in action
For instance, when a customer prepared to upgrade their Ceph clusters, their TAM designed a clear, risk-aware plan that highlighted dependencies and potential pitfalls well in advance. This proactive approach helped avoid downtime and ensured the upgrade aligned seamlessly with their long-term business goals.
What you get with a TAM:
Regular strategic reviews, risk management, and compliance checks
Prioritized support for faster case resolution when issues arise
Proactive advice by tracking ongoing changes and participating in planning calls
Access to Canonical’s broader technical experts when deeper help is needed
Periodic onsite visits for strengthened collaboration and richer context understanding
Choose a TAM if you want a trusted advisor who drives long-term success through strategic support, without requiring hands-on, daily technical involvement.
Dedicated Support Engineer (DSE)
A DSE is fully dedicated to your account, working embedded alongside your team every day to provide responsive, hands-on technical support. They do more than offer guidance: DSEs actively roll up their sleeves to tackle complex issues directly with your in-house engineers, ensuring swift problem resolution and sustained operational stability. As Drew Dunn, Virtualisation Engineer at BT Group describes it, Canonical offers “professional help for other professionals, people who don’t leave you behind, but instead take you on the journey and educate.”
For example, when a customer faced a critical issue during a live Ceph migration, their DSE diagnosed the root cause within the context of the customer’s unique environment, developed a custom tool to enable a safe migration, and collaborated directly with the customer’s engineers to resolve the blocker in days, transforming what could have been a prolonged problem into a quick win.
What you get with a DSE:
Immediate and direct involvement in troubleshooting complex production issues by working closely with your logs and configurations
Active participation in executing upgrades and migrations, with live troubleshooting and remediation support
Daily integration with your team to assist with scaling and configuration, helping avoid common pitfalls
Continuous technical mentorship and regular workshops to build your team’s expertise and confidence
Proactive risk identification and prevention, spotting issues before they impact operations
Onsite availability for critical projects and incidents, enabling face-to-face collaboration when it matters most
Choose a DSE if your team needs a trusted, embedded technical expert providing direct leadership, hands-on problem solving, and continuous partnership and education through daily infrastructure operations and challenges.
A spectrum of deep, sustained collaboration that makes a difference
TAMs and DSEs both provide proactive guidance, own escalations, and lead strategic planning, acting as part advisor, part engineer, part communicator, and part advocate for your organisation within Canonical. They are generalists, but they also dive deep into your unique environment, spotting patterns, anticipating risks, and delivering solutions tailored to your infrastructure and priorities.
As one of our TAMs puts it: “Since we work face-to-face a lot, big, long-term projects depend on our work. Therefore, expectations can be high to deliver good results and to stay informed about the customer environments and projects in a much deeper way than our typical support relationships.”
With this level of commitment, customers experience faster problem resolution, smooth and confident upgrades, expert advice on scaling, and thorough performance troubleshooting, all while catching issues early to avoid downtime. In short:
TAMs provide strategic, proactive guidance and planning to help you make confident, long-term decisions.
DSEs embed with your team daily, actively guiding hands-on issue resolution and upgrade execution to maintain stability and momentum.
Real-world impact: how our dedicated support teams make a difference
Migration blockers, resolved live
Running mixed x86 hardware environments is possible but presents unique requirements, even with standard technologies like live migration. One enterprise, for example, wished to migrate virtual machines dynamically and seamlessly between AMD EPYC and Intel Xeon hosts. Subtle differences in CPU instruction set support made live Linux Kernel Virtual Machine (KVM) migrations unreliable, impacting project deployment. Canonical’s Dedicated Support Engineer (DSE) assigned to that enterprise stepped in, working side‑by‑side with the enterprise and AMD’s engineering team to define a processor configuration consistent between EPYC and Xeon nodes, validate safe configurations, and design a repeatable migration process. What started as a pressing customer issue grew into a broader solution: the collaboration matured into a jointly authored public guide with AMD, ‘Preparing a cluster of mixed server processors for live KVM migrations’. Today, this approach enables any IT organization running mixed CPU environments to build in flexibility with qualified applications.
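The general idea behind defining a consistent processor configuration can be sketched with libvirt’s Python bindings, which can compute a guest CPU model that every host in a mixed cluster supports. This is only a minimal illustration of the pattern, not the published guide’s procedure: the host URIs below are hypothetical placeholders, and depending on your libvirt and QEMU versions, cross-vendor baselining may instead require manually picking a generic model such as x86-64-v2.

```python
# Minimal sketch: compute a common guest CPU baseline for a mixed
# AMD EPYC / Intel Xeon KVM cluster using the libvirt Python bindings.
# Host URIs are hypothetical placeholders.
import libvirt
from xml.etree import ElementTree

HOSTS = ["qemu+ssh://epyc-node1/system", "qemu+ssh://xeon-node1/system"]

def host_cpu_xml(uri):
    """Return the <cpu> element from a host's capabilities XML."""
    conn = libvirt.open(uri)
    try:
        caps = ElementTree.fromstring(conn.getCapabilities())
        cpu = caps.find("./host/cpu")
        return ElementTree.tostring(cpu, encoding="unicode")
    finally:
        conn.close()

def common_baseline(uris):
    """Ask libvirt for a guest CPU definition every host can support."""
    cpu_xmls = [host_cpu_xml(u) for u in uris]
    # Any one connection can compute the baseline from the collected CPU XML.
    conn = libvirt.open(uris[0])
    try:
        return conn.baselineCPU(cpu_xmls, 0)
    finally:
        conn.close()

if __name__ == "__main__":
    # The resulting <cpu> XML can be placed in each guest's domain
    # definition so live migration works in both directions.
    print(common_baseline(HOSTS))
```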
This case shows how DSEs don’t just solve local problems: they connect customers, vendors, and Canonical expertise to create solutions that are widely applicable across the open source community.
Scaling safely with confidence
A financial services provider needed to expand their OpenStack and Ceph deployment across three regional datacenters. Instead of duplicating data and adding operational complexity, we worked with their team to design a site-local storage layout that aligned with their compliance and performance goals – without downtime.
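A site-local layout of that kind is typically expressed through CRUSH rules that keep each pool’s replicas inside one datacenter’s subtree. The sketch below shows the general pattern only; the bucket names, pool names, and placement group counts are hypothetical and not the customer’s actual design.

```python
# Rough sketch: pin each site's pool to its own datacenter subtree in the
# CRUSH map, so data written at a site stays local to that site.
# Bucket and pool names are hypothetical placeholders.
import subprocess

SITES = ["dc-north", "dc-south", "dc-west"]  # CRUSH datacenter buckets

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for site in SITES:
    rule = f"replicated-{site}"
    pool = f"app-data-{site}"
    # Replicated rule rooted at the site's datacenter bucket, spreading
    # replicas across hosts within that site.
    run("ceph", "osd", "crush", "rule", "create-replicated",
        rule, site, "host")
    # Create the site's pool and point it at its local rule; the PG count
    # here is purely illustrative.
    run("ceph", "osd", "pool", "create", pool, "128", "128", "replicated")
    run("ceph", "osd", "pool", "set", pool, "crush_rule", rule)
```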
Building what’s not yet documented
Another customer was experimenting with NVIDIA BlueField SmartNICs and LXD integration – functionality not officially supported or documented. Working alongside them in a lab setup, we validated drivers, tested deployment patterns, and collaborated with Canonical’s LXD team to improve product support. Their proof of concept now helps define new best practices.
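Experiments like that often start from something very simple: handing a SmartNIC-backed port to an LXD instance and checking the driver stack from inside it. The sketch below assumes a hypothetical instance name and host interface, and passing a port through as a physical NIC device is only one of several possible patterns; it is not the customer’s validated setup.

```python
# Minimal lab sketch: attach a (hypothetical) SmartNIC-backed port to an
# LXD container as a physical NIC device, using the lxc CLI.
# Instance and interface names are placeholders.
import subprocess

INSTANCE = "bluefield-lab"
HOST_PORT = "enp65s0f0np0"  # hypothetical SmartNIC-backed interface

def lxc(*args):
    subprocess.run(["lxc", *args], check=True)

# Launch a test container and pass the port through as eth1.
lxc("launch", "ubuntu:24.04", INSTANCE)
lxc("config", "device", "add", INSTANCE, "eth1", "nic",
    "nictype=physical", f"parent={HOST_PORT}")

# Quick sanity checks from inside the container
# (ethtool may need to be installed in the image first).
lxc("exec", INSTANCE, "--", "ip", "link", "show", "eth1")
lxc("exec", INSTANCE, "--", "ethtool", "-i", "eth1")
```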
A strategic investment in platform reliability
Not all Ubuntu Pro customers need a TAM or DSE – but for enterprises running critical infrastructure, dedicated resources are a strategic multiplier.
We recommend them for:
Organizations with large or highly customized OpenStack, Ceph, LXD, or Kubernetes environments
Teams seeking proactive, forward-looking support
Companies that want a single, trusted technical contact – not a ticketing queue
Your Canonical TAM or DSE provides tailored guidance based on your platform, people, and priorities. They help translate Canonical’s engineering expertise into operational success – for everything from tuning disk I/O throughput to planning a transition to Juju 3.x or MicroK8s.
Summary: support as a force multiplier for your team
Built around our customers’ unique environments and operational reality, Canonical’s relational support centers on collaboration, trust, and tailored expertise. Your named support engineer becomes an embedded team member who lives and breathes your infrastructure – joining planning calls, tracking changes, and offering proactive advice based on deep knowledge of your environment to help you operate confidently at scale.
Ubuntu Pro is backed by a world-class support organization. With a TAM or DSE, that support becomes personal and proactive, a partner intimately familiar with your platform and business needs who accelerates your Linux support journey.
Ready to expand your team’s expertise with dedicated Linux support? Contact Canonical to learn how a TAM or DSE can help you accelerate, scale, and secure your open source infrastructure.