This summary highlights key updates to the Armbian build repository, including kernel adjustments, tooling fixes, and configuration changes. These contributions reflect ongoing efforts from the community to ensure system stability, compatibility, and enhanced performance across supported platforms.
“This doesn’t feel right. The package exists across all main distros. Probably not needed in every case… but it’s toolchain-related, so worth revisiting if merged.” Author: Werner View Commit ›
Armbian is a community-driven project maintained by a group of dedicated individuals in their limited free time. We provide the platform and tools for collaboration, but fixing every bug is beyond our capacity. Even large, well-funded teams face similar limits. That’s why we rely on the community—not just for reporting issues, but for actively helping to resolve them. View all commits and contribute at github.com/armbian/build Support Armbian development: Donate Today!
There is a new application available for Sparkers: ElecWhat. What is ElecWhat? Features: – Desktop notifications – Tray icon with unread count (aka AppIndicator) – Custom keyboard shortcuts (default) – Spellcheck – CLI & D-Bus interface to show/hide/toggle window – Very stable, I have been running it for 6+ months without crash/freeze – Can be trusted: Code easy to review…
The Launchpad project is almost 21 years old! Many people have contributed to the project over this lifetime, and we are thankful for all of them. We understand the value of a strong community and we are taking steps to reinvigorate Launchpad’s once-thriving community.
There are two common suggestions for getting started in open source: fixing bugs and contributing to documentation. Early in 2024, Canonical launched the Canonical Open Documentation Academy: an initiative that aims to break down barriers to open source contribution and work with the community to raise the bar for documentation practice. The Open Documentation Academy has been helping people get involved in open source, and it has also been helping projects achieve ever higher standards in documentation. Launchpad is one such project.
Today, we recognize and celebrate our community contributors. We hope they enjoyed contributing to Launchpad as much as we enjoyed working with them!
– gerryRcom
– Jared Nielsen
– Adriaan Van Niekerk
– Nathan Barbarick
Thank you for helping to make Launchpad great!
commit f980cfb3c78b72b464a054116eea9658ef906782
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Mon Oct 14 15:39:27 2024 -0400
Add debugging doc; fix broken links (#108)
* Add debugging doc; fix broken links
* fix broken links in debugging.rst
* fix spelling errors
* fix spelling errors
* fix spelling errors
* fix debugging link
* fix lots of formatting on recovered debugging.rst page
* add debugging.rst page into Launchpad development tips
---------
Co-authored-by: Alvaro Crespo <alvarocrespo.se@gmail.com>
commit c690ef5c7ed2d63d989c1f91b2883ed947904228
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Oct 9 14:32:59 2024 -0400
Add database table page; fix broken link (#107)
* Add database table page; fix broken link
* add spell check errors to custom_wordlist
* add rename-database-table to how-to/index.rst
* fix reference link to rename-database-table page in live-patching.rst explanation doc
* format rename-database-table to show as sql code
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
Co-authored-by: Alvaro Crespo <alvaro.crespo@canonical.com>
commit 5b319ab2899a326b7e96a5c001965e486a445448
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Oct 9 12:20:24 2024 -0400
Add missing codehosting doc; fix broken link (#106)
* Add missing codehosting doc; fix broken link
* add codehosting-locally to index.rst
* add spell check errors to custom_wordlist
* fix reference link for codehosting-locally in code.rst explanation section
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
Co-authored-by: Alvaro Crespo <alvaro.crespo@canonical.com>
commit 1fcb3a9588bcb62132ce0004bb98f54e28c6561c
Author: Nathan Barbarick <nathanclaybarbarick@gmail.com>
Date: Mon Sep 30 11:08:39 2024 -0700
Group articles of the Explanation section into proper subsections (#97)
* Remove How to go about writing a web application, per jugmac00.
* Group articles in the Explanation section into subsections, add introductory text.
* Add new sections for remaining ToC headings.
* Add codehosting.png, fix broken link (#104)
* add codehosting.png, fix broken link
* delete linkcheck_ignore item
* remove accessibility, upstream, and schema links (#102)
* add concepts.rst, fix broken link in code.rst (#105)
* add concepts.rst, fix broken link in code.rst
* add spellcheck errors to custom_wordlist
* add concepts to index.rst
* Add descriptions in the explanation index and move new concepts page.
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
commit ce5408a8ba919d22c5f5f01ff0396e1eb982d359
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Sep 12 08:11:00 2024 -0400
add concepts.rst, fix broken link in code.rst (#105)
* add concepts.rst, fix broken link in code.rst
* add spellcheck errors to custom_wordlist
* add concepts to index.rst
commit eb5a0b185af6122720d44791aa8c98d52daf93e5
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Sep 6 04:00:51 2024 -0400
remove accessibility, upstream, and schema links (#102)
commit 766dc568b06e49afbb831c25a6163be31ab5064a
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Sep 5 03:09:19 2024 -0400
Add codehosting.png, fix broken link (#104)
* add codehosting.png, fix broken link
* delete linkcheck_ignore item
commit 317437262dd6d21bbb832e9603e4f84dbd4095b6
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Aug 16 15:02:25 2024 -0400
add 'Soyuz' link (#103)
commit f238c1f4e2322d5ad31c9d86615108856c9f8dfc
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 24 06:01:27 2024 +0100
oda spelling check on code doc (#90)
* oda spelling check on code doc
* oda spelling check on code doc
* Update .custom_wordlist.txt
---------
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit ff237feec8ee9fd6530ccd0aa1f940939ddedee0
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Tue Jul 23 14:44:29 2024 +0200
Check Spelling errors (Storm migration guide) (#92)
* Remove Storm Migration Guide from exclusion list
* Update code inline formatting and correct spelling errors
* Add accepted words
commit 8500de5b96e4949b23d6c646c65272b9c8180424
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Tue Jul 23 11:05:04 2024 +0200
Check Spelling (Database Performance page) (#91)
* Remove database performance page from exclusion
* Add accepted words
* Correct spelling errors
commit 06401ea4f554bd8eff483a03c5dea2508f942bdd
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:13:05 2024 +0200
Correct spelling errors
commit 9eb17247c1100dc7c23dcb2a0275064ed1dc7a19
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:11:13 2024 +0200
Add accepted words
commit a539b047d012d5078b097041d9072937d2247704
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:10:59 2024 +0200
Remove "Security Policy" from exclusion list
commit 7708a5fa7b6ed6c0856fa2722f917228c9127eb0
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Wed Jul 17 08:13:34 2024 +0200
Spell check (URL traversal + Navigation Menus) (#87)
* Remove Navigation Menu page from exclusion
* Add words to be excluded from spell check
* Correct spelling errors
* Remove "url-traversal" from exclusion list
* Update list of accepted words
* Update formatting and correct errors
---------
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit e952eb0aa98fe33a20517b82640d88c2c6a8fc5f
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jul 15 20:17:36 2024 +0100
oda spelling check on branches doc
commit 46170ead6fe34fde518fe8848e3d321b57506875
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Mon Jul 15 11:02:57 2024 +0200
Update formatting of URLs
commit 124245b2b4b5699596e7039f09f6d1f3211b409f
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Mon Jul 15 11:00:22 2024 +0200
Remove Launchpad Mail page from exclusion list
commit 141aa07f62d47e7b25581c113fe222679ca9135d
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 10 20:12:47 2024 +0100
oda spelling check on ppa doc
commit bdea1e1d11e88255eed19e335d840a278cefb134
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 10 20:08:37 2024 +0100
oda spelling check on ppa doc
commit 7a960016415d32bae99bccac8e7ee634d7034ce7
Merge: 1c6506b 3e12837
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jul 9 17:47:06 2024 +0100
Merge branch 'main' into spelling-feature-flags-doc
commit 1c6506b7e971fed802b3dfc85abc29bc0a075450
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jul 5 20:06:05 2024 +0100
oda spelling check on feature-flags doc
commit 27b2aa62c48dde374d4e27fae671b061eb97a46f
Merge: acb3847 d32c826
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 16:03:01 2024 +0200
Merge branch 'main' of https://github.com/canonical/launchpad-manual into javascript-buildsystem-page
commit 3dc90949b0bd2136347916be1b4b05e0041b2d54
Merge: 053a960 f193109
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 14:07:59 2024 +0200
Merge branch 'main' of https://github.com/canonical/launchpad-manual into fix-spelling-issues
commit 053a96086a8e649f0b135aa6eeb942b858f7ba5b
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 13:59:34 2024 +0200
Add word to resolve conflict in pull request
commit f19310999278be18a3d92443a7b22cf1b0e7e441
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jul 4 21:18:04 2024 +0100
oda spelling check on testing doc
commit 93e5fb8d8356b70b52401c69e7884a1dea2e8b46
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 18:44:24 2024 +0200
Remove exclusion added via rebase
commit d75ca31d26bd1731db6fad08c94c7d99bebc02c3
Merge: 54b74c2 5a2f090
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 18:09:04 2024 +0200
Merge branch 'fix-spelling-issues' of https://github.com/sfadriaan/launchpad-manual into fix-spelling-issues
commit 54b74c252952c5de24c0e232bbbe560f9c4c416e
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:50:08 2024 +0200
Correct spelling errors, verified by external documentation, converted to en-gb and corrected formatting
commit f1c66b1ce59f6af9a678f86f6b4fa637df91bcb3
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:48:48 2024 +0200
Add correctly spelled words picked up by spell checker
commit 73f12ca01f9cce4414702674cd24dc3d38e49304
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:47:42 2024 +0200
Remove javascript-integration-testing page from the exclusion list
commit acb384767214e3d432eafe062a2fb646f3c31938
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:07:25 2024 +0200
Update mailing list URL, spelling error correction
commit da06505e8a3431d50a815d16ca4f89a5d66c7a41
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:06:52 2024 +0200
Remove javascript-buildsystem from exclusion list
commit 2318addb0ea19de7813b5f6b16efc43d21584659
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:06:24 2024 +0200
Add words to exclusion list
commit 5a2f090a2da9083b3c3b658592ec43595e78eb0e
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:50:08 2024 +0200
Correct spelling errors, verified by external documentation, converted to en-gb and corrected formatting
commit ce333446e7c7501629d3ceab239183aed64af319
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:48:48 2024 +0200
Add correctly spelled words picked up by spell checker
commit 7649b104c9439dda5f938b2e0153e4d1c45f21b4
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:47:42 2024 +0200
Remove javascript-integration-testing page from the exclusion list
commit 017d19761d96d9c04a1ea61ac0e77bcf6a7b7cab
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Jul 3 11:42:33 2024 -0400
Fix 'Loggerhead' link
commit fda0691919cd849ff4c6ee24e4dc1e3d5e6b1682
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Jul 3 11:32:15 2024 -0400
Fix 'UI/CssSprites' link
commit f26faaef61e5ef48140bd2f84630c5d624041dad
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 3 09:18:02 2024 +0100
oda spelling check on translations doc
commit 13cb12c45e1a5826d27eaf497b7e6a2605d7ec6d
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jul 2 19:41:38 2024 +0100
oda spelling check on unittesting doc
commit cdab34e61a7c1009852a642e978b9027c2aad3d2
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jul 2 12:07:06 2024 -0400
Fix 'Running' link
commit dbe279acfef9eb736735b04ba474801d3f58a3f0
Author: Nathan Barbarick <nathanclaybarbarick@gmail.com>
Date: Fri Jun 28 19:55:08 2024 -0700
Restructure navigation menu using subsections in how-to.
commit 8592ed544881d50877f036073a6eec9de2e6356d
Author: gerryRcom <gerryr@gerryr.com>
Date: Sat Jun 29 09:49:34 2024 +0100
oda spelling check on css doc
commit 90608989d15cf2dbdf9a538a03517c03d87a3658
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 29 03:54:27 2024 -0400
Fix 'JavascriptUnitTesting' link (#72)
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit 61ab3a36a51cb6ee40d6132cc1028779115b8efd
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 29 03:43:47 2024 -0400
Fix 'Help' link (#70)
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit 89f08619f4c1cbb6e82bc95fd3cdc30b802e9c37
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 28 19:52:32 2024 +0100
oda spelling check on live-patching doc
commit 96924bd1cf580875d76ed28afa3db83d0d642247
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 08:44:30 2024 -0400
Fix 'Getting'
commit be6124ff67fc89a604ebad566805e7e535a01377
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 09:00:41 2024 -0400
Fix 'JavaScriptIntegrationTesting' link
commit da7f6bfa597f2ea1e8df57dbbec7217fd746268f
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 07:46:05 2024 -0400
Fix 'FixBugs'
commit 2ca5b808797ccd2c24cfb65a06d98e1db844b1b1
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Jun 27 11:02:31 2024 -0400
remove underscores
commit 7577f7674066d4e1d974e956ab2506e0d6f5a89b
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jun 25 13:22:07 2024 -0400
Fix '../Trunk'
commit deb42beb594b860356dfe11297516d26609d1018
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Jun 27 11:52:33 2024 -0400
Fix 'Database/LivePatching'
commit ded351427d3f694d16855f3b4c44e085eb4e551c
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 27 19:47:05 2024 +0100
oda spelling check on merge-reviews doc
commit c07847f039bc9414410ebf134d263174004a0a67
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 27 08:22:23 2024 +0100
oda spelling check on db-devel doc
commit 6a54f46fedfcfdb3385dd8ff5c2f1d4a9ce45f15
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jun 25 12:32:41 2024 -0400
remove updated link from linkcheck_ignore
commit 6eedaa9f3d5eaee21242280b1ead71c376698c4e
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 22 12:59:24 2024 -0400
Fix 'PolicyAndProcess/DatabaseSchemaChangesProcess'
commit 92d1b15eafc2a90a88e24afd5a6938f277314d8a
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jun 26 19:30:14 2024 +0100
oda spelling check on css-sprites doc
commit aeb7e5c2d4186ba45cb3279e24c3716e7752b32c
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jun 25 20:06:46 2024 +0100
oda spelling check on registry doc
commit 13eb716d534b41ee60ac6adbf8b9d8fb96ca96cd
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 24 20:00:43 2024 +0100
oda spelling check on triage-bugs doc
commit b7ad120ca563e3a1ac82f5ec7c7742874b53d88b
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 24 19:51:08 2024 +0100
oda spelling check on triage-bugs doc
commit a83419e47f21071ae53a7036210a7c650195e8ef
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 21 21:54:21 2024 +0100
oda spelling check on schema-changes doc
commit 486b54241a46ec42f48a05a0081b238699c0557b
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 20 20:36:01 2024 +0100
oda spelling check on submitting-a-patch doc
commit a890a576681258d647d20b8fdc5c80b14f490d94
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jun 18 20:09:14 2024 +0100
oda spelling check on database-setup doc
commit b52d850a0d2456f7925a91cb3e2ff4a8c44711a5
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 17 12:18:09 2024 +0100
oda spelling check on contribute-to doc
commit 074e13a662821ba17d1c99e2814ef38fe2206a01
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 14 13:17:53 2024 +0100
oda spelling check on getting-help-hacking
commit 81b6f8025aecf35c48b6660510447e07910d4b8e
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 13 20:58:20 2024 +0100
oda spelling check on explanation-hacking
The Incus team is pleased to announce the release of Incus 6.12!
This release comes with some very long awaited improvements such as online growth of virtual machine memory, network address sets for easier network ACLs, revamped logging support and more!
On top of the new features, this release also features quite a few welcome performance improvements, especially for systems with a lot of snapshots and with extra performance enhancements for those using ZFS.
The highlights for this release are:
Network address sets
Memory hotplug support in VMs
Reworked logging handling & remote syslog
SNAT support on complex network forwards
Authentication through access_token parameter
Improved server-side filtering in the CLI
More generated documentation
The full announcement and changelog can be found here. And for those who prefer videos, here’s the release overview video:
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
One of my favourite authors, Douglas Adams, once said that “we are stuck with technology when what we really want is just stuff that works.” Whilst Adams is right about a lot of things, he got this one wrong – at least when it comes to infrastructure. As our Infra Masters 2025 event demonstrated, infrastructure is the technology that makes everything work – from managing a satellite in outer space, to, say, livestreaming an event.
Held at Canonical’s London office on March 31st, Infra Masters 2025 brought together operations leaders and architects to explain how to build infrastructure that transforms industries.
If you didn’t attend the event, don’t worry – and, naturally, “DON’T PANIC.” You’ve come to the right place to find out what you might have missed. You can watch the full talks on YouTube, or read this article for an overview of everything that took place, from key insights from ESA on modernizing infrastructure to BT’s network cloud transformation.
So, without further ado, what can we learn from Infra Masters 2025?
1. Choose a partner that helps you speed up innovation
Modernizing infrastructure isn’t just about choosing the right software. It’s also about who you choose to work with, and how you work with them. According to BT, it was a fundamental shift from a vendor-consumer relationship to a genuine partnership with Canonical that invigorated their efforts to modernize their infrastructure.
As their representatives disclosed in the first talk of the day, BT have worked with Canonical since 2019 on their ongoing infrastructure transformation. Despite the challenges posed by the transition, the collaboration between BT and Canonical has been marked by open communication, shared goals, and regular training sessions to upskill engineers. The secret to a successful partnership?
“It’s the collaborative approach, working together working on shared goals.”
– Curtis Haslam, Network Cloud Senior Manager, BT Group
BT was keen to emphasize that transparent collaboration requires willingness to offer and accept constructive feedback. Haslam explains, “we’re very honest”, because “acknowledging mistakes from both sides” is the best way to fix errors. Great collaboration becomes possible through this kind of mutual trust.
2. Double your output with Kubernetes
The European Space Agency (ESA) plans to double the number of missions they run by 2030 – no easy feat. With critical projects covering everything from searching for habitable worlds to clearing the 130 million pieces of orbiting debris that threaten satellites, each mission has its own individual compute and infrastructure needs, making the goal a particularly ambitious one. For organizations interested in how to increase output by modernizing their infrastructure, ESA’s presentation may provide some helpful tips.
As Michael Hawkshaw, ESA Mission Operations Infrastructure IT Service Manager at ESOC (European Space Operations Centre), explains, with Canonical Kubernetes ESA has been able to automate the deployment of both infrastructure and “all the software needed for those missions as well.” Canonical Kubernetes readily plugs into Ceph and PostgreSQL, for instance, which are part of ESA’s stack. These automations have, naturally, freed the team to work on other mission-critical tasks. Likewise, by increasing availability and reducing “wasted space” on database servers, Canonical Kubernetes has helped ESA to support more missions.
Want to try Kubernetes but don’t think you have the capacity? ESA was in the same position. Michael acknowledges that Kubernetes is fast-moving and has a steep learning curve. However, with Canonical managing the systems for them, on call “24/7, 7 days a week”, ESA can always “get the support” they need, “with active monitoring” of their setup.
3. Open source means never getting stuck
For one thing, open source software gives organizations the flexibility to scale and adapt to shifting requirements. To return to our initial Adams quote, while it may be true that with proprietary solutions you can become “stuck with technology”, you’re never stuck with open source.
For BT, open source was critical to building the Network Cloud, the infrastructure project that helped them to achieve their goal of bringing 5G to the UK. The Network Cloud replaced a variety of disparate, proprietary vertically-integrated stacks. These stacks each required individual management, oversight, compliance, authentication, and deployment, making them time- and cost-intensive. The challenge was to replace these with infrastructure that was highly dependable and automated, providing consistent high performance.
The answer was consolidating their infrastructure into a single, trusted open source stack – including MAAS for bare-metal provisioning, Ceph for storage, LXD for container management, and Juju for automation. James Cawte, BT Group’s Cloud Network Principal Engineer, noted that streamlining operations in this way left the app developers free to ‘focus solely on their application development’, rather than trying to make the infrastructure work. For more details on BT’s partnership with Canonical, explore the case study.
As Canonical’s Thibaut Rouffineau noted, open source software helps organizations to scale resources whilst keeping costs down. Moving away from proprietary software reduces expensive licences and contracts, whilst enabling companies to optimize their infrastructure – rather than getting stuck with technology, it just works.
4. The future is in the clouds, on the edge, and managed by Kubernetes
…When it comes to infrastructure, that is.
Moving forward, BT’s focus is on enhancing edge computing capabilities for better 5G performance, optimizing infrastructure for containerized applications, and integrating serverless computing to improve developer workflows.
This shift to edge computing seems likely to become increasingly common as organizations choose to move away from the public cloud and data centers, and distribute their infrastructure across edge devices. Kubernetes, and the automations it makes possible, will form a key part of managing this infrastructure, offering a new approach to how we think about the future of technology.
Meanwhile, ESA’s plan to double the number of satellites it currently flies by 2030 relies on Kubernetes and cloud-native computing. AI tooling managed by Canonical Kubernetes has increased the amount of data that can be stored and retrieved, whilst Ceph and PostgreSQL support these cloud-based workloads. As space exploration continues to evolve, ESA can more easily scale its workloads thanks to these tools.
So, to take a final leaf from Adams, and “summarize the summary of the summary,” what can we learn from Infra Masters?
The relationship between vendor and client is critical, and moving towards a more collaborative partnership can improve innovation, efficiency, and in-house skillsets. Equally, the support provided by a company like Canonical to aid migration efforts, for example, can take the pressure off in-house engineers and avoid disrupting workflows.
The automations enabled by Canonical’s infrastructure portfolio improve efficiency, reduce costs and take the pressure off managing infrastructure. Choosing a managed solution can help organizations to get these benefits, without worrying about capacity or skill shortages.
Open source software is an increasingly important part of the stack for many companies, providing cost savings, the opportunity to scale, and even the ability to create a unified platform and integrate infrastructure seamlessly across different environments.
As the company behind Ubuntu, Canonical’s software is widely used, trusted, and provides a great option for organizations looking to explore open source options for their infrastructure – whether it’s to lower costs, gain architecture freedom or cloudify their data center. And that’s all, folks. So long, and thanks for all the fish – and by fish, I mean “humoring my … creative references.” See you at the next Infra Masters!
As organizations shift toward multi-cloud environments and real-time service delivery, edge computing is becoming central to modern application strategies. But this decentralization introduces new security challenges. With devices and data distributed across multiple locations, securing the edge is no longer optional.
What does edge security really mean? It’s about protecting applications, data, and services that are processed and delivered outside traditional data centers—closer to users and devices. This requires securing each edge node against attacks, ensuring encrypted communication, and maintaining control over who and what accesses your infrastructure, even in highly distributed environments.
Modern Application Delivery Controllers (ADCs)—like SKUDONET—play a key role in building resilient and secure edge infrastructures.
What Is Edge Computing and Why It Matters
Edge computing is a distributed IT architecture where data processing takes place as close as possible to the data source—rather than relying exclusively on centralized cloud infrastructures.
This approach addresses critical needs in modern application delivery: real-time responsiveness, reduced latency, and local autonomy. By processing data at the edge of the network, organizations minimize the delay associated with long-distance data transmission and ensure better service continuity, even in scenarios where connectivity is limited or intermittent.
Key benefits of edge computing for secure and efficient application delivery:
Reduced latency: Critical in use cases that demand real-time data processing.
Optimized bandwidth: Only relevant data is transmitted to the cloud or data center.
Improved reliability: Applications can continue functioning even when the connection to the core network is disrupted.
Enhanced scalability: Distributes workloads more efficiently across edge and core environments.
Edge computing plays a fundamental role in modern IT strategies, particularly when combined with security, observability, and automation—areas where SKUDONET provides integrated support for robust application delivery.
Common Edge Security Threats
As edge infrastructure grows, so does its exposure to security risks. The combination of distributed endpoints, increased network complexity, and reliance on real-time connectivity makes the edge a prime target.
Key threats include:
DDoS attacks targeting edge nodes to degrade or halt local services.
Malware and ransomware infiltrating through unmanaged or poorly secured devices.
Compromised or unmonitored IoT devices serving as entry points for broader network intrusions.
Data interception over insecure communication channels.
Securing the Edge with SKUDONET
SKUDONET Enterprise Edition provides an integrated ADC and load balancing solution built with security at its core—critical in edge environments.
Key features for edge security:
Secure Application Load Balancing
SKUDONET ensures that traffic distribution across nodes is not only efficient, but also resilient against malicious traffic patterns. Rate limiting, health checks, and session persistence features protect against overload and improve reliability.
Built-in Web Application Firewall (WAF)
The Enterprise Edition of SKUDONET includes an advanced WAF as a core feature, providing application-layer protection without requiring external modules. It mitigates common threats such as SQL injection, XSS, and other OWASP Top 10 vulnerabilities.
DDoS Protection and Traffic Filtering
Advanced filtering mechanisms allow for early detection and blocking of suspicious traffic. When deployed at the edge, this helps stop attacks before they propagate deeper into the network. The SKUDONET IPDS module operates at the edge, providing a layered defense that includes IP reputation filtering, protocol-level protection, DDoS mitigation, real-time blocklists, and Web Application Firewall—all before traffic reaches the load balancing tier.
SSL/TLS Termination and Encryption
SKUDONET supports SSL offloading and re-encryption, ensuring data confidentiality across all connections. This reduces the load on backend services and improves response times.
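A generic way to verify what a terminating edge node presents to clients is standard OpenSSL tooling (not SKUDONET-specific; the hostname below is a placeholder):
# inspect the certificate, protocol version, and cipher negotiated at the edge
openssl s_client -connect edge.example.com:443 -servername edge.example.com -brief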
Access Control and Authentication
Role-based access, IP whitelisting, and integration with identity providers support granular control over who accesses edge applications.
Edge computing is no longer a future trend—it’s a current reality. As organizations distribute their infrastructure to improve performance and user experience, edge security becomes a non-negotiable requirement.
SKUDONET enables secure application delivery at the edge by combining advanced load balancing with integrated threat protection, all managed from a centralized interface.
Discover SKUDONET Enterprise Edition or Try it for free for 30 days.
We came back from the Easter season much fatter, after stuffing ourselves with every kind of sweets, stews, and Ubuntu 25.04 Plucky Puffin releases, a.k.a. Fradinho. The conversation revolved around religious rituals, such as catching diseases from the neighbours’ spit, how to build homebrew Medium Wave transmitters, tricks and workarounds for e-ink ebook readers, plus all the new features of 25.04, therapy with the Wellbeing Panel, dangerous expeditions to the Berço, and promises of great merriment with drinking games during the Legislative elections.
Ampere and Canonical are pleased to celebrate new milestones in their ongoing partnership, including the completion of Canonical’s System-on-Chip (SoC) certification on AmpereOne®, and the extension of the partnership into the AI Platform Alliance, a strategic group of full stack ecosystem partners that provide enterprise-grade curated solutions specifically developed for AI inference use cases.
The ongoing partnership between Ampere and Canonical will continue to drive AI compute and cloudification forward through efficient, scalable, and sustainable infrastructure. The AmpereOne family delivers excellent performance-per-watt, making these SoCs a strong choice for AI-driven applications where scalability and efficiency matter. Canonical’s software stack (which includes infrastructure solutions like Canonical OpenStack and MicroCloud) complements Ampere’s hardware by providing a securely designed, flexible, and optimized platform for deploying AI inferencing workloads on Arm architecture.
AmpereOne Ubuntu certification
Ensuring a consistent, performant, and reliable software stack is critical to accelerating cloud and enterprise datacenter adoption, and the Ubuntu certification of the AmpereOne SoC is an important milestone in this journey. The AmpereOne SoC is Ampere’s latest flagship processor designed for cloud-native workloads. Thanks to Canonical’s rigorous SoC certification program, the AmpereOne platform has undergone extensive validation to guarantee compatibility and stability with the Ubuntu software stack.
Additionally, having Ampere-based platforms in Canonical’s certification labs ensures continuous integration and testing, providing customers with confidence that their infrastructure is optimized, certified, and ready for production. This certification is essential for Original Equipment Manufacturers (OEMs) and cloud providers looking to deploy AmpereOne-based servers at scale through Ubuntu server certification and long-term support.
“The certification of AmpereOne on Ubuntu is a significant milestone for our joint customers allowing them to proceed with confidence on platforms and services built on our products. The Ampere/Canonical partnership reflects our joint commitment to deliver energy-efficient, cloud-optimized solutions based on open source technology using the trusted Ubuntu software stack,” said Sean Varley, Chief Evangelist at Ampere.
AI Platform Alliance collaboration
The partnership between Ampere and Canonical also extends to a broader industry initiative – the AI Platform Alliance. As active members of the Alliance, both companies are committed to driving innovation and collaboration across the AI ecosystem. The Alliance fosters close cooperation between silicon providers, hardware accelerators, cloud and managed service providers, and system integrators to deliver optimized end-to-end AI platforms. Through this collaboration, Ampere and Canonical are working alongside ecosystem partners to simplify AI deployment, making it easier for developers and enterprises to unlock the full potential of AI on Arm-based platforms.
“Together, Ampere and Canonical are delivering the building blocks for a cloudified, AI-ready, and energy-efficient data center – setting the stage for the next generation of compute infrastructure,” said Youssef Eltoukhy, Silicon Alliances at Canonical.
Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
Ampere is a modern semiconductor company designing the future of cloud computing with the world’s first Cloud Native Processors. Built for the sustainable Cloud with the highest performance and best performance per watt, Ampere processors accelerate the delivery of all cloud computing applications. Ampere Cloud Native Processors provide industry-leading cloud performance, power efficiency and scalability.
The AI Platform Alliance is a strategic group of full stack ecosystem partners that provide enterprise-grade curated solutions specifically developed for AI inference use cases, and optimized for the industry-leading AI platform from Ampere. The Alliance offers high performance, open, efficient and sustainable solutions including design-in and ready-to-use AI inference-enabled services that help end customers and digital enterprises overcome the challenges of adapting to a constantly evolving market. For more information, visit https://platformalliance.ai/.
Despite the NVD (National Vulnerability Database) outage at NIST (the National Institute of Standards and Technology), Greenbone’s detection engine remains fully operational, offering reliable vulnerability scanning without relying on missing CVE enrichment data. Since 1999, the MITRE Corporation’s Common Vulnerabilities and Exposures (CVE) has provided free public vulnerability intelligence by publishing and managing information about […]
The dipole program is part of the Yagi-Uda project, a collection of tools designed for the analysis and optimization of Yagi-Uda antennas. This particular tool calculates the impedance of a single dipole, making it a useful utility for antenna engineers and amateur radio enthusiasts.
Installation on Ubuntu/Debian
To install the Yagi-Uda software suite, including dipole, run the following command:
sudo apt install yagiuda
This package includes several tools for Yagi-Uda antenna analysis and design, making it a valuable addition for those working with antennas.
Usage
To compute the impedance of a dipole, use the following command:
dipole <frequency> <length> <diameter>
For example, to calculate the impedance of a dipole at 7.1 MHz with a length of 20 meters and a diameter of 1.5 mm, run:
dipole 7.100mhz 20m 1.5mm
Example Output:
Self impedance of a dipole:
7.100000 MHz, length 20.000000 m, diameter 1.500000 mm, is
Z = 62.418686 -48.363233 jX Ohms
This output indicates:
Frequency: 7.1 MHz
Length: 20 meters
Diameter: 1.5 mm
Impedance (Z): 62.42 − j48.36 Ω
The negative reactance (−j48.36 Ω) indicates the dipole is capacitive, meaning it is electrically short at this frequency. To achieve resonance (purely resistive impedance), the dipole length should be slightly increased.
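As a quick sanity check, a common amateur-radio rule of thumb (not part of the yagiuda package, and less precise than the program’s thin-wire model) estimates the resonant length of a half-wave dipole as:
Length (m) ≈ 143 / f (MHz)
143 / 7.1 ≈ 20.14 m
So the 20 m element is slightly short of resonance, which is consistent with the capacitive reactance reported above.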
Related Tools
The Yagi-Uda project includes additional tools that help with various aspects of antenna design and optimization:
first – Initial calculations for antenna design
input – Processes input parameters for analysis
output – Displays calculated results
optimise – Helps refine antenna parameters for better performance
Each of these tools contributes to designing and analyzing Yagi-Uda antennas effectively.
Supported Platforms
The Yagi-Uda project was primarily developed for UNIX-based systems, including Linux distributions such as Ubuntu and Debian. While efforts were made to port it to other operating systems, its primary focus remains on UNIX environments.
Reporting Bugs
If you encounter any issues while using dipole or other Yagi-Uda tools, you can report them to Dr. David Kirkby (G8WRB) at david.kirkby@onetel.net. Providing clear, reproducible steps will help ensure that reported bugs are addressed efficiently.
Conclusion
For amateur radio operators and engineers working with Yagi-Uda antennas, the dipole program is a valuable tool for analyzing a single dipole’s impedance. With an easy installation process on Debian-based systems, it is an accessible and practical choice for antenna analysis.
Ubuntu MATE 25.04 is ready to soar! 🪽 Celebrating our 10th anniversary as an official Ubuntu flavour with the reliable MATE Desktop experience you love, built on the latest Ubuntu foundations. Read on to learn more 👓️
A Decade of MATE
This release marks the 10th anniversary of Ubuntu MATE becoming an official Ubuntu flavour. From our humble beginnings, we’ve developed a loyal following of users who value a traditional desktop experience with modern capabilities. Thanks to our amazing community, contributors, and users who have been with us throughout this journey. Here’s to many more years of Ubuntu MATE! 🥂
What changed in Ubuntu MATE 25.04?
Here are the highlights of what’s new in the Plucky Puffin release:
Celebrating 10 years as an official Ubuntu flavour! 🎂
Optional full disk encryption in the installer 🔐
Enhanced advanced partitioning options
Better interaction with existing BitLocker-enabled Windows installations
Improved experience when installing alongside other operating systems
Major Applications
Accompanying MATE Desktop 🧉 and Linux 6.14 🐧 are Firefox 137 🔥🦊, Evolution 3.56 📧, and LibreOffice 25.2.2 📚. See the Ubuntu 25.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
Welcome to this week’s Armbian Build Highlights! We’ve added support for the Qcom Robotics RB5 and delivered a stack of board-level fixes and improvements. Bootloaders are getting version upgrades, new device tree tweaks are live, and key driver updates are in. Whether you’re tuning SPI displays or tweaking governors, there’s something here for every dev. Let’s dive in!
New Board Support
Qcom Robotics RB5 Support Added View Commit » by FantasyGmm
sun55iw3-dev.config: Changed default CPU governor to ondemand View Commit »
Ubuntu 25.04, codenamed “Plucky Puffin”, is here. This release continues Ubuntu’s proud tradition of integrating the latest and greatest open-source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, working with the community and our partners to introduce new features and fix bugs.
Ubuntu 25.04 introduces GNOME 48 with triple buffering for smoother performance, HDR settings, and new features like a Wellbeing Panel and Preserve Battery Health mode. A new modern PDF reader, Papers, is now the default.
The installer now offers a smoother experience when installing alongside other operating systems, with better BitLocker support, and advanced partitioning.
Built on the Linux 6.14 kernel, this release brings a new scheduling system with sched_ext, enhanced Wine/Proton gaming support through the new NTSYNC driver, and better container tooling via decoupled bpftools and linux-perf.
Developer experience takes a leap forward with the introduction of devpacks. These snap bundles deliver the latest Go and Spring ecosystems, alongside updated toolchains for Python, Rust, .NET, LLVM, OpenJDK, and more.
Ubuntu 25.04 also expands confidential computing to on-premise environments with AMD SEV-SNP host support, and introduces a new ARM64 Desktop ISO for next-gen hardware.
Networking and identity management see continued improvements, including secure time sync with NTS, better Active Directory (AD) integration, cloud authentication against EntraID and Google identity, and DNS-aware wait-online logic with Netplan.
The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being released today. More details can be found for these at their individual release notes under the Official Flavours section:
Users of Ubuntu 24.10 will be offered an automatic upgrade to 25.04 if they have selected to be notified of all releases rather than just LTS upgrades. For further information about upgrading, see:
As always, upgrades to the latest version of Ubuntu are entirely free of charge.
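For reference, a minimal sketch of the usual command-line upgrade path from 24.10 (the graphical Software Updater offers the same upgrade; consult the official upgrade documentation above for details):
# interim releases are offered when Prompt=normal is set in /etc/update-manager/release-upgrades
sudo apt update && sudo apt full-upgrade
sudo do-release-upgrade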
We recommend that all users read the release notes, which document caveats, workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:
Ubuntu is a full-featured Linux distribution for desktops, laptops, IoT, cloud, and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:
The Xubuntu team is happy to announce the immediate release of Xubuntu 25.04.
Xubuntu 25.04, codenamed Plucky Puffin, is a regular release and will be supported for 9 months, until January 2026.
Xubuntu 25.04, featuring the latest updates from Xfce 4.20 and GNOME 48.
Xubuntu 25.04 features the latest Xfce 4.20, GNOME 48, and MATE 1.26 updates. Xfce 4.20 features many bug fixes and minor improvements, modernizing the Xubuntu desktop while maintaining a familiar look and feel. GNOME 48 apps are tightly integrated and have full support for dark mode. Users of QEMU and KVM will be delighted to find new stability with the desktop session—the long-running X server crash has been resolved in Xubuntu 25.04 and backported to all supported Xubuntu releases.
The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy the first few days after the release, we recommend using the torrents if possible.
We want to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
Xfce 4.20, released in December 2024, is included and contains many new features. Early Wayland support has been added, but is not available in Xubuntu.
GNOME 48 apps, including Font Viewer (gnome-font-viewer) and Mines (gnome-mines), include a refreshed appearance and usability improvements.
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead, you might just see a Xubuntu logo, a black screen with an underscore in the upper left-hand corner, or a black screen. Press Enter, and the system will reboot into the installed environment. (LP: #1944519)
You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox).
OEM installation options are not currently supported or available.
Please refer to the Xubuntu Release Notes for more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions.
The main Ubuntu Release Notes cover many other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
In addition to all the regular testing, I am testing our snaps in a non-KDE environment; so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some snaps, and on file open for others. I am working on a hopeful fix.
Next week I will have ( I hope ) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful
The Lubuntu Team is proud to announce Lubuntu 25.04, codenamed Plucky Puffin. Lubuntu 25.04 is the 28th release of Lubuntu, the 14th release of Lubuntu with LXQt as the default desktop environment. With 25.04 being an interim release, it will be supported until January of 2026. If you're a 24.10 user, please upgrade to 25.04 […]
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 25.04 code-named “Plucky Puffin”. This marks Ubuntu Studio’s 36th release. This release is a Regular release and as such, it is supported for 9 months, until January 2026.
Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.
This release is dedicated to the memory of Steve Langasek. Without Steve, Ubuntu Studio would not be where it is today. He provided invaluable guidance, insight, and instruction to our leader, Erich Eickmeyer, who not only learned how to package applications but learned how to do it properly. We owe him an eternal debt of gratitude.
You can download Ubuntu Studio 25.04 from our download page.
Special Notes
The Ubuntu Studio 25.04 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.
Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.
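For those writing the image to a USB stick on Linux, one minimal sketch (the ISO filename and /dev/sdX are placeholders – verify the target device with lsblk first, since dd overwrites it without asking):
# confirm which device is the USB stick before writing
lsblk
sudo dd if=ubuntustudio-25.04-dvd-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync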
Full updated information, including Upgrade Instructions, is available in the Release Notes.
Upgrades from 24.10 should be enabled within a month after release, so we appreciate your patience. Upgrades from 24.04 LTS will be enabled after 24.10 reaches End-Of-Life in July 2025.
New This Release
GIMP 3.0!
The long-awaited GIMP 3.0 is included by default. GIMP is now capable of non-destructive editing with filters, better Photoshop PSD export, and so very much more! Check out the GIMP 3.0 release announcement for more information.
Pencil2D
Ubuntu Studio now includes Pencil2D! This is a 2D animation and drawing application that is sure to be helpful to animators. You can use basic clipart to make animations!
The basic features of Pencil2D are:
layers support (separate layers for bitmap, vector, and sound)
bitmap drawing
vector drawing
sound support
LibreOffice No Longer in Minimal Install
The LibreOffice suite is now included only in the full desktop install. This will save space for those who want a minimal setup.
Invada Studio Plugins
Beginning this release we are including the Invada Studio Plugins first created by Invada Records Australia. This includes distortion, delay, dynamics, filter, phaser, reverb, and utility audio plugins.
PipeWire 1.2.7
This release contains PipeWire 1.2.7. One major feature this has over 1.2.4 is that v4l2loopback support is available via the pipewire-v4l2 package, which is not installed by default.
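If you need that v4l2 support, a minimal sketch of enabling it (the package name is as given above; pw-v4l2 is PipeWire’s wrapper script, mentioned here as an assumption about what the package provides):
sudo apt install pipewire-v4l2
# then run a video client through the PipeWire v4l2 layer
pw-v4l2 <your-application>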
PipeWire’s JACK compatibility is configured to work out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.
However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.
Ardour 8.12
This is, as of this writing, the latest release of Ardour, packed with the latest bugfixes.
To help support Ardour’s funding, you may obtain later versions directly from ardour.org. To do so, please make a one-time purchase or subscribe to Ardour on their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2025.
Deprecation of Mailing Lists
Our mailing lists are getting inundated with spam, and there is no proper way to fix the filtering since the lists use an outdated version of MailMan, so this release announcement will be the last we send out via email. For support, we encourage using Ubuntu Discourse, and for community news, click the notification bell in the Ubuntu Studio category there.
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird also became a snap so that the maintainers can get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as traditional Debian source packages. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!
Get Involved!
A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!
Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!
Special Thanks
Huge special thanks for this release go to:
Eylul Dogruel: Artwork, Graphics Design
Ross Gammon: Upstream Debian Developer, Testing, Email Support
Sebastien Ramacher: Upstream Debian Developer
Dennis Braun: Upstream Debian Developer
Rik Mills: Kubuntu Council Member, help with Plasma desktop
Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
Len Ovens: Testing, insight
Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
Simon Quigley: Qt6 Megastuff
Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
The Kubuntu Team is happy to announce that Kubuntu 25.04 has been released.
Codenamed “Plucky Puffin”, Kubuntu 25.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
The release features the latest KDE Plasma 6.3 desktop, KDE Gear 24.12.3, kernel 6.14, and many other updated applications and libraries.
Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.
In addition to the applications on our install media, 25.04 benefits from the huge number of applications in the Ubuntu archive, plus those installable via snap or other methods.
Please refer to our release notes for further details.
Note: For upgrades from 24.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.
The latest interim release of Ubuntu introduces “devpacks” for popular frameworks like Spring, along with performance enhancements across a broad range of hardware.
17 April 2025
Today Canonical announced the release of Ubuntu 25.04, codenamed “Plucky Puffin,” available to download and install from ubuntu.com/download.
Ubuntu 25.04 delivers the latest GNOME 48 with support for triple buffering and an improved install and boot experience. The introduction of a “devpack” for Spring expands toolchain availability in Ubuntu. Advancements in silicon enablement with Canonical’s partners deliver performance improvements for AI workloads on Intel GPUs, and support for confidential computing on AMD SEV-SNP.
Plucky Puffin combines the very latest in open source desktop technology with a focus on making high quality developer tooling readily available on Ubuntu. Ubuntu 25.04 delivers performance improvements across Intel GPUs, and a new purpose-built ISO for ARM64 hardware enthusiasts. Our increasing support for confidential computing with AMD SEV-SNP makes Ubuntu the target platform to deploy AI workloads securely and at scale on both public clouds and private data centers.
Ubuntu 25.04 delivers GNOME 48, in line with Canonical’s commitment to ship the freshest GNOME releases possible. Among other enhancements in GNOME, this version brings new features like a “Preserve Battery Health” mode that helps extend the lifespan of laptop batteries by optimizing charge cycles. A new “Wellbeing Panel” provides screen-time tracking and helps users manage their usage habits. With GNOME 48, Ubuntu gains HDR support out of the box, and the Canonical-developed triple buffering patches, which deliver higher performance and a smoother UX on desktops with lower rendering power. These patches are now part of the GNOME upstream project for the first time, benefiting all users of the GNOME desktop environment.
Plucky Puffin ships with “Papers” as its new default PDF reader. Papers offers a more modern design, improved performance, and a more user-friendly experience.
Following the retirement of Mozilla’s geolocation service, Ubuntu 25.04 uses a new geolocation provider: BeaconDB. This new geolocation service enables automatic timezone detection, weather forecasting and night light features in the desktop.
Linux 6.14 kernel delivers improved scheduling
This release delivers the latest Linux kernel, following Canonical’s new policy. Kernel developers can now make use of a new scheduling system, sched_ext, which provides a mechanism to implement scheduling policies as eBPF programs. This enables developers to defer scheduling decisions to standard user-space programs and implement fully functional hot-swappable Linux schedulers, using any language, tool, library, or resource accessible in user-space.
A new NTSYNC driver that emulates WinNT sync primitives is also available, delivering better performance potential for Windows games running on Wine and Proton (Steam Play).
The bpftools and linux-perf tools have been decoupled from the kernel version, making dependency management easier for developers working with containers. These tools are now shipped in their own packages.
The installer delivers an improved user experience for those installing Ubuntu alongside other operating systems, with advanced partitioning and encryption options, as well as better interaction with existing BitLocker-enabled Windows installations.
To further improve the boot experience in future releases, Ubuntu Desktop and Ubuntu Server will include Dracut as an alternative to initramfs-tools. Plucky Puffin offers Dracut as an experimental feature, enabling users to test it ahead of its inclusion in Ubuntu 25.10.
Cutting-edge toolchains and devpacks
Ubuntu 25.04 comes with the latest toolchains for Python, Golang, Rust, .NET, LLVM, OpenJDK and GCC.
Additional early-access upstream versions such as OpenJDK 24ea, OpenJDK 25ea, and GCC 15 are also available. The .NET plugin in Snapcraft delivers improvements for .NET content snaps, and provides increased parity with MSBuild options.
With this release, Canonical is expanding toolchain availability on Ubuntu to a broader set of developer tools like formatters and linters, delivering the latest versions in snap bundles known as “devpacks.”
The first of these is a new “devpack-for-spring” snap that brings the latest Spring Framework and Spring Boot projects to Ubuntu, enabling application developers to more easily build and test their applications using the latest Spring project versions – Spring Framework 6.1 and 6.2, and Spring Boot 3.3 and 3.4.
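The announcement doesn’t spell out the installation command, but since devpacks ship as snaps, installation is presumably a one-liner along these lines (the --classic flag and channel are assumptions, not confirmed by the release notes):

```bash
# Hypothetical install of the devpack snap named in the announcement;
# the confinement flag and channel are assumptions.
sudo snap install devpack-for-spring --classic
```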
Improved manageability and networking controls
Canonical continues to deliver identity and access management features for system administrators which will be available in all Ubuntu LTS releases, including many enhancements to Authd, Ubuntu’s new authentication service for cloud identity providers. This service now supports Google IAM in addition to Entra ID. ADSys, the Active Directory Group Policy client for Ubuntu, supports the latest Polkit and comes with improvements and bug fixes to certificate enrolment.
Canonical continues to enable Ubuntu across a broad range of hardware. The introduction of a new ARM64 Desktop ISO makes it easier for early adopters to install Ubuntu Desktop on ARM64 virtual machines and laptops.
Qualcomm Technologies is proud to collaborate with Canonical and is fully committed to enabling a seamless Ubuntu experience on devices powered by Snapdragon®. Ubuntu’s new ARM64 ISO paves the way for future Snapdragon enablement, enabling us to drive AI innovation and adoption together.
Leendert van Doorn, SVP, Engineering at Qualcomm Technologies, Inc.
Ubuntu 25.04 introduces full-featured support for Intel® Core™ Ultra 200V series with built-in Intel® Arc™ GPUs and Intel® Arc™ B580 and B570 “Battlemage” discrete GPUs. The new additions include improved GPU and CPU ray tracing rendering performance in applications with Intel Embree support, such as Blender (v4.2+). Ray tracing hardware acceleration on the GPU improves frame rendering by 20-30%, due to a 2-4x speed-up for the ray tracing component. These GPUs also gain support for hardware-accelerated video encoding of AVC, JPEG, HEVC, and AV1, improving performance for these formats compared to software encoding. Developers will have access to the Intel Compute Runtime with newly introduced CCS optimizations and debugging support for Intel Xe GPUs, enabling easier development and improved AI workload speeds.
Canonical and Intel have a long-term collaboration to ensure that Intel hardware and software work seamlessly with Ubuntu, and have delivered again by enabling our best-in-class Xe2 built-in and discrete GPUs.
Hillarie Prestopine, VP and GM of GPU and System Software Engineering at Intel Corporation
Confidential computing support extended to on-premises use cases
Confidential computing represents a significant paradigm shift in security architecture, protecting virtual machine workloads from unauthorized access. This technology shields sensitive code and data at runtime from privileged system software and other VMs, by operating within a hardware-protected Trusted Execution Environment, keeping data encrypted while in system memory.
Canonical has long recognized confidential computing as an area of strategic importance. Ubuntu was the first Linux distribution to support confidential VMs as a guest OS across major public cloud providers, with built-in support for AMD SEV-SNP and Intel TDX technologies.
Today, Canonical is pleased to announce that Ubuntu now supports AMD SEV-SNP on virtualization hosts, made possible by QEMU 9.2. This will enable enterprises to deploy confidential VMs in on-premises data centers using Ubuntu as both the host and guest operating system.
Canonical’s continued investment in confidential computing reflects the importance of protecting workloads in increasingly complex environments. With Ubuntu 25.04 now having AMD SEV-SNP host support, customers can take full advantage of AMD hardware-based security features to help isolate virtual machines, safeguard memory integrity, and reduce attack surfaces. We’re proud to collaborate with Canonical to extend secure, scalable solutions across enterprise infrastructure.
Frank Gorishek, Corporate Vice President, Software Development, AMD
Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
Snapdragon is a product of Qualcomm Technologies, Inc. and/or its subsidiaries. Snapdragon is a trademark or registered trademark of Qualcomm Incorporated.
Ubuntu released its 20.04 (Focal Fossa) release 5 years ago, on March 23, 2020. As with the earlier LTS releases, Ubuntu committed to ongoing security and critical fixes for a period of 5 years. The standard support period is now nearing its end and Ubuntu 20.04 LTS will transition to Extended Security Maintenance (ESM) on May 29, 2025.
Users are encouraged to evaluate and upgrade to our latest 24.04 LTS release. The supported upgrade path from Ubuntu 20.04 LTS is via Ubuntu 22.04 LTS. Instructions and caveats for the upgrades may be found at:
Ubuntu 22.04 LTS and 24.04 LTS continue to be actively supported with security updates and bug fixes. All announcements of official security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found here:
Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.
Originally posted to the ubuntu-announce mailing list on Thu Apr 17 10:32:41 UTC 2025 by Florent ‘Skia’ Jacquet, on behalf of the Ubuntu Release Team.
The Elive Team is pleased to announce the release of 3.8.48. This new version includes: File Manager: Added actions such as creating torrents, printing documents, and streaming videos to ChromeCast. Notification Widget: Updated to feature a more advanced interface. DBUS: Communication fixes that resolve mounting problems and other miscellaneous application issues. Kernel: Backported from 6.11.10, which includes improved stability for the BTRFS filesystem. Drivers: Improved the r8168 driver, which previously caused computers to hang at boot due to excessive logs. Default Calculator: Switched to gcalculator, which is lighter and more…
We had an interesting, infernal week reading election manifestos about free software while cats rubbed against microphones; we talked about home-automation news and music assistants; fresh, free, private meta-search engines; the GIMP 3.0 update; mouth-watering anticipation of Ubuntu 25.04, out this Thursday; who wins in a fight between Fedora and Ubuntu; and some silliness about Adobe beetles and metaphorical methadone.
You can support the podcast using the Humble Bundle affiliate links: when you make a purchase through those links, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it’s worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you like.
If you’re interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you’ll be supporting us too.
Recently, I was involved in an event where a video was shown, and the event was filmed. It would be nice to put the video of the event up somewhere so other people who weren't there could watch it. Obvious answer: upload it to YouTube. However, the video that was shown at the event is Copyrighted Media Content and therefore is disallowed by YouTube and the copyright holder; it's not demonetised (which wouldn't be a problem), it's flat-out blocked. So YouTube is out.
I'd like the video I'm posting to stick around for a long time; this is a sort of archival, reference thing where not many people will ever want to watch it but those that do might want to do so in ten years. So I'm loath to find some other random video hosting site, which will probably go bust, or pivot to selling online AI shoes or something. And the best way to ensure that something keeps going long-term is to put it on your own website, and use decent HTML, because that means that even in ten or twenty years it'll still work where the latest flavour-of-the-month thing will go the way of other old technologies and fade away and stop working over time. HTML won't do that.
But... it's an hour long and in full HD. 2.6GB of video. And one of the benefits of YouTube is that they'll make the video adaptive: it'll fit the screen, and the bandwidth, of whatever device someone's watching it on. If someone wants to look at this from their phone and its slightly-shaky two bars of 4G connection, they probably don't want to watch the loading spinner for an hour while it buffers a full HD video; they can ideally get a cut down, lower-quality but quicker to serve, version. But... how is this possible?
There are two aspects to doing this. One is that you serve up different resolutions of video, based on the viewer's screen size. This is exactly the same problem as is solved for images by the <picture> element to provide responsive images (where if you're on a 400px-wide screen you get a 400px version of the background image, not the 2000px full-res version), and indeed the magic words to search for here are responsive video. And the person you will find who is explaining all this is Scott Jehl, who has written a good description of how to do responsive video which explains it all in detail. You make versions of the video at different resolutions, and serve whichever one best matches the screen you're on, just like responsive images. Nice work; just what the doctor ordered.
But there's also a second aspect to this: responsive video adapts to screen size, but it doesn't adapt to bandwidth. What we want, in addition to the responsive stuff, is that on poor connections the viewer gets a lower-bandwidth version as well as a lower-resolution version, and that the viewer's browser can dynamically switch from moment to moment between different versions of the video to match their current network speed. This task is the job of HTTP Live Streaming, or HLS. To do this, you essentially encode the video in a bunch of different qualities and screen sizes, so you've got a bunch of separate videos (which you've probably already done above for the responsive part) and then (and this is the key) you chop up each video into a load of small segments. That way, instead of the browser downloading the whole one hour of video at a particular resolution, it only downloads the next segment at its current choice of resolution, and then if you suddenly get more (or less) bandwidth, it can switch to getting segment 2 from a different version of the video which better matches where you currently are.
Doing this sounds hard. Fortunately, all hard things to do with video are handled by ffmpeg. There's a nice writeup by Mux on how to convert an mp4 video to HLS with ffmpeg, and it works great. I put together a little Python script to construct the ffmpeg command line to do it, but you can do it yourself; the script just does some of the boilerplate for you. Very useful.
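For flavour, here is a minimal sketch of the kind of ffmpeg invocation the Mux writeup describes, run once per pre-encoded rendition (the filenames and segment length here are my own assumptions, not taken from either post):

```bash
# One rendition in, ~6-second MPEG-TS segments and a VOD playlist out.
# -c copy reuses the existing encode, so nothing is re-encoded here.
ffmpeg -i talk-720p.mp4 -c copy \
  -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename 'talk-720p_%03d.ts' \
  talk-720p.m3u8
```

You then point a master playlist at each rendition's .m3u8, listing its BANDWIDTH and RESOLUTION so the browser can switch between renditions segment by segment.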
So now I can serve up a video which adapts to the viewer's viewing conditions, and that's just what I wanted. I have to pay for the bandwidth now (which is the other benefit of having YouTube do it, and one I now don't get) but that's worth it for this, I think. Cheers to Scott and Mux for explaining all this stuff.
Focal Fossa will reach the End of Standard Support in May 2025, also known as End Of Life (EOL). Ubuntu 20.04 LTS has become a critical component for millions of IoT and embedded devices worldwide, including kiosks, digital signage solutions, industrial appliances, and robotic systems. The release has been foundational for companies innovating in various industries, from healthcare to manufacturing. Like every Ubuntu LTS reaching the end of its standard support, Focal Fossa will transition to Extended Security Maintenance (ESM). This blog post will guide developers and businesses through their options and explain how to activate ESM for ongoing support.
Before we dive in, let’s revisit why Ubuntu releases have an EOL.
Why do Ubuntu releases reach EOL?
Every Ubuntu LTS version offers 5 years of standard support, during which Canonical provides bug fixes and security updates for over 2,300 core packages. Continually improving and maintaining the security of Ubuntu over the standard support period requires substantial engineering resources – especially to meet the needs of our customers and the many critical infrastructures that rely on us.
However, our community and users always look forward to experiencing newer Ubuntu versions equipped with the latest packages. Thus, as we launch newer distributions, we inevitably have to reallocate resources. As a result, older LTS versions enter the ESM phase.
ESM offers continuous vulnerability management for critical and high-severity Common Vulnerabilities and Exposures (CVEs). Although we stop improving features of the LTS distribution, we maintain its security. Many organizations depend on ESM when immediate migration isn’t feasible to keep their infrastructures stable and secure.
ESM is a paid subscription service, as ongoing security updates still require dedicated engineering resources. However, subscribing to ESM is significantly more cost-effective than performing these maintenance tasks in-house. There are also free subscriptions for personal use.
Migrate to a supported LTS distribution
Migration planning shouldn’t be left until the last minute. Devices on Ubuntu 20.04 will soon cease receiving standard updates, eventually leaving them vulnerable. Ensuring your device’s security and operational excellence typically means migrating to a newer, supported version of Ubuntu.
Your best option is migration to a supported LTS, like Ubuntu 24.04. Ubuntu 24.04 continues the familiar environment of 20.04, with updated security enhancements, improved performance, and the latest hardware enablement. It provides continuity for workloads, minimizing disruption and maximizing the lifespan of your device deployments.
Device manufacturers might want to consider migrating to Ubuntu Core, specifically optimized for IoT and embedded environments. Ubuntu Core features – such as built-in OTA updates, full disk encryption and secure boot, strict confinement, and robust device recovery – make managing fleets much simpler. Additionally, Ubuntu Core extends standard support up to 10 years, significantly delaying the need for future migration.
If your resources are limited, consider packaging your applications using snaps, which neatly bundle applications with all their dependencies. Snaps provide a streamlined way to manage software without introducing unnecessary abstraction layers, which reduces attack surfaces and maintenance workload. Your snapped applications will effortlessly run across Ubuntu Desktop, Server, or Core.
Can’t migrate? Get 20.04 ESM
Sometimes migrating is complex or impractical, due to dependency challenges or logistical issues like deployed devices in the market. If your organization needs more time to migrate, activating ESM provides an extra 5 years of support.
ESM is part of the Ubuntu Pro subscription, delivering critical security updates for more than 2,300 packages included in Ubuntu Main. Here you’ll find packages such as Python, OpenSSL, OpenVPN, network-manager, sed, curl, systemd, udev, bash, OpenSSH, login, libc… For the complete list of what’s included in Ubuntu Main, you can visit the Ubuntu Packages Search tool.
But there is more. With Ubuntu Pro, you can access security coverage for an additional 23,000 packages beyond the operating system. For example, Boost, Qt, OpenCV, PCL, python-(argcomplete, opencv, pybind11, png…), cython, eigen, GTK, FFMPEG… These are some of the many packages covered in Universe that are now getting security maintenance from Canonical. If you want to see how many packages could benefit from this additional security on your devices, simply follow the next steps.
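Those steps aren’t reproduced in this excerpt, but a quick way to get that count on a given machine is the Ubuntu Pro client’s security-status report (a sketch, assuming the client, ubuntu-advantage-tools, is installed):

```bash
# Summarizes installed packages by origin (Main vs. Universe) and shows
# which of them would receive ESM security updates under Ubuntu Pro.
pro security-status
```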
Ubuntu Universe also includes well-used applications such as the Robot Operating System (ROS), where Canonical provides services such as ROS ESM for the upcoming EOL of ROS 1 Noetic.
Option 1: Purchase ESM through the Ubuntu Pro store
For small numbers of devices, purchasing ESM directly through the Ubuntu Pro Store is straightforward. Simply go to the Ubuntu Pro Store and complete your purchase.
Option 2: Purchase ESM through Canonical’s Ubuntu Pro for Devices
If you have a large fleet of devices, need to add support for estates that grow over time, or prefer one-time pricing over a subscription, we have an alternative option for you: Ubuntu Pro for Devices. Not only will it grant you access to the Ubuntu Pro subscription (and thus to ESM), but it also applies a discount-based pricing model depending on your compute module.
Security updates during ESM are accessed via a dedicated repository. This requires a subscription token, which you can get through your Ubuntu Pro account after subscribing.
To enable ESM, you just need to follow the instructions in your welcome email:
Install the Ubuntu Pro client
Attach your token to an Ubuntu machine
Activate ESM
Run apt upgrade to install the available updates (a sketch of the full sequence follows this list)
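As a rough sketch, the sequence on a 20.04 machine looks like this (YOUR_TOKEN stands in for the token from your Ubuntu Pro account; on 20.04 the client is normally preinstalled):

```bash
# Install the Ubuntu Pro client (usually already present on 20.04).
sudo apt install ubuntu-advantage-tools
# Attach this machine to your subscription; YOUR_TOKEN is a placeholder.
sudo pro attach YOUR_TOKEN
# Enable ESM coverage for Main and, if subscribed, Universe packages.
sudo pro enable esm-infra
sudo pro enable esm-apps
# Pull in the security updates now served from the ESM repositories.
sudo apt update && sudo apt upgrade
```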
Depending on your management infrastructure, there are various ways of enabling ESM in your fleet of machines. An Ubuntu Pro subscription gives you access to Landscape, which facilitates this process.
Landscape is a management and administration tool for Ubuntu. It allows you to monitor your systems through a management agent installed on each machine. The agent communicates with the Landscape server to update an automatically selected set of essential health metrics. Landscape allows you to remotely update and upgrade machines, and manage users and permissions.
Using remote script execution, Landscape can interact with the Ubuntu Pro client. It can also distribute tokens in air gapped environments.
With Ubuntu 20.04 reaching EOL in May 2025, businesses must proactively manage their devices. Working on an unsupported release introduces security risks no organization can afford. Although migrating to a newer LTS remains our primary recommendation, we acknowledge the challenges involved. When migration isn’t immediately feasible, activating ESM provides the necessary extension to securely bridge your organization to its next update cycle.
Get in touch for tailored advice on the best migration or support options for your organization.
Dear deepin Community Partners, To enhance system update experiences and improve stability, the deepin team is proud to announce the release of deepin 23.1. This version focuses on resolving installation efficiency issues arising from core component updates, significantly streamlining the setup process for new users. Additionally, it integrates multiple functional optimizations and bug fixes to further refine system performance and usability. I. Key Improvements in This Release 1. Hardware Compatibility & Kernel Optimization: Integrated 6.6/6.12 kernel updates, NVIDIA graphics driver upgrades, and Intel/AMD CPU microcode updates for comprehensive improvements in hardware compatibility and low-level performance. 2. Core Feature Enhancements: DDE: Introduced smart mirror source management and compact mode entry; …
Ubuntu Budgie 25.04 (Plucky Puffin) is a Standard Release with 9 months of support from your distro maintainers and Canonical, from April 2025 to January 2026. These release notes showcase the key takeaways for those upgrading from 24.10 to 25.04. Please note – there is no direct upgrade path from 24.04.2 to 25.04; you must uplift to 24.10 first or perform a fresh install. In these release notes the areas…
Welcome back! Since our first update in July 2024, our PureOS Crimson updates have addressed major foundational work. We've handled archive maintenance, infrastructure work, and critical functionality. These lay the foundation for a solid operating system. Now that the foundations are strong, we're working up the stack into the individual packages that form PureOS.
This release is an emergency release to fix security vulnerabilities in the Linux kernel and the implementation of the Perl programming language.
Changes and updates
Update Linux to 6.1.133, which fixes multiple security vulnerabilities that may lead to privilege escalation or information leaks.
Update perl to 5.36.0-7+deb12u2, which fixes a heap-based buffer overflow vulnerability that may result in denial of service, or potentially the execution of arbitrary code.
The way organizations deliver their applications has changed radically in recent years. While the primary focus used to be ensuring high availability and traffic balancing, current priorities are centered on guaranteeing resilience, automating processes, continuous integration, and more efficient management. This shift is driven by the adoption of more complex architectures, such as microservices and multi-cloud environments, which demand greater agility and scalability.
Companies face pressure to reduce delivery times, improve user experience, and strengthen cybersecurity, which is redefining the priorities of IT teams. The evolution of these factors requires application delivery solutions that can adapt to this new reality, offering greater control, visibility, and flexibility. As a result, traditional ADCs (Application Delivery Controllers) are encountering limitations in meeting these new demands, opening the door to more modern and efficient solutions.
Some of the main trends shaping this evolution include:
Increased adoption of distributed architectures and microservices
Hybrid and multi-cloud environments as the norm
Automation as a requirement for scaling
Greater sensitivity to total cost of ownership (TCO)
Need for real-time observability, traceability, and control
The Challenges of Traditional ADCs
Although ADCs remain essential for ensuring application availability and performance, many traditional models face significant challenges:
Operational complexity: Everyday tasks such as creating balancing rules, renewing certificates, or applying security policies require advanced technical knowledge or unintuitive processes.
High costs: Many traditional solutions involve opaque licensing, additional costs for functionalities, and total vendor dependence (vendor lock-in).
Lack of integration: Poor compatibility with DevOps tools, CI/CD, or modern platforms like Kubernetes.
Limited scalability: The scaling process often requires manual planning, physical resources, or complex reconfiguration.
Fragmented management: Traffic, security, and maintenance monitoring are not centralized, which complicates analysis and incident response.
What is Expected of a Modern Application Delivery Controller?
Today’s teams seek more than just application availability and performance. They need solutions that keep pace with the business, simplify daily management, offer total visibility, and adapt to diverse infrastructures. Modern ADCs have evolved to meet these demands.
Five key capabilities define next-generation ADCs:
1. Simplified and accessible management
New solutions prioritize a clear and intuitive interface, well-defined access roles, and automation options that eliminate repetitive tasks. This allows technical teams to focus on the strategic, not the operational.
2. Automatic scalability and flexible deployment
It’s no longer just about adding servers. Modern ADCs enable automatic scaling based on demand and deployment in any environment: bare metal, virtual machines, public, private, or hybrid clouds.
3. Integration with the DevOps ecosystem
A modern ADC cannot be a “black box.” It must speak the same language as your development stack: REST APIs, infrastructure as code (IaC), GitOps, CI/CD systems, etc.
4. Advanced security without friction
Web application firewall (WAF), protection against DoS/DDoS attacks, blacklists and whitelists, SSL certificate management, automatic validations with Let’s Encrypt… All of this, configurable and updatable without complex intervention.
5. Observability and control
The ability to understand what is happening in real time is essential. Modern ADCs integrate advanced monitoring, detailed logs, customized alerts, and visual dashboards for more informed decision-making.
SKUDONET: An ADC Aligned with New Needs
SKUDONET Enterprise Edition is an all-in-one solution that integrates advanced load balancing and security, managed from a single, unified platform. This empowers IT teams to efficiently manage both traffic distribution and application protection via an intuitive graphical interface, simplifying day-to-day administration, even for users without advanced technical expertise.
With SKUDONET, you get all the features you need right away: the product includes a full range of balancing and security capabilities from the start, allowing businesses to adapt to the most demanding environments without surprise costs or added complexity.
In addition, SKUDONET delivers automatic scalability and complete compatibility with diverse infrastructures, including hybrid and multi-cloud setups. Unified management and Let’s Encrypt integration for SSL certificate management give administrators total visibility and control over their systems. And when issues arise, specialized technical support from experienced engineers is available to ensure quick resolution.
Experience a platform that addresses all load balancing and security requirements in one product. Try SKUDONET Enterprise Edition for 30 days.
Today, on World Quantum Day, we celebrate the incredible advancements in quantum computing—a technology poised to revolutionize industries from healthcare to telecommunications. But as we marvel at its potential, we must also confront the challenges it brings, particularly the existential threat it poses to current encryption systems. At Purism, we are not just preparing for this future; we are actively shaping it by integrating post-quantum cryptography (PQC) into our devices and solutions.
A core goal of our design team at Canonical is to create a space where people can easily access valuable insights on design in the open source industry, along with best practices for designing complex systems that serve a diverse range of users. Over the years, we created and shared a wealth of great design resources.
However, after years of contributing across multiple sites and platforms, our online presence had become fragmented and in need of some TLC, or “Tender Loving Care” as the expression goes.
This series explores the methods and tools we used to organize a sprint that helped us streamline and redefine our digital footprint. Part one focuses on how we organized the sprint, while part two recounts the execution and results of the sprint.
In part one, we cover the following:
Auditing our online presence – How the team assessed our platforms, content, and user experience to find gaps and inconsistencies
Defining our mission and values – A deep dive into the purpose of the Canonical design team’s online presence and the guiding principles shaping its direction
Structuring a new information architecture – How we organized content to improve navigation and relevance
Planning the design sprint – Details on gathering participants, setting time constraints, defining deliverables, and preparing resources
Part two covers:
Executing the sprint – Day-by-day insights into the methods and tools we used, including how the team collaborated to implement changes
Post-sprint actions and delivery – A reflection on what was achieved, how the team communicated updates, and the next steps for maintaining a cohesive online presence
For each phase, you will find reflections on how our team experienced each exercise, the biggest obstacles, and lessons learned for next time. I hope you will find valuable insights and can learn from our experience to organize your own online presence sprint.
Auditing our online presence
The goals
Understand the scope of the sprint
Assess the key requirements for a potential website redesign
Assess potential priorities and objectives
The journey
Over time, the content and resources the design team shared online got increasingly scattered across various platforms. We had resources on company websites, community platforms, internal documents, and public forums. This made it hard to maintain over time.
To assess what we were working with, my first step was to audit the content. The team gathered and linked all the places where we shared design work, discussed insights, or contributed to design conversations. This was very manual and time-consuming. It involved a mix of tapping into institutional knowledge, archive diving, and old-fashioned googling.
I recruited a few teammates to maximize our chances of covering as much ground as possible. We used a shared Miro board to document every piece of content we found, where it was linked to, what platform it lived on, and what the goals of the content were. We also added comments about “freshness” and quality to help us contextualize what needed to be kept, modified, replaced, or deleted.
By the end, we had a list of content to evaluate based on its purpose, relevance, and efficiency.
Extract of our online presence audit
Next, I brought the whole team together to review our findings. We addressed key questions and areas of uncertainty, like how we wanted to represent our design system and the relationship between the Vanilla framework and our design libraries. This helped us understand our options and clarify the actions needed for each item as we move forward.
Audit Workshop: How to run it
Duration
2 to 3 days to gather content / 50 min to discuss findings
Preparation
Prepare your board
The goal here is to list all artifacts found and provide the right level of information, so that by the end of the group discussion the team is as close as possible to a keep/update/delete decision for each item.
Asynchronously, the moderator needs to ask the team to add links to every place where information is shared or hosted following this pattern:
Header post-it
Where the content was found and the link to the content
What topic it covers
Purpose post-it
What does the content try to do
Additional context like freshness, maintenance status, or quality levels
Thoughts/Questions post-it
Main points to discuss with the teams when reviewing the content
Suggested actions post-it
Suggested options to address challenges identified by the team
If the audit exposes a lot of content, the moderator should organize the different artifacts by topic or purpose to facilitate splitting the workshop audience into groups. Examples of categories can be related to the type of content, like “web pages”, “tutorials”, or their audience, like “internal material”, “designer/developer-oriented content”, etc.
Prepare your attendees
Once the board preparation is done, the moderator needs to ask attendees to familiarize themselves with the board to help reduce the time spent reading the information on the board.
Always make sure attendees are familiar with the tools you use. If people are not familiar with Miro, make sure to dedicate some time at the beginning of the workshop to get people up to speed. Do check authorization and whether attendees have all the required permissions to participate ahead of time.
Guidelines
Step 1: Present the context (10 min)
Introduce the goal. By the end of the session, there should be a clear set of actions for each artifact—whether to keep, update, modify, archive, or delete it.
Explain the criteria chosen to assess the content. In our example, we used purpose, relevance, accuracy, and efficiency.
If you had to group people to cover the entire artifact list, separate people into their groups.
Allow 5 minutes for people to get a refresher on what they will be looking at during the session.
Step 2: Discuss each piece (30 to 40 min, or as needed based on the number of items to review)
Assign a 3-to-5-minute timer per item, depending on how complex the discussions are expected to be, and prompt the discussion using the following questions:
Does this content serve a clear purpose?
Is it still relevant to our audience and goals?
Does it need updates, restructuring, or consolidation?
Should it be archived or deleted? Why?
At the end of the timer, label each piece as Keep, Update, Modify, Archive, or Delete, and note the rationale for each decision in the Action post-its.
If a consensus cannot be reached, the moderator should mark the item as To Be Resolved, document the reasons for the disagreement, and note the key individuals needed to address the issue.
Step 3: Wrap-up and next steps (10 min)
The moderator should quickly review the categorized content and decisions made. Make sure to highlight any contentious items to allow more people to come forward if they have a valuable perspective they can bring to the follow-up discussion.
Follow-up
If there are any unresolved items, schedule a follow-up with the people able to make decisions regarding the items. Do try to do this as soon as possible to keep the momentum going.
For each item with a clear resolution, reach out to the owners or maintainers to communicate the outcomes of the workshop if they were not among the attendees.
The takeaways
As mentioned earlier, this was the most manual and time-consuming step. Tracking down every piece of content proved challenging, and we had to rely heavily on team knowledge to find the bulk of it. While input from those who created the content was really helpful, it wasn’t always possible to get firsthand insights. This was an interesting lesson in documenting processes and keeping records. It highlighted places where we needed to be more systematic in our work to avoid losing too much knowledge with time.
Looking back, one key aspect I overlooked was clearly identifying the target audience for each piece. While the team addressed this gap during the sprint, discussing it earlier would have saved time and streamlined the process. This is an important reminder to keep a user-centric mindset—even designers can fall into the trap of making assumptions and taking shortcuts!
Defining our mission and values
The goals
Define a clear direction for the team
Establish the type and tone of content needed
Ensure alignment with the team’s mission and values
Lay the foundation for consistent and impactful messaging
The journey
Now that I understood where we were, I needed to figure out where we wanted to go. We had many discussions and ideas about the mission and values of the Canonical design team, but we needed to define them in a way that everyone could contribute to and align with.
To achieve this, I gathered members of the management team and representatives from our design working groups, ensuring a diverse range of perspectives while keeping the workshop focused and productive. For this exercise, I chose a template that highlights personal stories and experiences, as I believe these help show the deeper motivations that should fuel our broader vision. To guide the discussion, my co-moderator and I included prompting questions to spark ideas and encourage reflection.
At the end of the workshop, I was left with multiple statements and ideas that I needed to combine into a clear vision/mission statement. I wrote a couple of variations and asked people on the team to do the same so we could all look at these examples and pick the ones that felt the most representative.
In the end, this is what we decided on:
“Our purpose is to demonstrate how design can be both a leading and creative force in highly technical and engineering-led environments.
We want to share our journey of designing innovative and user-centric solutions for complex technical systems. This way, we hope to set high standards for excellence in design and engineering, as well as champion collaboration, curiosity, and resourcefulness.”
Extract of our mission workshop
Mission workshop: How to run it
Duration
1 hour 30 min
Preparation
Prepare your board
Create a board following this template:
A large circle labeled “Story” to gather stories people can share about their experiences.
Inside this circle, three overlapping circles labeled:
“Recognition”
“Contribution”
“Passion”
The intersection of these circles should be labeled as follows:
“Recognition” + “Contribution” = “Vision”
“Contribution” + “Passion” = “Mission”
“Recognition” + “Passion” = “Vocation”
The center of the three overlapping circles should be labeled “Purpose”
The definition of each label is covered in the guidelines section.
See the picture in this section for a visual example.
Write down guiding questions to help people get in the right state of mind.
Examples:
Where do you see our design web presence in 5 years?
What impact are you hoping to achieve with it?
What is the major goal for you with our updated design presence?
What is your goal as a designer in this team?
Who do you hope will visit our website(s) and for what reason? What are they looking for?
What do you want our website visitors’ main takeaway to be?
Why are we doing all of this? Why are you willing to invest significant resources into it?
What change are we trying to make in the world?
What do you think our company is good at now? Or in the future?
Prepare your attendees
Once the board preparation is done, the moderator needs to ask attendees to think of stories in advance so they don’t blank out on the day. In our case, the prompt given ahead of time was “Think of times when you were happy/grateful to be a designer in Open Source/at Canonical”.
Always make sure attendees are familiar with the tools you use. If people are not familiar with Miro, make sure to dedicate some time at the beginning of the workshop to get people up to speed. Do check authorization and whether attendees have all the required permissions to participate ahead of time.
Guidelines
Step 1: Present the context (10 min)
Introduce the goal. By the end of the session, the moderator should have enough material to craft a mission statement from the information shared and upvoted by the attendees.
Explain the three main exercises:
Share stories, to get people in the right frame of mind
Define purpose, to write down more focused ideas and concepts from the stories shared and questions asked
Vote, to prioritize the input we want to inform the mission and values statement
Step 2: Share stories (15 min)
Participants list 4 inspirational memories and stories about their relationship with the company and the design community. Moments when they learned something important. Landmark moments that made it all worthwhile. Then, each participant picks one and shares it with the group.
Step 3: Define purpose (35 min)
Based on these stories and inspirations, participants keep drilling down and add post-its with ideas for the following categories:
Recognition: What are the talents, competencies, and abilities we are/want to be recognized and admired for?
Contribution: To what causes and problems do we want to contribute?
Passion: What do we love to do?
Then, each participant picks one idea in each category and shares it with the group.
Step 4: Vote (20 min)
Participants vote on the items they feel are the most important to help formulate our mission statement.
The group discusses the outcome of the voting.
Step 5: Wrapping up and next steps (10 min)
The moderator should review the highlighted concepts to wrap up the workshop.
The moderator also needs to clarify the next steps:
Combine findings from the workshop to write a couple of mission statements.
Share options with the team to tweak them or vote on the most representative one.
Follow up
The next steps do not need to happen in a meeting or in person; they can be worked on asynchronously.
The moderator needs to explore the intersections between the highlighted elements and synthesize the key takeaways into purpose-statement drafts at the center of the diagram.
The moderator should then share the options with the rest of the team to vote on the most representative one.
The takeaways
This exercise was a huge success with the attendees. An unexpected but valuable outcome of the workshop was how it motivated the team and helped us articulate what unites us. It’s easy to lose sight of the feel-good stories that brought us to where we are today, but taking the time to reconnect with what we love to do and what we want to contribute to the world proved to be a powerful motivator. In our case, it is a love for complex issues and a desire to help others learn about Open Source by sharing our journey.
With more abstract workshops like this one, including leading questions is especially useful in getting participants in the proper mindset. It was also a reference point that kept our eye on the overall goal as we moved through the process.
Structuring a new information architecture
The goals
Define the scope of the sprint and establish clear objectives
Identify the minimum viable product (MVP) for a successful sprint
Discuss prioritization, available resources, and expected outcomes
Align teams on key focus areas to maximize efficiency
The journey
With a clear understanding of our existing content, the necessary changes, and what we wanted to share with the world, it was time to put an action plan in place.
Using insights from our content audit and keeping our mission statement in mind, I began mapping out the content we wanted without considering resource constraints just yet. The team and I focused on refining the ideal information architecture, deciding which content should remain on the website and what should be housed elsewhere, such as Discourse, GitHub, or Canonical’s other platforms. We made sure to discuss the types of audience we thought would be interested in or benefit from the content, and highlighted what would be non-negotiable to provide each audience with valuable, up-to-date information.
This process gave us a solid grasp of both the content we needed and the overall scope of work. It highlighted the areas where we had the most material, what we would need to build from scratch, and what we could leave to later as we consolidate our online presence.
We decided to focus on the design website for this sprint, as it was the area we had the most control over and the biggest challenge to solve. We defined a couple of pages as “priority” based on how fundamental they were to showcasing our work and adding value to the audiences we decided to address.
Given the number of pages we wanted to deliver and the initial estimation of how many people would be able to participate, I set the sprint to last a week.
Extract of our board representing a draft of the information architecture
The takeaways
The information architecture helped the team see which areas needed attention, identify what we could control, and figure out how to distribute the workload. Even without fully defining every page, it allowed us to prioritize effectively. It also helped us address the earlier oversight of not explicitly considering our target audience.
Beyond internal planning, creating a map of content played a key role in our discussions with management. At this stage, I had implicit buy-in that redesigning our online presence was important, but I hadn’t yet received a formal green light. The information architecture helped me move the conversation forward by outlining what was feasible and highlighting potential limitations in concrete terms. This made it easier to begin advocating for the time and resources needed to move ahead.
Planning the design sprint
The goals
Define clear expectations and responsibilities for all participants
Secure final approval from management and partner teams
Establish guidelines and resources to support attendees
Ensure alignment with objectives and project goals
The journey
Building the sprint agenda
Considering the amount of work involved in building the website’s pages, it was essential to clearly define how that work would be divided. Using the information architecture, I focused on determining who would be responsible for each section and assessing the resources available for each area. This process allowed me to outline an “ideal scenario” to start creating an agenda for the sprint.
The team had decided that our MVP would include one page per established section, excluding pages already in good condition, such as those covering brand or accessibility. From there, I outlined the criteria for a high-fidelity wireframe, identified the resources required to achieve it, and worked backward to set daily goals to guide our progress.
The initial ideas for our agenda
Managing stakeholders
With these requirements in mind, I was ready to discuss the final details regarding time and participation from the team with management. Since this effort would temporarily divert designers from their usual tasks, I needed to present a clear case and emphasize the cost-benefit tradeoff.
To help convince them, I designed two types of agendas:
The “ideal” one assumed full participation from everyone throughout the sprint.
The “minimal” one focused more on delivery and accounted for the possibility that some participants might not be able to join as much as planned or might only be available on certain days.
The goal was to showcase the outcome of a full commitment versus a partial one, what would be at risk if we didn’t get commitment from a minimum number of people, and to give managers sufficient information to evaluate how this would impact their reports.
Full agenda vs partial agenda: workshop and presentations vs modular pair work time
Another key group of stakeholders was our web team. We needed to make sure that our work would be published and not remain on Figma files indefinitely. After consulting their project managers, we aligned with their processes so that development could begin immediately after the sprint concluded. We used a spreadsheet to keep track of all the material that needed to be developed, links to relevant assets, and a point of contact to follow up with in case something was missing.
A spreadsheet to follow our progress and keep track of helpful assets for the web team to build the pages
Gathering participants
Once the agenda options were ready, we communicated the dates and objectives of the sprint to the rest of the team. I confirmed with our subject matter experts (SMEs) that they could be present and gave them an outline of what would be great to see from them. I also checked with our “section leads” to confirm their availability. I had planned to get goals from any leads who couldn’t join and have someone cover for them, but luckily, everyone we needed participated.
I shared a board with the details of the sessions and a call sheet for attendance to get an idea of how many people would be joining aside from our SMEs and section leads. Some exercises, like the Round Robin, need a certain number of people to be doable or enjoyable, so knowing who could join was important.
Building resources
Once the agenda was more or less defined, I had a clear list of the resources needed.
The list looked like this:
A centralized place to host resources and guidelines: ensures all team members can easily access key information and stay aligned throughout the sprint.
A centralized place to save materials (drive folder): keeps files organized and accessible, avoiding confusion and duplication of work.
Guidelines for the overall sprint (how to find help, who to ask questions): provides clarity on support channels, minimizing delays and keeping the sprint running smoothly.
Presentation of our goals and mission: establishes a shared understanding of the sprint’s purpose and desired outcomes to keep everyone focused.
Examples of what a deliverable should look like at the end of each day: sets clear expectations for daily progress, helping teams track and measure success.
Guidelines for all exercises: ensures participants understand the structure and objectives of each activity.
Links to existing material for review: saves time and provides context by offering references to prior work or research.
Presentation on UX copy and how to write good content: provides guidance on our copy guidelines to the team.
Library of components to build a page: speeds up the design process and makes sure the outcome is development-ready.
Presentation on visual design for web pages: provides guidance on our visual guidelines and inspiration for creating great layout designs.
Visual resources: offers tools like icons, images, or templates to streamline the design process.
Next steps for implementation: helps the team transition from design to development, ensuring the sprint’s outputs are actionable and ready to execute.
To make the sprint structure easier to manage, I created a centralized board with a skeleton of what the agenda for each day should look like and include.
The skeleton: day overview, morning and afternoon agenda with guidelines and working groups, link to further resources for the session, and an end-of-day demo round-up
Once the overall framework for each day was established, I divided the daily responsibilities among the organization team to streamline the creation of resources. This approach also aimed at making moderation easier, as it assigned different team members to answer questions about specific exercises and guide discussions during moderated sessions, rather than relying on a single person for the entire week.
Finalizing logistics
After organizing the resources and defining our daily objectives, it became easier to identify who we needed, for what purpose, and when. The sprint organization team created tentative groups and sent requests to key individuals to make sure they were available or to ask them to delegate someone in their place if they couldn’t attend. To help with coordination, we set up a shared Google Calendar, added all the sessions with detailed descriptions, and invited the people who initially confirmed. This approach also allowed anyone who initially declined but wanted to participate later to see the schedule and join sessions as needed.
After one last rehearsal with the Open Design working group, we were ready for the start of the sprint!
The takeaways
This phase was the most work-intensive for the moderation team. Estimating how much time and resources were needed to get to where we wanted to be was tricky. However, what I found the hardest was finding a way to deliver on our intense requirements while providing an enjoyable experience for the attendees.
Planning the sprint came with 3 main constraints: delivery, modularity, and overall experience.
Delivery
Creating content is an involved process, and Canonical employees work with strict copy and visual guidelines to ensure everything we share is up to the company’s standards. To maintain consistency and quality, I integrated quality checks directly into the agenda, ensuring that projects were ready for production rather than left unused due to insufficient quality.
To get everyone on the same page, I asked our resident content experts to share guidelines and best practices with the attendees.
And finally, to promote consistency across teams, I scheduled a round-robin style session at the end of each day. This was especially helpful in getting multiple perspectives on the work done and giving everyone an idea of how the other teams structured their pages.
Modularity
One of the most challenging constraints was modularity. Designers are embedded in product teams and are asked to deliver on tight roadmaps, so any time away from product work can be tricky to justify. The design management team asked me to build an agenda that accounted for people coming in and out of the exercise. This constraint required us to develop detailed, easy-to-follow guidelines and resources. It also encouraged us to document our progress and the decisions made along the way. This approach proved invaluable, especially since most attendees ended up not being able to attend the entire sprint.
Experience
On a more personal note, my goal was to make the sprint as engaging and enjoyable as possible by focusing on a few key aspects:
Creating learning opportunities I wanted to ensure that participants could take something valuable away from the experience—whether it was understanding grid systems in visual design, improving copywriting skills, or gaining insight into the roles and challenges of the team members who work on our websites daily. The design team has people with great skills in-house, and I wanted to create an opportunity for them to share their knowledge with the rest of the team. This ended up aligning perfectly with our quality requirements, which was a nice bonus.
Building team relationships To strengthen connections within the team, I wanted to include sessions for smaller group interactions. Since we rarely have the chance to work together as a full team, this was an ideal opportunity for colleagues who don’t usually collaborate to spend meaningful time together. Between the “work together” times and the Round Robin group reviews, every attendee had the opportunity to work with everyone else. As highlighted in the feedback session, being able to work with other designers was one of the best takeaways for attendees.
Prioritizing breaks I wanted to incorporate enough breaks to allow participants time to relax and reset, giving them space to recharge or reflect. In hindsight, we could have scheduled even more!
What’s next?
Figuring out expectations and pulling together the right resources to build solid guidelines took effort, but it laid strong foundations for our sprint. Overall, focusing on clear deliverables and keeping attendees’ experience in mind helped guide our process.
All of the advice I’ve shared so far can be used in any type of sprint, and I hope you found useful takeaways! And if you’re eager to see how all this preparation paid off during the sprint and what came next, be sure to check out part 2 of the series!
Introducing Armbian’s New Cloud Images: Optimized for x86 and aarch64 Cloud Deployments
Armbian is taking a significant step forward in cloud computing support with the introduction of dedicated cloud images. The latest enhancement brings a new ‘cloud’ image set specifically designed to optimize generic x86 and aarch64 deployments, ensuring better performance, security, and compatibility in cloud infrastructures.
Why Dedicated Cloud Images?
As more enterprises and developers deploy lightweight Linux images in cloud environments, the need for a streamlined, cloud-optimized configuration has grown. The new ‘cloud’ images aim to address this by focusing on:
Very Fast Boot Time: Optimized configurations enable boot times as fast as 1-2 seconds, ensuring rapid deployment and responsiveness.
No Firmware Package: By eliminating unnecessary firmware packages, the cloud images remain lightweight and focused on essential cloud functionality.
Optimized Performance: Stripped of unnecessary hardware drivers, the cloud images are leaner, allowing faster boot times and improved efficiency.
Enhanced Compatibility: Fine-tuned configurations ensure that Armbian-based cloud instances run seamlessly across major cloud providers, including AWS and Azure.
Security-Focused Design: With a minimal attack surface, the cloud images reduce exposure to vulnerabilities while maintaining essential features.
Lightweight Image Size: The uncompressed QCOW2 image is approximately 700MB, ensuring minimal resource usage while maintaining full functionality.
Docker Support Enabled by Default: The cloud images come preconfigured with Docker support out of the box, allowing seamless deployment of containerized applications.
Lean Debian Bookworm or Ubuntu Noble User Space: Provided images come with a minimal yet powerful user space based on Debian Bookworm or Ubuntu Noble, ensuring stability and efficiency.
Armbian-Config System Configuration Utility: The powerful armbian-config tool provides an easy way to configure and manage system settings, enhancing user experience and customization.
Customize and Brand Your Cloud Image
Armbian makes it incredibly simple to create custom, cloud-optimized images tailored to your needs. Whether you’re an enterprise or an individual developer, you can brand and configure your Armbian cloud images to match your identity and requirements, keeping an always-fresh Armbian OS under your own brand!
By leveraging the Armbian build framework, you can easily build your own cloud images with custom configurations, packages, and branding. Get started today by visiting the official Armbian build repository.
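As a rough sketch of such a build (the board, branch, and release values below are standard Armbian build parameters, but treat them as illustrative and check the build documentation for current cloud-image options):
git clone https://github.com/armbian/build
cd build
# Build a minimal x86 image suitable for cloud/VM use
./compile.sh BOARD=uefi-x86 BRANCH=current RELEASE=bookworm BUILD_MINIMAL=yes BUILD_DESKTOP=no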
This update makes it easier for developers and businesses to deploy Armbian-based instances in the cloud without unnecessary bloat. Whether running on AWS, Google Cloud, Azure, smaller providers, or locally under QEMU/KVM and Proxmox (a quick local smoke test is sketched after this list), users can expect:
Reduced overhead and better resource utilization
Improved stability with cloud-centric optimizations
A more agile and lightweight system for containerized applications
Faster provisioning and deployment due to the ultra-fast boot times
Seamless container orchestration with pre-enabled Docker support
Optimized memory management with built-in ZRAM support
Customizable and user-friendly system configuration via armbian-config
The ability to build fully customized, branded cloud images with ease
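For a quick local smoke test under QEMU/KVM (a sketch only; the image file name is a placeholder and the memory and CPU values are arbitrary — aarch64 images would use qemu-system-aarch64 instead):
# Boot the cloud image locally under KVM with a serial console
# (depending on the image, UEFI firmware such as OVMF may need to be supplied)
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=Armbian_cloud_x86.qcow2,format=qcow2,if=virtio \
  -nographic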
What’s Next?
Armbian continues to evolve, ensuring that users benefit from its embedded Linux roots, embracing a philosophy of minimalism and efficiency that ensures optimal use of system resources. Community feedback is crucial in refining the cloud images, and we encourage users to test them out and contribute.
This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian’s BSP in Vienna, so here we are. :)
Some of our Q/A jobs failed to run against Debian/trixie; in the debug logs we found:
debug1: kex_exchange_identification: banner line 0: Not allowed at this time
This “Not allowed at this time” pointed to a new OpenSSH feature: OpenSSH 9.8p1 introduced options to penalize undesirable behavior; see the OpenSSH Release Notes and the sshd source code.
FTR, on the SSH server side, you’ll see messages like these:
Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication
This feature certainly is useful and has its use cases. But if you, for example, run automated checks to ensure that specific logins aren’t working, be careful: you might trigger the penalty feature and lock yourself out, and consecutive checks will then no longer behave as expected. Your login checks might fail only because the penalty behavior kicked in; the login you’re verifying might still work underneath, but you’re no longer actually testing it. Furthermore, legitimate traffic from systems that accept connections from many users, or that sit behind shared IP addresses such as NAT gateways and proxies, could be denied.
To disable this new behavior, you can set PerSourcePenalties no in your sshd_config. Further configuration options are also available; see the PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for details.
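For example, a minimal sshd_config sketch (the exempt network below is a placeholder; adjust it to wherever your monitoring or Q/A hosts live):
# Disable the penalty mechanism entirely...
PerSourcePenalties no
# ...or keep it enabled but exempt trusted networks, e.g. your Q/A hosts
PerSourcePenalties yes
PerSourcePenaltyExemptList 10.100.15.0/24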
Watch a conversation reflecting on the 20th anniversary of Git, the version control system created by Linus Torvalds. He discusses his initial motivations for developing Git as a response to the limitations of existing systems like CVS and BitKeeper, and his desire to establish a better tool for the open-source community. Torvalds explains the processes […]
This week’s Armbian update delivers a solid mix of wireless driver enhancements, board refinements, and quality-of-life improvements across the build system. Notably, rtw88 and rtw89 drivers are now enabled for kernel 6.12+, and Radxa Zero 3 receives mainline U-Boot and WiFi upgrades. Rock 3A sees multiple U-Boot tweaks, while NanoPi R3S and Odroid M2 benefit from version bumps and cleanup. Whether you’re maintaining a fleet or tweaking a personal project, there’s plenty in here to streamline your next build.
Kernel & Driver Enhancements
rtw88 & rtw89 wireless drivers are now enabled for kernel 6.12+ configs. By Igor Pecovnik
RTL8812AU driver updated for kernel 6.14.y thanks to upstream merge. By Ricardo Pardini
Reverted hacky RTL8812AU workaround now that upstream is in.
Board Support Updates
Radxa Zero 3
WiFi extension now enabled by default.
Upgraded to mainline U-Boot v2025.01.
NanoPi R3S
U-Boot bumped to v2025.01 for current.
Board config cleanup.
Rock 3A
U-Boot .itb build fixed.
Back-and-forth on code duplication optimizations… but it’s finally clean now. Final Fix
Odroid M2
U-Boot patches updated to v2025.04-rc5.
Cleaned up patch naming and rewrote without functional changes.
Infrastructure & System Improvements
Cloud QCOW2 images now auto-resize filesystem to include 16GB of free space on boot. By Igor Pecovnik
NETKIT support added to all kernel builds 6.7+.
Docstrings added to netkit for clarity. By CodeRabbit AI
Automatic board status synchronization occurred twice this week. #1 | #2
Miscellaneous
armbian-leds.conf added for Radxa E20C
Examples removed from hooks for leaner builds
Vendor colors now included in BSP & artifact hashing
Stay tuned for next week’s highlights!
We’re always looking for more contributors and supporters—whether you’re a developer, tester, or fan. This is the final week to donate to our crowdfunding campaign aimed at boosting tech support and growing the team. Let’s build something great together!
Thanks to our amazing community, we’re rolling out our newest software update for Proxmox Backup Server 3.4. Your feedback has been instrumental in shaping these improvements.
This version is based on Debian 12.10 ("Bookworm") but uses the Linux kernel 6.8.12-9 as stable default and kernel 6.14 as opt-in, and comes with ZFS 2.2.7 with compatibility patches for Kernel 6.14.
Today, Android Automotive OS (AAOS) is the preferred operating system for in-vehicle infotainment among major carmakers. AAOS is a fully functional infotainment platform that supports various applications, features and services, similar to what people are used to on their Android phone.
AAOS helps developers create new applications and features that enhance the driving experience, which has resulted in a rapidly growing ecosystem of compatible apps and services.
Unfortunately, the road to developing robust infotainment systems is long and full of hurdles. Challenges exist in hardware availability, logistics, and system configurations. For example, dependencies on the hardware often lead to problems and delays in testing. What’s more, many hardware variants differ from the developers’ test environment, which makes it difficult to set up a representative test bench.
A cloud-native approach is a powerful answer to the problems of physical hardware: it doesn’t require physical shipping, handling, or setup, it provides consistent environments and is scalable, and it simplifies onboarding. In this blog, we’ll explore the features and benefits of Anbox Cloud, in particular in an Automotive context.
Anbox Cloud: a cloud-native approach to a better infotainment system
As is well known, automotive development often requires extensive testing and validation. This is time-consuming and repetitive work, which is further complicated by issues with inconsistent hardware quality during the system development phase.
Anbox Cloud takes a cloud-native approach to this problem by creating a consistent and secure environment for developing and testing Android-based systems and apps. It decouples development from specific hardware, and thus helps to alleviate the fragmentation and compatibility issues that are common in the automotive industry.
Thanks to this design, developer testing resources become dynamically scalable in the cloud, making it easier to cope with work peaks, e.g. at development milestones or product launches. The development environment remains responsive and performs well even under heavy loads. As a scalable and cost-efficient solution, Anbox Cloud enables automotive development teams to respond more quickly and flexibly to unforeseen developments, changing requirements, and market conditions. And because your teams’ time is precious and cannot be subject to unexpected delays, Canonical can provide 24×7 support for Anbox Cloud installations, in order to ensure smooth use of the technology.
Let’s take a closer look at the many strengths of Anbox Cloud that make it a great fit for many Automotive application development use cases.
Infotainment development efficiency at the lowest cost per user
One of Anbox Cloud’s greatest strengths lies in its origins: it was originally created for cloud gaming, and was optimized to perform well in gaming environments that require fast response times and high reliability. It offers very low latency between the user on a computer and the server running the Android system in the cloud. It is also designed to support a very high number of concurrent users per server, having been tested with more than 100 players connected to a single server running Anbox Cloud. These strengths make Anbox Cloud a great fit for automotive development, which typically requires very low latency for testing against servers and a great user experience. In addition, the very high number of concurrent users supported per server makes for a cost-effective solution.
Using Anbox Cloud in an Automotive software development context is especially impactful when you consider its support for customized AAOS images. It allows developing, debugging, and testing a virtual infotainment device running in the cloud and it supports Android tools (like Android Studio and ADB). It’s available on major public clouds as well as private clouds. It supports both x86- and Arm-based servers as well as GPUs from Intel, AMD, and NVIDIA.
It’s not just stable and powerful, however: it’s also cost effective. For OEMs and suppliers with more than 200 Android developers, Anbox Cloud enables a very low cost per user.
Anbox Cloud for Automotive also provides specific tools and features for infotainment development for Android apps and images. Let’s discuss these functionalities in detail and explore how they help software development, testing and deployment.
Current AAOS images
Support for AAOS images is essential for testing applications and ensuring that they run correctly. Anbox Cloud allows developers to deploy and run reference images from both Android Open Source Project (AOSP) and Android Automotive OS (AAOS). This provides developers with a consistent and stable starting point to implement and test their Android applications, which closely resembles the target hardware and use cases.
Developers run their applications on these Android reference images and detect potential problems, such as bugs and inconsistencies, early in the development cycle. In this controlled environment, they can fix these problems before the apps are deployed on the target hardware. Development teams save time and resources by iterating and testing quickly.
Customizing Android images
OEMs and suppliers do not use unmodified ‘off-the-shelf’ AAOS images. Rather, they customize an AAOS image to their specific requirements. While one team customizes the AAOS image, the teams developing Android applications or in charge of testing need to use that same AAOS image (or an earlier version of it).
Time and consistency are key factors in determining success in such a scenario: the sooner the customized image can be shared with the other teams, the faster the product can be delivered to the market. And the teams need to be able to choose which tailored AAOS image they work on (like a beta version or the latest stable version, depending on their role and needs) so that every team member has access to the exact same environment. Anbox Cloud supports customized Android images built from AAOS and enables the teams to collaborate efficiently to reduce integration risk, improve product quality, and make development faster.
Experimenting with VHAL properties
A defining feature of Android Automotive is the Vehicle Hardware Abstraction Layer (VHAL). Through this interface, applications access and interact with vehicle properties and functions by abstracting the underlying hardware specifications. Enabling developers to manipulate VHAL properties gives them an invaluable tool for simulating various vehicle conditions and behaviours.
Anbox Cloud supports direct modification of VHAL properties, allowing more options for thorough testing and development. Developers can experiment with different scenarios and directly capture how their customized AAOS systems and Android applications respond. Through this level of control, Anbox helps ensure that applications are thoroughly tested and can handle real-world situations.
Sometimes developers need to explore edge cases and stress test their applications. They can do this by tweaking the VHAL properties and simulating extreme conditions such as high speeds or low battery. This simulation ensures that applications remain stable and responsive, and highlights potential weak points and areas for improvement, thus increasing the robustness of the software.
The VHAL properties are available in the Anbox Cloud user interface, which makes it easier for software developers to test their implementation. This VHAL portion of the user interface can be tailored to specific needs, such as adding OEM-specific VHAL extensions. The VHAL properties are also available through APIs, enabling automated testing at scale.
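To give a flavor of this kind of control, here is a sketch assuming an AAOS debug build that exposes the car service shell interface (the property ID is VehicleProperty.PERF_VEHICLE_SPEED; verify the exact command syntax against your AAOS version):
# Simulate the vehicle accelerating to highway speed (value in m/s)
adb shell cmd car_service inject-vhal-event 0x11600207 33.3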
Debugging in the cloud
Software development always requires debugging, and Android is no exception. To allow developers to debug the cloud instance, Anbox Cloud provides end-to-end secure access to the Android Debug Bridge (ADB), the debug environment used by Android. With Anbox Cloud, the developers can continue to use the same Android tools (including Android Studio), connect ADB to their AAOS instance running in the cloud, and perform the usual development and debugging activities (adding breakpoints, debugging step by step, and so on) right from within Android Studio. The tools are the same, they just run on virtual Android devices located in the cloud.
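In practice that can be as simple as pointing ADB at the cloud instance (host and port below are placeholders; Anbox Cloud’s documentation describes how to expose the ADB endpoint securely):
# Attach a local ADB session to the remote Android instance...
adb connect anbox-instance.example.com:5555
# ...then use the usual tooling, e.g. list devices and open a shell
adb devices
adb shell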
Customize Android configurations and screens
Developers can modify Android system configurations and screens to meet specific requirements and use cases. Examples include adapting the user interface to a brand identity, optimizing system settings for specific hardware, and creating and testing niche applications. The end product is precisely tailored to the requirements of different automotive applications; for instance, OEMs that own multiple car brands deploy AAOS-based systems with a different look and feel depending on the vehicle.
Testing all the possible configurations and screens, in all supported languages, requires automation. Anbox Cloud enables project teams to run automated tests, at scale, in a reliable and replicable manner.
Testing Android applications
Testing applications before they are used is mandatory for their success. The requirements for application testing are manifold.
First, every application needs to be tested against every possible configuration of the system, including all supported languages. The testing should also include all VHAL scenarios including when the car is moving or parked.
Also, because the AAOS system has been tailored to the OEM’s requirements, applications need to be tested in their specific context. Whether it is a popular music streaming application or an OEM-specific application, these Android Application Packages (APKs) have to be tested against that specific AAOS system.
Some of these applications could require more interaction with the vehicle than others, and there is a growing trend for apps to adapt their user interface based on vehicle conditions.
Anbox Cloud addresses all of these testing requirements by allowing automated testing at scale. With the new VHAL support in Anbox Cloud, developers have precise control over the interactions between their applications and the vehicle. It is also possible to connect the VHAL to cloud-based virtual ECUs (Electronic Control Units) or pre-recorded data streams through APIs provided by Anbox Cloud.
Optimize DevOps and CI/CD
DevOps and CI/CD methodologies help to improve development cycles. The same methods can also be applied to in-car infotainment systems. Automated testing and deployment processes help to quickly identify and fix problems and ensure that the new code is valid and ready for production.
This rigorous testing process builds confidence in the stability and performance of the software and makes it easier to provide updates and new features. Integrating Anbox Cloud into workflows reduces manual effort and the number of repetitive tasks, allowing developers to focus less on maintenance and more on innovation.
Remote access to Android instances
Anbox Cloud provides remote access to Android instances and management portals, serving as a type of virtual Android desktop infrastructure (VDI). Users interact with the Android system as if they were using it locally. This functionality is particularly useful for remote debugging, feature demonstrations, training, and remote support.
Remote access to Android instances increases the flexibility and responsiveness of development teams. Developers can quickly respond to issues without being tied to a specific location.
Conclusion
In summary, Anbox Cloud is a powerful tool for developers, thanks to its ability to facilitate efficient testing, customization, and deployment of Android environments, and its seamless integration into CI/CD pipelines. Embrace the future of automotive development with the new Anbox automotive features and discover a powerful tool for creating the next generation of automotive infotainment systems and applications.
The 7th update of Sparky 7 – 7.7 is out. It is a quarterly updated point release of Sparky 7 “Orion Belt” of the stable line. Sparky 7 is based on and fully compatible with Debian 12 “Bookworm”. Changes: – all packages updated from Debian and Sparky stable repos as of April 8, 2025 – Linux kernel PC: 6.1.129-LTS (6.14.1, 6.12.22-LTS, 6.6.86-LTS in sparky repos) – Linux kernel ARM: 6.12.20…
Joana Simões, a.k.a. Princesa Leia, returned with plenty of news about the OSGeo universe, conferences with S.U.S.H.I., cute game controllers that can be used for everything, and smart rings; Diogo kept verbally beating Mozilla with a nail-studded plank while reporting the latest Firefox news and giving advice on smartwatches; and Miguel brought a goat.
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different portions depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you’re interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you’ll also be supporting us.
Software for Open Networking in the Cloud (SONiC) is an open-source network operating system that has revolutionized data center networking. Originating as a Microsoft-led initiative in the Open Compute Project (OCP) in 2016, SONiC has rapidly gained traction among hyperscalers and switch hardware vendors, including Broadcom, Cisco, and NVIDIA. By building its services using containerized microservices, SONiC brings flexibility, scalability, and modularity to network infrastructure. Today, SONiC is a project hosted by the Linux Foundation.
Canonical has identified several key strategies to help network equipment vendors and users of SONiC in the enterprise address some of SONiC’s limitations and enhance reliability and performance for their version of SONiC. This article begins by exploring SONiC’s technical architecture, its origins and current limitations. We then cover Canonical’s efforts to improve it, and its potential impact on enterprise data centers.
How SONiC works
SONiC was developed to meet the rigorous requirements of cloud data centers, offering a robust and scalable solution for network management. At its core, SONiC employs a modular architecture that decouples network software from the underlying hardware.
SONiC’s architecture is composed of multiple containerized microservices, each responsible for specific networking functions. This includes infrastructure services like the Dynamic Host Configuration Protocol (DHCP) service, Border Gateway Protocol (BGP) routing services, and Quality of Service (QoS) policies management. By isolating these functions into separate Docker containers, SONiC ensures that each service operates independently, enhancing system stability and flexibility. To manage configuration and network state information, SONiC utilizes a centralized Redis database. This approach ensures consistent state management across all services, enabling seamless communication between the various microservices.
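To make this concrete, here is a sketch based on common SONiC conventions (CONFIG_DB conventionally lives in Redis database 4 and port configuration is stored under PORT| keys; exact numbering and key layout can vary between versions):
# List configured ports in CONFIG_DB
redis-cli -n 4 KEYS 'PORT|*'
# Inspect one port's configuration (speed, admin status, ...)
redis-cli -n 4 HGETALL 'PORT|Ethernet0'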
One of SONiC’s significant advantages is its ability to operate across a wide range of hardware platforms, allowing cloud operators to use the same software stack across devices from different switch vendors. The SONiC community is a diverse ecosystem comprising cloud providers, service providers, hardware vendors, and silicon and component suppliers. With over 850 members, the community collaborates to enhance SONiC’s capabilities and expand its adoption. Additionally, several companies offer enterprise services and support for SONiC, providing users with professional assistance and resources.
SAI: the key component for network operating systems
We can’t discuss SONiC’s inner workings without mentioning the Switch Abstraction Interface (SAI). It is a key component which allows SONiC and other network operating systems (NOS) to run on network equipment from various vendors. As a standardized application programming interface (API), SAI provides a vendor-independent method for controlling network forwarding elements, such as switching ASICs and Network Processing Units (NPUs) that are base hardware components in network devices. SAI abstracts the underlying hardware details and exposes a set of common features that are implemented in the underlying hardware.
SAI offers a consistent set of C-language based functions, which enables developers to program network hardware without needing intimate knowledge of the characteristics of each vendor’s silicon. It is a key enabler for disaggregated networking, as it decouples software from hardware. This separation allows organizations to select switching silicon and NOS independently, preventing vendor lock-in and encouraging software innovation.
As with SONiC, SAI is an open source project, under the Open Compute Project (OCP), with contributions from major industry players such as Microsoft, Dell, Broadcom, and NVIDIA. This collaborative approach ensures SAI evolves to meet the diverse needs of the networking community.
SONiC adoption
Since its initial release in 2016, SONiC has seen significant adoption among hyperscalers, with companies like Microsoft and Alibaba integrating it into their data center infrastructures. Microsoft utilizes SONiC as the default switch OS powering Azure and various other parts of its cloud services, including its AI platform. This integration underscores SONiC’s capability to handle large-scale, demanding environments.
Despite these successes, SONiC’s feature set is still evolving and may not yet comprehensively support all enterprise network use cases. For instance, some key features required for enterprise private data center environments, such as multi-chassis link aggregation (MCLAG) and multihoming EVPN, are absent, still in development, or not thoroughly tested.
Additionally, SONiC’s large image footprint makes it unsuitable for some hardware switches with limited memory. Another notable challenge is SONiC’s reliance on the standard Linux filesystems, which can prove problematic when implemented on devices using eMMC (embedded MultiMediaCard) storage, often found in certain hardware configurations.
Users have also reported issues with operational reliability, as managing and customizing SONiC often requires expert resources, which creates additional complexity for less experienced teams. Furthermore, parts of SONiC’s container management workflow present challenges due to runtime-generated process parameters and container configurations, which hinder service stability and warm-boot capabilities.
Additionally, the community process for improving SONiC has been criticized for being slow, delaying key updates and bug fixes. The fragmented nature of SONiC’s ecosystem also complicates deployment, especially for hardware vendors and Tier-2 cloud providers.
These limitations suggest that while SONiC is a robust solution for hyperscalers, it may require further development and validation to fully meet the diverse needs of enterprise networks.
Canonical’s plan to support SONiC
SONiC traditionally runs on top of Debian, which makes it relatively straightforward to migrate its user space components to run on Ubuntu instead. This transition brings several benefits, as the Ubuntu kernel is more recent than the Debian kernel included in the upstream SONiC distribution (called “Community SONiC”) and is professionally maintained, with more frequent updates to address bugs and security vulnerabilities.
Canonical’s effort to port SONiC over to Ubuntu has already seen significant success. SONiC images are now running effectively on both virtual machines and select hardware platforms from Canonical’s partners, demonstrating the feasibility and stability of this new platform. Thanks to Canonical’s long-term support (LTS) of up to 12 years of maintenance and security fixes for the Ubuntu kernel and operating system, this adaptation enables network equipment suppliers to provide industry-leading maintenance and security updates to their customers, ensuring that their version of SONiC can be supported with the best protection and stability available on the market.
One of Canonical’s key initiatives is the integration of SONiC, and other network operating systems leveraging the Open Network Install Environment (ONIE), with Canonical’s MAAS (metal-as-a-service), in order to streamline device onboarding and enrollment. Watch this space to learn more about what MAAS will provide for networking in the datacenter.
Canonical is also working closely with hardware vendors to improve compatibility and deliver stronger performance on advanced network switches. Recognizing the need for better security, Canonical has conducted an assessment of SONiC-specific security gaps and can apply best practices from its Secure Software Development Lifecycle (SSDLC) to ensure enterprise-grade protection.
Canonical’s efforts also focus on reducing SONiC’s image footprint by improving Docker container layering through the use of chiseled base Ubuntu images and additional optimizations, ensuring the system can function efficiently on hardware with minimal resources. Extensive regression testing, certification lab installations, and robust CI/CD pipelines are available to ensure SONiC versions deployed on popular network hardware platforms are fit for demanding production environments.
The value of collaboration
Canonical’s active collaboration with the SONiC community, independent hardware vendors, and enterprise customers plays a critical role in ensuring SONiC evolves into a stable, secure, and scalable network operating system. By aligning with key industry players and contributing valuable feature enhancements, Canonical aims to accelerate SONiC’s adoption in data centers worldwide. This cooperative effort will empower SONiC to deliver on its promise of providing an open, adaptable, and cost-effective networking solution for the cloud era.
When it comes to protecting your organization from digital threats, who should you trust? Reality dictates that high-resilience IT security is forged from a network of strong partnerships, defense in depth; layered security controls, and regular auditing. Defensive posture needs to be monitored, measured and continuously improved. While vulnerability management has always been a core […]
We are excited to announce that our latest software version 8.4 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.10 "Bookworm" but uses the Linux kernel 6.8.12 and kernel 6.14 as opt-in, QEMU 9.2.0, LXC 6.0.0, ZFS 2.2.7 with compatibility patches for kernel 6.14, and Ceph Squid 19.2.1 as stable option.
I’m pleased to announce uCareSystem 25.04.09, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian, and their derivatives, used by thousands! This release brings some major internal changes, fixes, and improvements under the hood. A new version of uCareSystem is out, and this time the focus is […]
In a world of pervasive adversarial nations building technology we all rely on, the importance of homegrown technology products and solutions cannot be overstated. At Purism, we are unapologetically committed to designing and manufacturing devices that are Made in America—not as a mere tagline but as a foundational principle. This commitment is not just about patriotism; it’s about ensuring security, privacy, and reliability for our customers at a cost that rivals overseas manufacturing.
Canonical, the publisher of Ubuntu, is excited to announce the availability of Ubuntu developer images for the new OrangePi RV2 RISC-V single board computer (SBC). We’re delighted to add this latest piece of hardware to our certified ecosystem, as we’re committed to providing developers and innovators with access to the latest open-source hardware and software. We anticipate that these new Ubuntu developer images will have a big impact and help developers build, prototype, and deploy cutting-edge applications on RISC-V technology.
A low-cost, performant RISC-V SBC
The OrangePi RV2 offers impressive performance in a low-cost, power-efficient form factor. This platform is powered by the KY-X1 octa-core SoC (system-on-chip), supporting RVA22 and vector extensions. The SoC includes a 2 TOPS AI accelerator, meaning that it can perform intensive computational operations, making it an appealing platform for developers focused on AI and machine learning at the edge. By integrating RISC-V technology, the OrangePi RV2 expands the possibilities for creative projects in robotics, IoT, education, and beyond.
The availability of Ubuntu developer images means that users of the OrangePi RV2 can draw upon the latest open source tooling from the Ubuntu ecosystem, while at the same time benefiting from the robustness and stability that Ubuntu brings to novel use cases.
The OrangePi RV2’s combination of octa-core performance, AI acceleration, and the Ubuntu environment unlocks a range of possibilities:
Education and prototyping: Harness the power of an accessible RISC-V platform to teach next-generation computing concepts and enable rapid proof-of-concept development.
Machine Learning at the edge: Offload AI inference tasks to the 2 TOPS accelerator for rapid data processing and real-time decision-making in industrial and consumer applications.
Canonical’s commitment to RISC-V
At Canonical, we believe that it’s important to do our part to help RISC-V succeed and gain acceptance as an open standard. Ubuntu’s availability on the OrangePi RV2 is a testament to the continued collaboration between Canonical and the broader RISC-V community.
The partnership brings all the ease of use, robust tooling and extensive packaging ecosystem that Ubuntu is known for to a new generation of RISC-V devices.
Getting started
Download the image: Head over to the OrangePi website to find the compatible Ubuntu images for the OrangePi RV2.
Prepare your SD card: Flash the image onto an SD card (a typical command-line approach is sketched after these steps).
Boot and explore: Insert the SD card into your OrangePi RV2, power on the board, and enjoy your Ubuntu-powered RISC-V experience! You can immediately begin installing your favorite packages and exploring the vast open-source ecosystem.
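On a Linux workstation, flashing typically looks like this (a sketch only: the image file name is a placeholder, and /dev/sdX must be replaced with your actual SD card device, since writing to the wrong device will destroy its data):
# Decompress if needed, then write the image to the SD card
xz -d orangepi-rv2-ubuntu.img.xz
sudo dd if=orangepi-rv2-ubuntu.img of=/dev/sdX bs=4M status=progress conv=fsync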
Join the community
We believe collaboration and community support drive innovation. We invite you to join the Ubuntu and OrangePi communities to share your experiences, ask questions, and help shape the future of RISC-V. Visit the Ubuntu forums or the OrangePi community pages to meet like-minded developers, contribute tutorials, and get involved in ongoing projects.
Get in touch
If you have any questions about the platform or would like information about our certification program, contact us.
We are thrilled to announce the release of Charmed Kubeflow 1.10, Canonical’s latest update to the widely-adopted open source MLOps platform. This release integrates significant improvements from the upstream Kubeflow 1.10 project, while also bringing a suite of additional capabilities targeted towards enterprise deployments. Charmed Kubeflow 1.10 empowers machine learning practitioners and teams to operationalize machine learning workflows more efficiently, securely, and seamlessly than ever.
Highlights from upstream Kubeflow 1.10
Advanced hyperparameter tuning with Trainer 2.0 and Katib
Kubeflow Trainer 2.0 introduces enhanced capabilities designed to simplify hyperparameter optimization. In combination with Katib, a new high-level API specifically supports hyperparameter tuning for large language models (LLMs), reducing manual intervention and accelerating fine-tuning workflows. Additionally, Katib now supports:
Multiple parameter distribution types, including log-uniform, normal, and log-normal distributions.
Push-based metrics collection mechanism, enhancing performance and simplifying administration.
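Sketched as a Katib Experiment parameter spec, the new distribution support might look like the following (hedged: field names follow the upstream Katib v1beta1 documentation for this release; verify against your installed CRD version, and treat the parameter values as placeholders):
parameters:
  - name: learning_rate
    parameterType: double
    feasibleSpace:
      min: "1e-5"
      max: "1e-2"
      distribution: logUniform   # other supported values include uniform, normal, logNormal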
Improved scalability and flexibility in Kubeflow Pipelines
Kubeflow Pipelines 2.4.1 includes key enhancements such as:
Support for placeholders in resource limits, allowing dynamic and adaptable pipeline configurations.
Loop parallelism with configurable parallelism limits, facilitating massively parallel execution while maintaining system stability.
Reliable resolution of outputs from nested DAG components, simplifying pipeline management and reuse.
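For instance, a minimal KFP v2 sketch of bounded loop parallelism (the component and pipeline names are illustrative):
from kfp import dsl

@dsl.component
def train(shard: int) -> int:
    # Placeholder training step for one data shard.
    return shard

@dsl.pipeline(name="parallel-training")
def parallel_training():
    # Fan out over four shards, but run at most two trials at a time.
    with dsl.ParallelFor(items=[0, 1, 2, 3], parallelism=2) as shard:
        train(shard=shard)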
Next-level model serving with KServe
KServe 0.14.1 introduces powerful features to further streamline model deployment:
New Python SDK with asynchronous inference capabilities.
Stable OCI storage integration for robust model management.
Model caching leveraging local node storage for rapid deployment of large models.
Direct integration with Hugging Face, allowing seamless deployment using the Hugging Face Hub.
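As a sketch of the Hugging Face integration (the manifest shape follows KServe’s published examples; the service name and model ID below are placeholders):
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: hf-bert
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      args:
        - --model_name=bert
        - --model_id=google-bert/bert-base-uncased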
I am very excited to see continued collaborations and new features from KServe being integrated in Kubeflow 1.10 release, particularly the model cache feature and integration with Hugging Face, which enables more streamlined deployment and efficient autoscaling for both predictive and generative models. We are actively working with ecosystem projects and communities like vLLM, Kubernetes WG Serving, and Envoy to tackle the growing challenges of serving LLMs.
— Yuan Tang, Kubeflow Steering Committee member
The Kubeflow ecosystem is also growing – it recently welcomed Spark, and the Feast community is actively working on a donation plan as well.
Feast is the reference open-source feature store for AI/ML, and when combined with Kubeflow, it provides a seamless end-to-end MLOps experience. I am excited to see the two projects working more closely together to unlock powerful use cases, especially for Generative AI and Retrieval-Augmented Generation (RAG). Kubeflow and Feast will enable data scientists to efficiently manage features, accelerate model development, and accelerate getting models to production.
— Francisco Javier Arceo, Kubeflow Steering Committee member & Feast Maintainer
Added value of Charmed Kubeflow 1.10
We don’t just package upstream components; we take the care needed to ensure a seamless production deployment experience for our customers. We develop open source solutions for improved orchestration and integration with ancillary services. And of course, we always take our customers’ feedback into consideration. This is how we have improved Charmed Kubeflow 1.10 even further:
Added an automated and simplified way to manage your Kubeflow profiles via GitOps, with our new GitHub Profile Automator charm (a minimal deployment sketch follows this list). This mechanism allows you to declaratively define your Kubeflow profiles in a single place. It also lays the foundation for a seamless authentication experience with external identity providers, which can be particularly useful when deploying Kubeflow in the public cloud.
We’ve enabled a high availability option for the Istio ingress, to improve the resilience of your deployments and make sure you can handle a high traffic volume with confidence.
You can now leverage more application health-check endpoints and alerting rules for KServe, Istio, and other components. With every release, we strive to provide more ways to monitor the health status of your deployment.
Charmed Kubeflow is more secure than ever. Most of our images are now based on Ubuntu and our Rocks technology, leveraging Canonical’s security patching pipelines and guaranteeing the lowest number of CVEs possible.
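A minimal sketch of the GitOps profile workflow with Juju (hedged: the charm name follows the announcement, but the configuration key shown is a hypothetical illustration; consult the charm’s documentation for the real option names):
# Deploy the profile automator alongside Kubeflow
juju deploy github-profile-automator --trust
# Point it at the Git repository that declares your profiles
# (the option name here is hypothetical)
juju config github-profile-automator repository=https://github.com/your-org/kubeflow-profiles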
Canonical’s AI/ML Ecosystem
Canonical works closely with a broad range of partners to enable open source technology at every scale and in any environment. Charmed Kubeflow runs seamlessly on any CNCF-certified Kubernetes distribution, providing a lot of flexibility to choose the best environment that fits your needs. Additionally, we’re working towards bringing Kubeflow as a managed offering in the public cloud, significantly cutting deployment time and operational costs. For data scientists looking to quickly start experimenting right on their Ubuntu laptops or workstations, our Data Science Stack provides a straightforward, ready-to-use solution. Lastly, we’re developing a robust, standalone model-serving solution built on Kubernetes, ideal for secure, mission-critical deployments and extending reliable inference capabilities even to the edge.
Get started with Charmed Kubeflow 1.10
Whether you’re a seasoned MLOps practitioner or new to Kubeflow, now is the perfect time to experience these enhancements firsthand. Install Charmed Kubeflow 1.10 today and elevate your machine learning workflows.
Explore the full details and installation instructions in our release notes.
Last week we were enjoying springtime; this week winter has made a comeback! Good news on the broken-arm front: the infection is gone, so they can finally deal with the break itself again. I will have a less invasive surgery on April 25th to pull the bones back together so they can properly knit! If you can spare any change, please consider a donation to my continued healing and recovery, or just support my work.
While testing the Beta I came across some crashy apps (namely PIM) due to AppArmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror, and tellico.
Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.10
Thank you! 🙇
My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling.
Thank you! 💚
Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be reinstated 👍
Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏, down from 4.1GB in the 24.04 LTS release.
This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.
Login Window
What didn’t change since the Ubuntu MATE 24.04 LTS?
If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉
I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is tried and true MATE 1.26.2 one last time 🪨
Major Applications
Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊, Celluloid 0.27 🎥, Evolution 3.54 📧, and LibreOffice 24.8.2 📚 See the Ubuntu 24.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
Ubuntu MATE 24.04 is more of what you like, stable MATE Desktop on top of current Ubuntu.
This release rolls up some fixes and more closely aligns with Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.04 LTS
Thank you! 🙇
I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with all the Ubuntu flavour teams and the Ubuntu Foundations and Desktop Teams.
The assistance and support provided by Erich Eickmeyer (Ubuntu Studio), Simon Quigley (Lubuntu) and David Muhammed (Ubuntu Budgie) have been invaluable.
Thank you! 💚
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
At some point during my student days, I wanted to move to a new calculator from a Casio fx991ES and a Texas Instruments TI-68. Both are great calculators, and I still have them, though the TI-68 has fallen victim to a common problem, with columns of its display giving up. My eye was on the TI-89 Titanium. But I was saved by a then-grml project member, who set me onto the one true path of RPN supremacy by going with the then-new HP50g. Thanks gebi! :)
All jokes aside, it is an awesome machine. It has all the features you want, and so much more. And it has an extension SD card slot that accepts SD cards of up to 2GiB, basically infinite storage for any extensions you may want. I never used it much for graphing the values of expressions; I loved the huge screen because it allowed me to see many levels of the system’s stack. One of its killer features is that its keyboard can be totally customised, and the six function keys at the top of the keyboard, which usually operate built-in soft-menus, can also be made to host custom soft-menus. This means you can implement your own commands for whatever tasks you often need to perform, link them into soft-menus, and thus invoke your most common tasks with ease. It’s a pretty awesome input system.
The HP calculators have a bit of a cult-like following, and people over the years have written unbelievable tools for them. One such tool is keyman by Professor Wolfgang Rautenberg, which extends the customisability of the HP50g’s keyboard quite a bit compared to what HP’s standard features offer. In particular, you can bind actions to long-presses on any key of the keyboard. Very nice! I use that for all my custom bindings.
I still own that HP50g, and it is indeed on my desk at work. I also have a copy of that calculator via the absolutely awesome Emu48 Android application. But most of the time I also have a computer in front of me, which means I have access to all sorts of desktop calculators (which I thought all sucked, until I was recently made aware of Qalculate, which looks pretty neat), as well as GNU Octave and Maxima. I still believe that the input method popularised by the classic HP calculators is pretty unrivalled, especially when it comes to quick numeric calculations. Much to my delight, Emacs comes with GNU Calc, a rather advanced scientific calculator whose interface was loosely based on the HP calculators of the time. And it is awesome, sometimes dubbed the poor man’s Mathematica. :)
When loaded, it looks very unassuming. Here is how it looks after calculating 5+6*7 = 47 using its stack:
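For the curious, in calc’s default RPN mode those keystrokes are roughly (RET pushes the pending number onto the stack, and each operator pops its operands):
5 RET 6 RET 7 * +    ; stack: 5 → 5 6 → 5 6 7 → 5 42 → 47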
Compared to the real HP calculators, however, I was always missing the soft-menu feature that puts my most used functions at my fingertips, with a friendly hint about what to press on a button label on my screen. Fear not, though! Since it’s Emacs, there should be a way to add something like that. Right!? Yes. Yes, there is. Here’s how it might look with the default configuration and a soft-menu full of useful constants loaded into it:
It was pretty straightforward, too. I had the basics going in a couple of hours, and over a bunch of evenings I had the system working mostly like I wanted, including support for calc’s algebraic entry mode. Finally, it took me a couple of hours to turn the code I had into a well-behaved minor mode for calc that can be enabled and disabled at the user’s behest. The basic idea is to hijack calc’s mode-line to display an HP-style soft-menu, and to offer a command to fill the menu with life.
With that, it is very easy to construct calculations built upon custom variables as well as function calls. Here’s just a silly example with a multiplication of a bunch of constants from the active soft-menu. Level two has a copy of the original expression, and level one holds the numeric evaluation:
The minor mode also patches calc’s algebraic entry mode, so triggering a softkey will insert a variable or function name into the algebraic entry, similar to how the real HP calculators do it. This brings me one step closer to having a plan B for when my HP50g inevitably expires.
The implementation of calc-softmenu-mode in calc-softmenu.el is part of my Emacs configuration.
We’ve integrated CodeRabbit into our code review process, and the results have been excellent. The AI-assisted reviews help us maintain higher code quality, improve efficiency, and streamline development. We’re happy with the improvements and look forward to continued enhancements.
FriendlyElec and Armbian Announce NanoPi R3S Giveaway!
FriendlyElec, a Platinum Partner of Armbian, is giving away the NanoPi R3S in collaboration with Armbian! Whether you need a new router, server, or IoT device, this could be your next ideal solution. To enter, grab your free raffle ticket today and get a chance to win! For details, visit FriendlyElec’s Nanopi R3S giveaway page.
Armbian Images for Raspberry Pi Now on Official RPi Imager
Great news for Raspberry Pi users! Armbian images are now available directly through the official Raspberry Pi Imager, making installation easier than ever. This integration brings Armbian’s optimized OS to the Raspberry Pi, offering better stability, security, and performance.