May 20, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Simon Quigley: Donuts and 5-Star Restaurants

In my home state of Wisconsin, there is an incredibly popular gas station called Kwik Trip. (Not to be confused with Quik Trip.) It is legitimately one of the best gas stations I’ve ever been to, and I’m a frequent customer.

What makes it that great?

Well, everything about it. The store is clean, the lights work, the staff are always friendly (and encourage you to come back next time), there’s usually bakery on sale (it just depends on the location), and the list goes on.

There’s even a light switch in the bathrooms of many locations that you can flip if a janitor needs to attend to things. It actually does set off an alarm in the back room.

A dear friend of mine from Wisconsin once told me something along the lines of, “it’s inaccurate to call Kwik Trip a gas station, because in all reality, it’s a five star restaurant.” (M — , I hope you’re well.)

In my own opinion, what really matters is that they have an espresso machine. ;)

I mentioned the discount bakery. In reality, it’s a pretty great system. To my limited understanding, bakery items that are older than “standard” but younger than “expiry” are set to half price and put towards the front of the store. In my personal experience, the vast majority of the time, the quality is still amazing. In fact, even if it isn’t, the people working at Kwik Trip seem to genuinely enjoy their job.

When you’re looking at that discount rack of bakery, what do you choose? A personal favorite of mine is the banana nut bread with frosting on top. (To the non-Americans, yes, it does taste like it’s homemade, it doesn’t taste like something made in a factory.)

Everyone chooses different bakery items. And honestly, there could be different discount items out depending on the time. You take what you can get, but you still have your own preferences. You like a specific type of donut (custard-filled, or maybe jelly-filled). Frosting, sprinkles… there are so many ways to make different bakery items.

It’s not only art, it’s kind of a science too.

Is there a Kwik Trip that you’ve called a gas station instead of a five star restaurant? Do you also want to tell people about your gas station? Do you only pick certain bakery items off the discount rack, or maybe ignore it completely? (And yes, there would be good reason to ignore the bakery in favor of the Hot Spot, I’d consider that acceptable in my personal opinion.)

Remember, sometimes you just have to like donuts.


Have a sweet day. :)

20 May, 2025 12:57PM


hackergotchi for Volumio

Volumio

Coming Soon: Volumio Pro, A New Audio Dimension for the Professional Market

Volumio is about to take a bold step into new territory. After years of serving passionate audiophiles and high-end audio manufacturers with cutting-edge streaming technology, we’re proud to announce the next chapter in our journey:

Introducing Volumio Pro

Launching this fall, Volumio Pro is our new offering tailored for the pro-audio market: a powerful, flexible streaming platform designed to meet the needs of commercial spaces, hospitality, retail, wellness, and beyond.

Volumio Pro brings together the best of our technology: seamless streaming, robust control systems, and premium audio performance with tools designed specifically for professional environments.

But there’s more.


Meet Volumio Soundcraft, Your AI-Powered Audio Curator

At the heart of Volumio Pro is Volumio Soundcraft, our proprietary AI-powered DJ and content curator based on our Corrd technology. Volumio Soundcraft doesn’t just shuffle songs—it crafts smart, context-aware audio programs based on:

  • Mood and Ambience
  • Time of Day
  • Commercial Strategy
  • Customer Flow and Activity
  • User-Defined Playlists & Themes

Volumio Soundcraft is built to help businesses maximize engagement and tailor sonic experiences, with the ease of automation and the power of data.

The Volumio Soundcraft engine brings a revolutionary edge to professional audio environments with its AI-based DJ capabilities, delivering intelligently curated music through auto-generated playlists and programs powered by Corrd. Designed for seamless brand alignment, it fine-tunes music schedules to reflect your unique customer profile—adjusting the tone by the time of day, season, weather, or even shopping behaviour. Whether you’re managing a boutique retail space, a trendy café, or a luxury hotel chain, Soundcraft ensures a consistent, fashionable, and emotionally resonant sound atmosphere—customized by room, guest type, or location. It’s more than background music; it’s a smart soundtrack that speaks to your brand.

Volumio Soundcraft adapts to your audience, environment, and brand identity—seamlessly, stylishly, and smartly.


A Sneak Peek Is Coming

Get ready: we will soon unveil the first images of the Volumio Pro device, including a close-up of its rear panel, packed with the professional-grade connectivity you expect.

Whether you’re a venue manager, retailer, or integrator, Volumio Pro is designed to seamlessly integrate into your AV system, offering rock-solid reliability and unmatched flexibility.


Key Features:

  • Powered by Volumio OS: The trusted audio streaming platform, optimized for pro use
  • Volumio Soundcraft AI Engine: Smart audio programming at your fingertips
  • Multi-Zone & Remote Management: Control multiple spaces from a central dashboard and APIs
  • Professional I/O: XLR, digital, analogue, and network connectivity
  • Cloud Integration: Real-time updates, remote configuration, and analytics
  • Commercial Licensing & Support: Built for business, backed by Volumio

Coming Fall 2025

Volumio Pro will officially launch this fall. With it, we’re not just entering a new market, we’re opening a new dimension in audio.

For more information, contact us.

Stay tuned for the reveal.
A new sound is coming.

The post Coming Soon: Volumio Pro, A New Audio Dimension for the Professional Market appeared first on Volumio.

20 May, 2025 08:26AM by Volumio (leo@volumio.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What is geopatriation?

The world is changing every day. From geopolitical shifts to legislation like the GDPR, which restricts where data can be processed, these changes create a complex and uncertain landscape in which data storage, processing, and cloud services could come to a sudden halt or suffer heavy disruption overnight. As a result, organizations are increasingly interested in potential routes for shifting cloud services to safer alternatives closer to their country of operation.

Recently, a term has appeared in the cloud services and cloud repatriation circles: geopatriation. But what is geopatriation, and how does it fit into adjusting your cloud services and infrastructure to meet new legal and compliance requirements?

In this article, we’ll define geopatriation, learn how it fits into cloud repatriation and recovery, and explore the best approaches for geopatriation. But first, let’s consider a vital related concept: cloud repatriation.

What is cloud repatriation?

Before we begin, we’ll quickly define an associated term that is easily confused with geopatriation: cloud repatriation. 

Cloud repatriation is the process of migrating applications from public clouds back to your own infrastructure. Such infrastructure can either be located on-premises or hosted by a data centre provider. It can be a private cloud, a simple virtualisation environment or even legacy IT infrastructure. The main purpose and marker of cloud repatriation is breaking the dependence on the public cloud provider. 

There are many reasons to repatriate your cloud services. One of the most common is that public cloud infrastructure usage can be very expensive, and its cost is only increasing. Another reason is the sensitive or highly regulated nature of certain kinds of data – especially where mounting regulatory compliance restricts how and where that data can be gathered, stored, or processed. After all, not all confidential data should be stored in public clouds. And finally, migrating applications to public clouds might lead to performance degradation in some parts of the world, due to low bandwidth and high latency. In such regions, local cloud infrastructure (either public or private) just performs better. In some cases, private cloud infrastructure is also more resilient than public cloud services, as outages still occur in these services, and you have no direct control over their resolution.

If you want to read more about cloud repatriation, why organizations do it, and what options are available, you can read more in our detailed article on our blog. 

Cloud repatriation can be a tricky term to pin down, as its meaning shifts depending on the context of its use. For example, in infrastructure as a service (IaaS) and platform as a service (PaaS), cloud repatriation refers to different processes. For this reason, many people presume that “cloud repatriation” means “cloud migration reversal”, meaning the reversal of a migration of workloads from data centres to cloud IaaS.

There are three general situations where you would perform cloud repatriation. 

  1. Undoing a full-scale cloud migration
  2. Replacing existing cloud solutions with an in-house IT solution
  3. Recovering from errors or other minor issues in your databases, personnel, or hosting

What is geopatriation?

Geopatriation is a related concept, but a little different from cloud repatriation. The term was coined by Gartner® earlier this year in its 2025 research on how to protect geopolitically risky cloud workloads, in which “Gartner defines geopatriation as the relocation of workloads and applications from global cloud hyperscalers to regional or national alternatives due to geopolitical uncertainty.”1 Geopatriation refers broadly to repatriation efforts driven by specific geographic or territorial requirements, limitations, or risks affecting cloud infrastructure and data storage, processing, or other services. As with sovereign clouds, the aim of geopatriation is to control and own cloud infrastructure located in a specific territory, under a clear legal jurisdiction.

Learn more about sovereign cloud infrastructure

Geopatriation is one of the many strategies that organizations can pursue to protect their cloud workloads, and is a form of cloud repatriation.

Generally speaking, there are five options for protecting cloud workloads that face geopolitical risks or related disruption:

  1. Reinforcement: you continue services with the hyperscaler but reinforce your cloud environment with further failsafes (for example, localized storage and processing, or additional security features like firewalls).
  2. Redeployment: you continue services with the hyperscaler but redeploy your most at-risk workloads to a different cloud setup (i.e. one that falls within new requirements due to regulation or sanctions).
  3. Removal: you remove your at-risk workloads from the hyperscaler and redeploy everything to a local cloud provider.
  4. Repatriation: you move all of your workloads to an on-premises solution.
  5. Acceptance: you accept the risks of disruption and make no changes.

The “removal” and “repatriation” options are both forms of geopatriation – moving your cloud workloads to your local vicinity or country.

It’s important to note that geopatriation and cloud repatriation are related, but not interchangeable. Cloud repatriation refers broadly to moving cloud services from public infrastructure back to private infrastructure, while geopatriation is a geographically motivated form of cloud repatriation.

Why is geopatriation a growing topic of interest?

Whether because of conflict, changes in international trade rules, or increasing political tensions, the world is becoming more uncertain. This geopolitical uncertainty raises a critical question for organizations delivering – or using – cloud services: how do you guarantee services will remain uninterrupted, when they depend on infrastructure or companies that are spread across the geopolitical landscape?

Here are a few example cases of why geopatriation is a topic of growing interest:

  • New legislation (such as data localization laws) that requires storage or processing within geographic boundaries (for example, EU GDPR or UK GDPR)
  • New laws or agreements (such as Data Processing Agreements) that introduce limitations or difficulties in using international service providers
  • New cybersecurity standards that make it burdensome to pass assessment in using international providers, or which mandate that private or sensitive data be stored and processed locally
  • Conflicts and trade disputes that prevent the use of particular cloud service providers or infrastructure in specific geographies

In short, geopatriation is a topic of growing interest because providers and users of cloud services are concerned that geopolitical events put their hyperscale public-cloud-integrated IaaS and PaaS services at risk.

How do you perform geopatriation?

As mentioned above, geopatriation can be performed by removing or repatriating cloud resources. 

In both cases, you would require private cloud, on-premises cloud, or bare-metal infrastructure to take over your cloud workloads. Generally speaking, you would need to explore various bare-metal infrastructure options, compare the costs of private cloud setups against localized cloud hosting services, and assess the functionality and scalability of your options.

If you’re exploring your options, we recommend you visit our dedicated cloud infrastructure webpage, which demonstrates how our wide range of open source infrastructure solutions and enterprise services can be used to build powerful, reliable, and entirely independent cloud services. 

Learn more about Canonical’s Infrastructure solutions.

Works cited

  1. Gartner, Quick Answer: Protecting Geopolitically Risky Cloud Workloads, Lydia Leong, Alessandro Galimberti, 21 March 2025

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Further reading 

How to build a sovereign cloud with Canonical

[Case study] Learn how Phoenix Systems created a hyper-secure OpenStack cloud with a focus on data sovereignty and data protection 

What is a sovereign cloud?

[Case study] OneUptime takes back its servers and saves $352,500 a year with Canonical infrastructure solutions

20 May, 2025 08:11AM

hackergotchi for Volumio

Volumio

Test Drive: Running Volumio on a Raspberry Pi Inside the Hypex DIY Preamp

Test Drive: Running Volumio on a Raspberry Pi Inside the Hypex DIY Preamp
By: A DIY Audio Geek with a Love for Good Sound and Clean Code


There’s something uniquely rewarding about merging high-quality audio hardware with flexible, open-source software. That’s exactly what I explored this week: Hypex’s new DIY preamp kit, brought to life with a Raspberry Pi running Volumio OS. The aim? To see how this hybrid setup performs in practice and enjoy some hands-on tech fun along the way.

First Impressions and Setup

The Hypex DIY preamp made a great first impression, delivering on its promise of high-end, modular audio. While the kit I used didn’t include standoffs for securing the Raspberry Pi to the chassis (you can ask for them), this was easy to work around with a few extras from my toolbox. The included cabling is designed for versatility, and though I opted for longer wires from my collection, a more tailored solution could easily be added as an option for a cleaner look.

Aesthetics aside, the actual integration was refreshingly straightforward. Hypex clearly designed the preamp with Raspberry Pi compatibility in mind, and their manual made the setup process smooth and intuitive.

Volumio + RPi = Out-of-the-Box Success

After flashing the standard Volumio image to an SD card and powering up the Raspberry Pi, everything came together quickly. A quick reminder: to enable the USB audio interface on the DIY preamp, simply press the knob to power it on and set the input to “Network.” Once that’s done, Volumio instantly detects the interface and is ready to play.

No extra configuration was needed; Volumio and the Hypex preamp paired seamlessly. It’s also reassuring to see the Pi 5 running without any low-voltage warnings, confirming the power supply is more than capable of supporting this setup.

Testing the Sound

For this test, I focused on the headphone output, as the preamp provides balanced analog output via XLR and I currently don’t have a matching amp at home. Still, the headphone experience was excellent, delivering clean, articulate, and energetic sound that’s a joy to listen to.

There are a couple of behavior notes worth sharing for future users:

  • Input Switching and USB Audio
    When switching away from the “Network” input, the USB audio interface temporarily disconnects from the Raspberry Pi, prompting a message in Volumio. Once you switch back, everything resumes normally. This could be a great area for future firmware refinement to ensure an even smoother experience.

  • WiFi Performance Inside the Chassis
    The Raspberry Pi’s WiFi signal is limited due to the full-metal enclosure—an expected challenge with enclosed electronics. However, for a stable connection, Ethernet is a highly reliable option. Users who prioritize wireless connectivity might consider a Raspberry Pi CM4 setup, which I’ll describe below.

Geek-Level Upgrade: CM4 + SSD = Refined and Ready

For those looking to take things up a notch, I tested a Raspberry Pi CM4 with onboard WiFi, mounted on a Waveshare CM4IO-BASE-A board and booting Volumio from an NVMe SSD.

This combination really shines with the Hypex preamp. It enables external WiFi antennas (a small hole in the back panel allows for clean integration) and supports faster, more robust storage options like eMMC or NVMe. The result? Quicker boot times, enhanced system reliability, and a polished user experience.

Suggestions for Hypex

If Hypex is considering future updates, here are a couple of opportunities to further elevate the user experience:

  • Consider maintaining USB audio connection consistency when switching inputs—this would make the integration with platforms like Volumio even more seamless.

  • Offering the CM4 + Waveshare baseboard as an officially supported configuration could appeal to power users looking for enhanced wireless and storage capabilities.

Final Thoughts

All in all, the combination of Volumio and the Hypex DIY Preamp makes for an incredibly capable and enjoyable system. Setup is easy, performance is strong, and the audio quality delivers real satisfaction. Whether you’re an audiophile maker or a developer who loves clean code and great sound, this pairing offers a lot to be excited about.

Highly recommended, and I’m looking forward to what’s next in this open-source-meets-high-fidelity journey.

Stay tuned for more hands-on audio experiments.

NB: More details about the HYPEX DIY pre-amp

The post Test Drive: Running Volumio on a Raspberry Pi Inside the Hypex DIY Preamp appeared first on Volumio.

20 May, 2025 07:37AM by joel

hackergotchi for ZEVENET

ZEVENET

TLS Certificate Management with Let’s Encrypt

TLS/SSL certificates are essential for securing web communications by encrypting the data exchanged between clients and servers. The rise of Let’s Encrypt—offering free, automated certificates—has made it easier for organizations of all sizes to secure their websites and services without added costs.

However, a recent change by Let’s Encrypt has significantly impacted the day-to-day operations of thousands of system administrators and infrastructure teams: the end of expiration notification emails for certificates.

What is Let’s Encrypt and How Does Certificate Issuance Work?

Let’s Encrypt is a certificate authority that provides free TLS certificates through an automated process. Its success is based on a few key principles:

  • Automation: Certificates are issued and renewed using tools like Certbot, which are integrated into servers and platforms.
  • Short validity period: Certificates are valid for 90 days, encouraging frequent renewals and helping maintain up-to-date security.
  • Validation process: To issue a certificate, domain ownership is verified—typically via DNS or HTTP challenge methods.

Using these certificates in combination with automation makes it easier to secure multiple websites and services while reducing operational workload.

Changes to Let’s Encrypt’s Notification Policies

Until recently, platforms and tools managing Let’s Encrypt certificates sent expiration alerts—notifications that allowed administrators to renew certificates before they expired.

However, Let’s Encrypt recently announced it would discontinue these notification emails to cut costs and improve infrastructure scalability. As a result, administrators and platforms that relied on these alerts must now implement their own monitoring, renewal, and certificate management mechanisms.
This change has several implications:

  • Manual certificate management becomes more error-prone.
  • Automation is now essential for maintaining secure systems.
  • Organizations must adopt dedicated monitoring and auto-renewal solutions.
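As a minimal sketch of what such monitoring can look like (the function names are illustrative, not part of any product mentioned here), a script can parse the `notAfter` line printed by `openssl x509 -noout -enddate` and flag certificates approaching expiry:

```python
from datetime import datetime, timezone

# Date format used by `openssl x509 -noout -enddate`,
# e.g. "notAfter=May 20 12:00:00 2026 GMT"
OPENSSL_DATE_FORMAT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after_line: str, now: datetime) -> int:
    """Return whole days remaining before the certificate expires."""
    value = not_after_line.split("=", 1)[1].strip()
    # %Z matches "GMT" but leaves the result naive, so attach UTC explicitly.
    expires = datetime.strptime(value, OPENSSL_DATE_FORMAT).replace(
        tzinfo=timezone.utc
    )
    return (expires - now).days

def needs_renewal(not_after_line: str, now: datetime, threshold: int = 30) -> bool:
    """Let's Encrypt certificates live 90 days; renewing with roughly
    30 days left is the commonly recommended cadence."""
    return days_until_expiry(not_after_line, now) <= threshold

if __name__ == "__main__":
    now = datetime(2026, 5, 1, tzinfo=timezone.utc)
    line = "notAfter=May 20 12:00:00 2026 GMT"
    print(days_until_expiry(line, now))  # 19
    print(needs_renewal(line, now))      # True
```

In practice you would feed this from a cron job that loops over your certificate files and raises an alert (email, webhook, dashboard) whenever `needs_renewal` returns true.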

The Role of DNS Providers in Automatic Renewal

To address this new scenario, Let’s Encrypt recommends using automation tools or DNS providers that support automatic certificate validation and renewal. This is done through a mechanism known as DNS-01 challenge, where domain ownership is proven by creating temporary DNS records.
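To make the DNS-01 mechanism concrete, here is a minimal sketch of the temporary TXT record an ACME client publishes, following RFC 8555 §8.4. The key authorization string below is a made-up placeholder; in practice your ACME client derives it from the challenge token and your account key thumbprint:

```python
import base64
import hashlib

def dns01_record(domain: str, key_authorization: str) -> tuple[str, str]:
    """Per RFC 8555 §8.4, the CA looks for a TXT record at
    _acme-challenge.<domain> whose value is the base64url-encoded
    SHA-256 digest of the key authorization, with padding stripped."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"_acme-challenge.{domain}", value

if __name__ == "__main__":
    # Placeholder key authorization, for illustration only.
    name, value = dns01_record("example.com", "sample-token.sample-thumbprint")
    print(name)   # _acme-challenge.example.com
    print(value)  # 43-character base64url digest
```

Because the record lives under the domain you control, this is also how wildcard certificates are validated, which is why DNS provider integration matters so much for automation.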

Many DNS providers now offer integrations that allow automatic certificate renewal, enabling you to:

  • Renew certificates without manual intervention
  • Trigger internal or external alerts when a renewal fails
  • Manage wildcard certificates, protecting multiple subdomains with a single certificate

Let’s Encrypt maintains a public list of DNS providers compatible with this functionality, and it is up to each organization to choose a solution that guarantees continuity and security.

SKUDONET: Centralized Certificate Management with Let’s Encrypt

SKUDONET natively integrates Let’s Encrypt certificate management into its platform, offering automated renewals, DNS-based validation, and support for Wildcard certificates—all from a centralized, user-friendly interface.

This means our users no longer need to rely on manual tasks, external scripts, or email reminders to keep certificates valid.

Key features of SKUDONET’s SSL module include:

  • Built-in Let’s Encrypt integration: Issue and renew certificates automatically from within the platform—no additional configuration required by the user.
  • Auto-renewal flag: When the auto-renewal option is enabled, the system will renew certificates automatically before they expire, seamlessly and without user intervention.
  • Support for 7 DNS providers, including Cloudflare, AWS Route 53, Google, Infoblox, acme-dns, Azure DNS, and, since version 10.0.10, Infomaniak.
  • Easily add new DNS providers: Our architecture is designed to be flexible. We can quickly add support for new DNS providers without service interruptions, based on user demand.
  • Wildcard certificate support: Using DNS validation, SKUDONET allows issuance of certificates like *.yourdomain.com, securing all subdomains with a single certificate.

Community vs. Enterprise Edition: Certificate Management Capabilities

Both editions support Let’s Encrypt integration, but their capabilities differ significantly in terms of automation and management:

  • Community Edition: Allows manual issuance of Let’s Encrypt certificates. Suitable for simpler environments or a small number of domains. Does not support auto-renewal, DNS provider integration, or Wildcard certificates.
  • Enterprise Edition: Built for demanding environments, it includes a full-featured certificate management module: automated renewals, Wildcard support, DNS provider integration, and centralized web-based management.

These differences make the Enterprise Edition the recommended choice for organizations managing multiple domains or requiring high availability.

Let’s Encrypt’s decision to stop sending expiration reminders poses a challenge—especially for enterprise environments managing numerous domains and subdomains. With the rapid growth of digital services, infrastructure teams can no longer rely on manual tracking of certificate expirations.

Automation and centralized management are now essential to reduce risk and boost operational efficiency.

Try SKUDONET Enterprise Edition free for 30 days and discover a scalable, secure, all-in-one solution with centralized certificate management, load balancing, and advanced security features.

20 May, 2025 05:59AM by Nieves Álvarez

hackergotchi for Tails

Tails

Tails 6.15.1

This release is an emergency release to fix important security vulnerabilities in Tor Browser.

Changes and updates

Fixed problems

  • Fix the Unsafe Browser appearing in the window list with the Tor Browser icon. (#20934)

  • Make reporting an error using WhisperBack more robust. (#20921)

  • Fix USB tethering. (#20940)

For more details, read our changelog.

Get Tails 6.15.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 6.0 or later to 6.15.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 6.15.1 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 6.15.1 directly:

20 May, 2025 12:00AM

May 19, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 892

Welcome to the Ubuntu Weekly Newsletter, Issue 892 for the week of May 11 – 17, 2025. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Upcoming Meetings and Events
  • Call for organizers to help build UbuCon Korea 2025!
  • Ubuntu 25.04 Release Party @ Taipei
  • LoCo Events
  • Anbox Cloud 1.26.0 is released
  • Ubuntu Desktop 25.10 – The Questing Quokka Roadmap
  • Rooming with Mark
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • Cristovao Cordeiro (cjdc) – Rocks
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


19 May, 2025 10:13PM

Simon Quigley: Coffee and Adapting to your Environment

This morning, I went to make my usual cup of coffee. I was given an espresso machine for Christmas, and I’ve developed this technique for making a warm drink that hits the spot every time.

I’ll start by turning on my espresso machine and starting a single shot of espresso. It dispenses and drips while I’m working on the other parts.

I then grab a coffee cup. Usually one of the taller ones. For maybe the bottom inch or two of the cup, that gets sugar and chocolate milk. Microwave for 45 seconds, pour in the espresso, then wash out the actual espresso from the metal cup with milk. Pour all of that in, another 45 seconds in the microwave, a few quick stirs, and you’re all set.

To the actual baristas out there, that probably sounds horrible. It probably sounds like the worst possible recommendation for a morning coffee ever.

But, you know what? It works.

So, I went to put my coffee into the microwave today, and I realized that someone else had put the glass plate for the microwave into the sink after accidentally spilling their breakfast on it.

Instead of saying, “well, I’m not going to have my coffee this morning,” I grabbed a large plate. I remembered the physics of levers from high school, and I understood that if I balanced everything just right, it would heat my coffee up.

And well, here I am. With an un-spilled coffee and a story to tell.

My point here is actually pretty simple, and this is before I even read any messages for the day. People with much more formal educations sometimes look at the guy engineering coffee with his microwave and think, “what is this guy doing?!?”

All I’m doing is making a really good cup of coffee. And to be honest, it tastes amazing.

That’s all. Have a wonderful day.

19 May, 2025 12:30PM

Ubuntu Blog: Rethinking virtualization: open source alternatives for resellers

If you’re a technology reseller in today’s uncertain software virtualization market, you and your customers are probably actively exploring options for virtualized environments. Of course, finding the perfect alternative is easier said than done: it needs to be scalable, easy to migrate to, cost efficient, and provide the same technical capabilities as preexisting virtualization solutions. That’s a seemingly tall order – but luckily, there are a range of options.

In this blog, we’ll take a look at the choices that are available to everyone searching for alternatives to mainstream hypervisor, storage, and other cloud services.

The three main options for virtualization

When your potential customers are affected by the events in the virtualization market, they have three main responses they can take:

  1. Do nothing (the full-stack solution user)
  2. Lessen the risk exposure by exploring alternatives for non-critical workloads (the multi-hypervisor user)
  3. A full migration away (the full-on replacement user)

Let’s review these routes in more detail, before exploring the specific options that are available to organizations that choose each of those paths.

1) The full-stack solution user

These are typically organizations that have invested significantly into their current virtualization software solution and who cannot, or do not wish to, move away from their existing virtualization services. 

This comes with the benefit of not having to do anything: no full-scale migration, no new engineering or maintenance, and no juggling a multi-provider model. At worst, this route’s most difficult process would be switching providers of a specific solution, rather than replacing the full stack.

However, doing nothing is a choice and a trade-off in itself: with this route, they will still have to absorb potential cost increases, accept drastically different solution packages or licence agreements, and remain anchored to the vendor. What’s more, this group runs the risk of being burnt twice by the circumstances that led them to make that choice: because they haven’t changed the fundamental design of their solution, they’ll face unwanted consequences again if there are further changes to their package, billing, services, or cost.

Generally speaking, this buyer segment will not actively be considering alternatives.

2) The multi-hypervisor user

This category refers to organizations that want to keep their options open and have more virtualization services at their disposal – either to reduce risks or costs. 

This solution is a middle-of-the-road compromise. With this option, organizations keep their deployments in operation and stave off a full-scale migration (which takes time and planning to get right the first time), while exploring and testing cost-saving alternatives on the broader virtualization marketplace.

Of course, this route still comes with its challenges: while the organization’s systems or services are no longer completely dependent upon a single monolith or service, the majority of their functionality remains anchored to one provider. 

Similar to doing nothing, this means that the organization will still face unwanted challenges if there are further changes from the primary service provider. It also carries risks of shortfalls and poor integration in the long term – plus, these organizations will need to consider how they’ll manage security updates, maintenance, and compatibility of their new, multi-solution setup in the years to come.

Given that these organizations are still at risk of further challenges in licensing, cost, product availability, and so on, this route is often seen as a simple time-buying exercise, allowing the business to remain stable while assessing potential exits. Solutions that give them that stability, but which also hold the promise of easy migrations down the road, seamless technology compatibility, and full feature integration will be of great interest to this segment.

3) The full-on replacement

This third category refers to organizations who react decisively to avoid steep licence fees or renewal costs. In this category, they’re not looking to test alternatives or farm out non-critical infrastructure or services to third parties; they’re looking to jump entirely to a new provider, or even an entirely new model of virtualization. 

The challenges of this third group are clear to see: full-scale migrations are time-consuming and can be expensive, depending on the complexity of the project. In many cases, such a migration would carry risks of downtime, both in customer- or service-facing terms (where systems are down as the data and infrastructure is migrated) and in resource terms (where entire developer teams would be shifted away from normal business or product work). 

However, for many organizations, the costs of a migration would be returned multiple times over, given the existing costs associated with increasing licence fees and billings for traditional virtualization services. What’s more, if they choose open source alternatives, their systems (and services) would be free of vendor lock-in, reducing the likelihood of future large-scale migrations. 

There’s also the growing sophistication and ease of use of virtualization services to consider. The broad monopoly on virtualization services has largely eroded, thanks to advancements by competitors in both the proprietary space and open source communities. Organizations exploring this option will generally be looking for technologies that offer a simplified migration process, the same power and ease of use as their previous solution, and the reduced costs they enjoyed prior to the market disruption.

So, now that we understand the three main responses that organizations could take when responding to shifts in the virtualization software landscape, let’s explore how you as a channel reseller could respond to these opportunities. 

In one of our case studies, an organisation was able to save $352,500 a year by migrating to bare metal infrastructure using Canonical’s open-source technologies, representing a reduction of 76% of their cloud costs.

Download the case study

What are the open source alternatives for organizations of different sizes?

There are several avenues available for organizations who are exploring open source alternatives for virtualization. Let’s explore them based on the size of the company.

Options for medium to large private clouds

OpenStack as a powerful alternative

OpenStack has become an extraordinarily powerful and straightforward tool for deploying private cloud resources. This open source project has been around for more than 14 years, and in that time it has grown significantly: as of 2025, OpenStack runs on 45+ million CPU cores, is built by 560+ supporting organizations and a community spanning 180+ countries, and powers 300+ public cloud data centers.

OpenStack offers a simple deployment and robust backbone for reliable services, making it a great option for channel resellers looking for alternatives. 

Canonical’s version of OpenStack can be combined with other technologies in Canonical’s portfolio to deliver robust cloud infrastructure, including MAAS for bare metal server provisioning, and Kubernetes for container management.   

Canonical OpenStack allows enterprises to take advantage of cloud-based operations, while retaining all the benefits of owning their own computing infrastructure – without hidden licence fees.

Learn more about Canonical OpenStack

Canonical’s infrastructure solutions are compliant with FIPS, DISA-STIG, PCI-DSS, and more, and come with a consistent 12-year security maintenance and support commitment.

Of course, the most attractive benefit of our open infrastructure solutions is the cost implication: our cloud infrastructure solutions carry an average TCO reduction of around 40%.


Options for smaller private clouds

While large enterprises can rely on their economies of scale to withstand price increases or solution changes, most smaller private cloud partners don’t have the mass recurring revenue to bear such major upsets. Luckily, there are some powerful and exceptionally cost-effective routes that these organizations can take to remain competitive and continue their services without extreme disruption.

Powerful, scalable, and manageable MicroClouds

For small cloud setups, MicroCloud is the ideal option. It’s perfect for running VMs or system containers on single-rack setups of 3 to 50 nodes, and is extremely lightweight and easy to use.

It deploys in just minutes through simple terminal commands, and produces highly cost-effective, highly available clusters that benefit from automated security updates and upgrades.

Try MicroCloud for free!

We recently helped a Canadian university, the Université de l’Ontario Français, build a reliable and easily-scalable server infrastructure using MicroCloud. The university is on a path to considerable growth, having grown from just 20 students in 2018 to over 300 students today, with plans to expand to around 2,000 more students in the next 4 to 5 years. MicroCloud gave them simplicity and ease-of-use, perfect for a limited IT team, full backwards compatibility with their existing infrastructure, and powerful scalability to grow for more classrooms, students, and campuses in the coming years. 

Wrapping up

In conclusion, organizations facing uncertainty in the virtualization market have multiple cost-effective and scalable open source options to choose from, including OpenStack for large private clouds and MicroCloud for smaller setups. Canonical’s infrastructure and cloud solutions offer easy deployment, flexibility, long term support, and an easy pathway to compliance – without hidden licence fees or unpredictable pricing changes. If your clients, or potential buyers, are being affected by shifts in the virtualization space, OpenStack and MicroCloud are powerful solutions. 

If you’d like to learn more about our wide range of cloudification and virtualization offerings, please get in contact with our team.

And if you’re interested in being a part of our mission to provide open source solutions to organizations looking for options, you can also apply to become a Canonical partner.

More resources

[White paper] Lift & shift or rebuild: choosing your cloud migration strategy

Canonical empowers Université de l’Ontario Français dream of rapid scaling through MicroCloud and enterprise support

OneUptime takes back its servers and saves $352,500 a year with Canonical infrastructure solutions

How to cloudify your data centre?

[Watch] How to migrate to Ubuntu-based infrastructure

19 May, 2025 09:11AM

Simon Quigley: Toolboxes and Hammers — Be You

Toolboxes and Hammers — Be You

Everyone has a story. We all started from somewhere, and we’re all going somewhere.

Ten years ago this summer, I first heard of Ubuntu. It took me time to learn how to properly pronounce the word, although I’m glad I learned that early on. I was less fortunate when it came to the pronunciation of the acronym for the Ubuntu Code of Conduct. I had spent time and time again breaking my computer, and I’d wanted to start fresh.

I’ve actually talked about this in an interview before, which you can find here (skip to 5:02–6:12 for my short explanation, I’m in orange):

https://medium.com/media/ad59becdbd06d230b875fb1512df1921/href

I’ve also done a few interviews over the years, here are some of the more recent ones:

https://medium.com/media/83bda448d5f2a979f848e17f04376aa6/href

Ask Noah Show 377

Lastly, I did a few talks at SCaLE 21x (which the Ubuntu community donation funds helped me attend, thank you for that!):

https://medium.com/media/0fbde7ef0ed83c2272a8653a5ea38b67/hrefhttps://medium.com/media/4d18f1770dc7eed6c7a9d711ff6a6e89/href

My story is fairly simple to summarize, if you don’t have the time to go through all the clips.

I started in the Ubuntu project at 13 years old, as a middle school student living in Green Bay, WI. I’m now 23 years old, still living in Green Bay, but I became an Ubuntu Core Developer, Lubuntu’s Release Manager, and worked up to a very great and comfortable spot.

So, Simon, what advice would you give to someone at 13 who wants to do the same thing? Here are a few tips…

* Don’t be afraid to be yourself. If you put on a mask, it hinders your growth, and you’ll end up paying for it later anyway.
* Find a mentor. Someone who is okay working with someone your age, and ideally someone who works well with people your age (quick shoutout to Aaron Prisk and Walter Lapchynski for always being awesome to me and other folks starting out at high school.) This is probably the most important part.
* Ask questions. Tons of them. Ask questions until you’re blue in the face. Ask questions until you get a headache so bad that your weekend needs to come early. Okay, maybe don’t go that far, but at the very least, always stay curious.
* Own up to your mistakes. Even the most experienced people you know have made tons of mistakes. It’s not about the mistake itself, it’s about how you handle it and grow as a person.

Now, after ten years, I’ve seen many people come and go in Ubuntu. I was around for the transition from upstart to systemd. I was around for the transition from Unity to GNOME. I watched Kubuntu as a flavor recover from the arguments only a few years before I first started, only to jump in and help years later when the project started to trend downwards again.

I have deep love, respect, and admiration for Ubuntu and its community. I also have deep love, respect, and admiration for Canonical as a company. It’s all valuable work. That being said, I need to recognize where my own limits are, and it’s not what you’d think. This isn’t some big burnout rant.

Some of you may have heard rumors about an argument between me and the Ubuntu Community Council. I refuse to go into the private details of that, but what I’ll tell you is this… in retrospect, it was in good faith. The entire thing, from both my end and theirs, was to try to either help me as a person, or the entire community. If you think any part of this was bad faith from either side, you’re fooling yourself. Plus, tons of great work and stories actually came out of this.

The Ubuntu Community Council really does care. And so does Mark Shuttleworth.

Now, I won’t go into many specifics. If you want specifics, I’d direct you to the Ubuntu Community Council who would be more than happy to answer any questions (actually… they’d probably stay silent. Nevermind.) That being said, I can’t really talk about any of this without mentioning how great Mark has become.

Remember, I was around for a few different major changes within the project. I’ve heard and seen stories about Mark that actually match what Reddit says about him. But in 2025, out of the bottom of my heart, I’m here to tell you that you’re all wrong now.

See, Mark didn’t just side with somebody and be done with it. He actually listened, and I could tell, he cares very very deeply. I really enjoyed reading ogra’s recent blog post, you should seriously check it out. Of course, I’m only 23 years old, but I have to say, my experiences with Mark match that too.

Now, as for what happens from here. I’m taking a year off from Ubuntu. I talked this over with a wide variety of people, and I think it’s the right decision. People who know me personally know that I’m not one to make a major decision like this without a very good reason to. Well, I’d like to share my reasons with you, because I think they’d help.

People who contribute time to open source find it to be very rewarding. Sometimes so rewarding, in fact, that no matter how many economics and finance books they read, they still haven’t figured out how to balance that with a job that pays money. I’m sure everyone deeply involved in this space has had the urge to quit their job at least once or twice to pursue their passions.

Here’s the other element too… I’ve had a handful of romantic relationships before, and they’ve never really panned out. I found the woman that I truly believe I’m going to marry. Is it going to be a rough road ahead of us? Absolutely, and to be totally honest, there is still a (small, at this point) chance it doesn’t work out.

That being said… I remain optimistic. I’m not taking a year off because I’m in some kind of trouble. I haven’t burned any bridge here except for one.

You know who you are. You need help. I’d be happy to reconnect with you once you realize that it’s not okay to do what you did. An apology letter is all I want. I don’t want Mutually Assured Destruction, I don’t want to sit and battle on this for years on end. Seriously dude, just back off. Please.

I hate having to take out the large hammer. But sometimes, you just have to do it. I’ve quite enjoyed Louis Rossmann’s (very not-safe-for-work) videos on BwE.

https://medium.com/media/ab64411c41e65317f271058f56bb2aba/href

I genuinely enjoy being nice to people. I want to see everyone be successful and happy, in that order (but with both being very important). I’m not perfect, I’m a 23-year-old who just happened to stumble into this space at the right time.

To this specific person only, I tell you, please, let me go take my year off in peace. I don’t wish you harm, and I won’t make anything public, including your name, if you just back off.

Whew. Okay. Time to be happy again.

Again, I want to see people succeed. That goes for anyone in Ubuntu, Lubuntu, Kubuntu, Canonical, you name it. I’m going to remain detached from Ubuntu for at least a year. If circumstances change, or if I feel the timing just isn’t right, I’ll wait longer. My point is, I’ll be back, the when of it will just never be public before it happens.

In the meantime, you’re welcome to reach out to me. It’ll take me some time to bootstrap things, more than I originally thought, but I’m hoping it’ll be quick. After all, I’ve had practice.

I’m also going to continue writing. About what? I don’t know yet.

But, I’ll just keep writing. I want to share all of the useful tips I’ve learned over the years. If you actually liked this post, or if you’ve enjoyed my work in the Ubuntu project, please do subscribe to my personal blog, which will be here on Medium (unless someone can give me an open source alternative with a funding model). This being said, while I’d absolutely take any donations people would like to provide, at the end of the day, I don’t do this for the money. I do this for the people just like me, out of love.

So you, just like me, can make your dreams happen.

Don’t give up, it’ll come. Just be patient with yourself.

As for me, I have business to attend to. What business is that, exactly? Read Walden, and you’ll find out.

I wish you all well, even the person I called out. I sincerely hope you find what you’re looking for in life. It takes time. Sometimes you have to listen to some music to pass the time, so I created a conceptual mixtape if you want to listen to some of the same music as me.

I’ll do another blog post soon, don’t worry.

Be well. Much, much more to come.

19 May, 2025 04:03AM

May 18, 2025

Faizul "Piju" 9M2PJU: How to Set Up Chrony as a Local NTP Server Using Docker

In a local network where you want to keep your devices synchronized with accurate time, running a lightweight and efficient NTP server is essential. Chrony, a modern alternative to ntpd, is a great choice and in this guide, I’ll show you how to set it up inside a Docker container that fetches time from global sources and distributes it across your LAN.


🚀 Why Chrony?

Chrony is:

  • More accurate than ntpd in many conditions (especially with intermittent connectivity)
  • Lightweight and easy to configure
  • Ideal for both clients and servers

🐳 What You’ll Set Up

  • A Docker container running Chrony
  • Configured to sync with global NTP servers
  • Act as a time server for your LAN
  • With optional logging and control access

🧱 Step 1: Create a Dockerfile for Chrony

Start by creating a simple Dockerfile to build a minimal Chrony container.

# Dockerfile
FROM debian:stable-slim

RUN apt-get update && \
    apt-get install -y chrony && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY chrony.conf /etc/chrony/chrony.conf

EXPOSE 123/udp

CMD ["chronyd", "-d", "-f", "/etc/chrony/chrony.conf"]

⚙ Step 2: Create the chrony.conf

Here’s a sample chrony.conf tailored for local server use and syncing with global time sources:

# chrony.conf

# Time sources (use pool.ntp.org or your regional servers)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst

# Allow all clients on your LAN (edit this according to your subnet)
allow 192.168.1.0/24

# Local stratum fallback if Internet is down
local stratum 10

# Drift file to track clock error over time
driftfile /var/lib/chrony/chrony.drift

# Log tracking data
log tracking measurements statistics

# Log files location
logdir /var/log/chrony

# Optional: control access
# Use cmdport 0 to disable remote control; use 323 if needed
cmdport 0

Replace 192.168.1.0/24 with your actual LAN subnet.


🧪 Step 3: Build and Run the Docker Container

docker build -t chrony-server .

Now run the container with:

docker run -d \
  --name chrony \
  --restart unless-stopped \
  --network host \
  --cap-add=NET_BIND_SERVICE \
  chrony-server

✅ Explanation:

  • --network host allows the container to bind directly to port 123/UDP
  • --cap-add=NET_BIND_SERVICE is required to bind to low-numbered ports like 123
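If you prefer a declarative setup, the same run options can be captured in a Compose file. This is a minimal sketch (the file and service names are my own choices, mirroring the docker run flags above):

```
# docker-compose.yml (hypothetical; mirrors the docker run flags above)
services:
  chrony:
    build: .
    container_name: chrony
    restart: unless-stopped
    network_mode: host        # bind directly to UDP 123 on the host
    cap_add:
      - NET_BIND_SERVICE      # allow binding to the privileged port
```

Start it with docker compose up -d from the directory containing the Dockerfile.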

🔎 Step 4: Test Your NTP Server

From a client machine on your LAN:

ntpdate -q <chrony-server-ip>

or, on a client already configured to use this server as a time source:

chronyc sources -v

You should see that the time is being served and synchronized.
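If neither ntpdate nor chronyc is available on a client, you can also sketch a raw SNTP query in Python. This is a minimal illustration of the protocol (the helper names are mine, and it is not a substitute for a real NTP client):

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_ntp_request() -> bytes:
    """Build a minimal 48-byte SNTP client request (LI=0, VN=3, Mode=3)."""
    packet = bytearray(48)
    packet[0] = (0 << 6) | (3 << 3) | 3  # 0x1b
    return bytes(packet)

def parse_transmit_time(response: bytes) -> float:
    """Extract the server transmit timestamp (bytes 40-47) as Unix time."""
    seconds, fraction = struct.unpack("!II", response[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

def query_ntp(host: str, timeout: float = 2.0) -> float:
    """Send one SNTP query to UDP port 123 and return the server's Unix time."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_ntp_request(), (host, 123))
        response, _ = sock.recvfrom(48)
    return parse_transmit_time(response)
```

Calling query_ntp("<chrony-server-ip>") should return a Unix timestamp close to the client's own clock if the server is healthy.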


📌 Optional: Run as a Local Time Authority

If you want to run fully offline, or ensure internal time continuity even without internet:

  1. Remove the server lines from chrony.conf
  2. Set: local stratum 8
  3. Start the server with a stable internal clock source

This makes your Chrony instance a local time authority for your network.
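Under those assumptions, the offline variant of chrony.conf would look something like this sketch (keep the allow line matching your actual subnet):

```
# chrony.conf for a fully offline local time authority
# (no external server lines; serve the host's own clock to the LAN)

allow 192.168.1.0/24

# Serve our own clock at stratum 8 when no sources are reachable
local stratum 8

driftfile /var/lib/chrony/chrony.drift
```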


🔐 Firewall Notes

Make sure UDP port 123 is allowed inbound from your LAN on your Docker host:

sudo ufw allow proto udp from 192.168.1.0/24 to any port 123

Or for iptables:

iptables -A INPUT -p udp -s 192.168.1.0/24 --dport 123 -j ACCEPT

📎 Conclusion

With this setup, you’ve created a portable, containerized NTP server using Chrony that:

  • Syncs with global servers
  • Serves accurate time to all local devices
  • Works even if your external internet connection drops

Perfect for homelabs, IoT networks, or offline environments.

The post How to Set Up Chrony as a Local NTP Server Using Docker appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

18 May, 2025 02:28PM

Kubuntu General News: Plasma 6.4 Beta1 available for testing


Are you using Kubuntu 25.04 Plucky Puffin, our current stable release? Or are you already running our development builds of the upcoming 25.10 (Questing Quokka)?

We currently have Plasma 6.3.90 (Plasma 6.4 Beta1) available in our Beta PPA for Kubuntu 25.04 and for the 25.10 development series.

However, this is a Beta release, and we should re-iterate the disclaimer:



DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.



6.4 Beta1 packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge (e.g. sudo ppa-purge ppa:kubuntu-ppa/beta) to remove the PPA and revert/downgrade packages.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on Matrix [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the planned feature list, release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 6.3?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop affects, suspend etc.
* Specific tests:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel Matrix channel if you need clarification on any of the steps to follow.

[1] – https://matrix.to/#/#kubuntu-devel:ubuntu.com
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

18 May, 2025 09:26AM

May 17, 2025

SparkyLinux

Sparky Package Tool

There is a new application available for Sparkers: Sparky Package Tool. What is Sparky Package Tool? Make sure IT IS NOT a package manager; IT IS a text-mode front-end ONLY. Installation: Usage, or in short: it lets you choose the package manager of your choice – APT, Aptitude, or Nala – then do what you need. Manual commands: – APT – sparky-package-tool-apt, or short…

Source

17 May, 2025 10:47AM by pavroo

May 16, 2025

Tails

Security audit of automatic upgrades and recent changes

In late 2024, Radically Open Security conducted another security audit of critical parts of Tails.

To better protect our users, we addressed the security vulnerabilities as soon as they were discovered and reported to us, without waiting for the audit to be complete and public.

We can now share with you the final report.

The auditors concluded that:

The Tails operating system leaves a strong security impression, addressing most anonymity-related concerns. We did not find any remote code execution vulnerabilities, and all identified issues required a compromised low-privileged amnesia user – the default user in Tails.

Looking back at the previous audit, we can see the Tails developers have made significant progress, demonstrating expertise and a serious commitment to security.

Findings

The auditors did not identify any vulnerability in:

  • The creation of the Persistent Storage with LUKS2, introduced in Tails 5.14 (June 2023)

  • Our security improvements to Thunderbird

  • The random seed feature, introduced in Tails 6.4 (June 2024)

The auditors found 4 issues in:

  • The automatic upgrade mechanism

  • Other important changes since Tails 5.8 (November 2023)

ID       Impact    Description                                     Issue   Status  Release
OTF-001  High      Local privilege escalation in Tails Upgrader    #20701  Fixed   6.11
OTF-002  High      Arbitrary code execution in Python scripts      #20702  Fixed   6.11
                                                                   #20744  Fixed   6.12
OTF-003  Moderate  Argument injection in privileged GNOME scripts  #20709  Fixed   6.11
                                                                   #20710  Fixed   6.11
OTF-004  Low       Untrusted search path in Tor Browser launcher   #20733  Fixed   6.12

Postmortem

Our team went further than simply fixing these issues. We conducted a postmortem to understand how we introduced these vulnerabilities in our releases and what we could do to avoid similar vulnerabilities in the future. This analysis led to technical, policy, and culture changes.

This analysis was useful and we'll definitely consider doing postmortems again after future audits. It might also be useful for other projects to understand how we worked on these long-lasting improvements.

Technical improvements

  • Postmortem of OTF-001

    While preparing a major Tails release based on a new version of Debian, for example, Tails 7.0, we will look for Perl code included in Tails that modifies @INC in a dangerous way. (#19627)

    Furthermore, we now automatically check for potentially vulnerable Mite code and fail the build if we find any.

  • Postmortem of OTF-002 (#20719 and !1911)

    Our CI now ensures that all our custom Python software runs in isolated mode.

  • Postmortem of OTF-003 (#20711 and !1979)

    Our sudo configuration is now generated from a higher-level description, which has safer defaults and demands explanations when diverging from them.

  • Postmortem of OTF-004 (#20817 and !2040)

    Our CI now ensures that we don't write software that does unsafe .desktop file lookup.

    We will also periodically audit the configuration of onion-grater, our firewall for the Tor control port. (#20821)
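As an illustration of the OTF-002 mitigation above: Python's isolated mode (python3 -I) excludes PYTHONPATH, the user's site-packages, and the script's own directory from sys.path, which closes that code-injection vector. A script can check for it at startup; this is a sketch of the idea, not Tails code:

```python
import sys

def assert_isolated() -> None:
    """Fail fast unless the interpreter was started in isolated mode (python3 -I).

    In isolated mode, sys.flags.isolated is non-zero and untrusted entries
    cannot be injected into sys.path via the environment.
    """
    if not sys.flags.isolated:
        raise RuntimeError("expected to be launched with 'python3 -I'")
```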

Policy and culture improvements

  • During the audit, we noticed that we lacked a policy about when we should make confidential security issues public.

    This was problematic because:

    • We have sometimes been too secretive.

      As a temporary measure, this protected our users by erring on the safe side. But, without a disclosure process, we were not meeting our own standards for transparency and openness to third-party reviews.

    • Different team members were working with different assumptions, which caused communication issues.

    To have better guidelines for confidentiality and disclosure, we created our security issue response policy, based on the policy of the Tor Project's Network Team.

  • We will be more intentional about when it's worth the effort and risk to do large code refactoring.

    While refactoring is necessary for a healthy software development process, this postmortem showed that large refactoring can also introduce security vulnerabilities.

  • When changing security-sensitive code, such as our sudo configuration or any code that elevates privileges, we now require an extra review focused on security.

  • We will communicate about security issues more broadly within our team when we discover them so that every team member can learn along the way.

16 May, 2025 05:39PM

Grml developers

Michael Prokop: Grml 2025.05 – codename Nudlaug

Debian hard freeze on 2025-05-15? We bring you a new Grml release on top of that! 2025.05 🚀 – codename Nudlaug.

There’s plenty of new stuff, check out our official release announcement for all the details. But I’d like to highlight one feature that I particularly like: SSH service announcement with Avahi. The grml-full flavor ships Avahi, and when you enable SSH, it automatically announces the SSH service on your local network. So when, for example, booting Grml with boot option `ssh=debian`, you should be able to log in on your Grml live system with `ssh grml@grml.local` and password ‘debian‘:

% insecssh grml@grml.local
Warning: Permanently added 'grml.local' (ED25519) to the list of known hosts.
grml@grml.local's password: 
Linux grml 6.12.27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.27-1 (2025-05-06) x86_64
Grml - Linux for geeks

grml@grml ~ %

Hint: grml-zshrc provides that useful shell alias `insecssh`, which is aliased to `ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"`. Using those options, you aren’t storing the SSH host key of the (temporary) Grml live system (permanently) in your UserKnownHostsFile.

BTW, you can run `avahi-browse -d local _ssh._tcp --resolve -t` to discover the SSH services on your local network. 🤓

Happy Grml-ing!

16 May, 2025 04:42PM

Michael Prokop: HTU Bigband Concert on 05.06.2025

Poster for the HTU Bigband concert on 05.06.2025

Like last year, we are once again playing a concert with the HTU Bigband at TU Graz.

It takes place on Thursday, 5 June 2025! The concert starts at 19:30; in good weather it will be held in the inner courtyard of TU Graz (Alte Technik, Rechbauerstraße 12, 8010 Graz), and in bad weather it moves to lecture hall 2 (Hörsaal 2) at the same address. We are more than 25 musicians and have an ambitious program: from swing through soul, funk, and Latin to pop, everything is included. There are over two hours of the finest music, the set list is superb, and admission is free.

I am really looking forward to it and would again be delighted to see familiar, cheerful faces. Hope to see and hear you there! :-)

16 May, 2025 04:00PM

Ubuntu developers

Oliver Grawert: Rooming with Mark

16 May, 2025 02:34PM

Volumio

Volumio announces the compatibility of its Hi-Fi equipment with Qobuz Connect

Qobuz, the French high-quality music streaming and download platform, has today launched Qobuz Connect, a feature that transforms the listening experience while offering simplified control and seamless continuity between devices.

Qobuz Connect enables users to stream music directly using Qobuz to control their compatible Hi-Fi devices without the need for third-party applications, while enjoying the full Qobuz experience, with incomparable sound quality.

Volumio is pleased to announce that its devices are now compatible with Qobuz Connect, offering seamless integration of this innovative technology for high-fidelity listening.

A simplified, enriched listening experience, true to the Qobuz DNA

With Qobuz Connect, users can seamlessly and intuitively:

  • Control their music from the Qobuz application: No more juggling between different remote controls or applications; select the Hi-Fi device and control everything directly from Qobuz.
  • Pick up where they left off: Seamlessly switch from one device to another without losing the thread of the music, from headphones to speakers, without interruption.
  • A single application to manage everything: Control the desktop application using a mobile device and vice versa for a seamless listening experience.
  • Access the complete Qobuz experience everywhere: No need to switch between apps; enjoy unique editorial in the magazine while listening to playlists curated by experts, all at the same time in the Qobuz app.
  • Enjoy unrivaled sound quality: listen to music in high resolution on all devices.

Volumio has joined this dynamic by integrating Qobuz Connect into its devices

“The integration of Qobuz Connect in Volumio is a great addition that allows our users to enjoy Qobuz from its native interface while benefitting from the sound quality that Volumio can deliver natively. It was a pleasure to be involved in the launch of Qobuz Connect since day 1, and we’re happy that we delivered a much-requested feature to our users right at launch. Congratulations to the whole team at Qobuz not only for this great milestone, but also for how well they cooperated with partners like us and listened to the needs of their loyal user base,” says Michelangelo, Volumio CEO.

“Qobuz Connect is a feature that has been eagerly awaited by our subscribers. We are proud to offer the most complete Connect solution on the market today, combining high resolution, app synchronization, and remote control. Our priority has been to develop a simple, intuitive interface to facilitate the experience of all our listeners,” says Axel Destagnol, Chief Product Officer at Qobuz.

Qobuz Connect makes it easy to enjoy the Qobuz experience on all compatible devices. Simply launch the application, select a device, press play, and enjoy your music in high resolution.

Qobuz Connect is now available on iOS, Android, Windows, and macOS applications. This feature has been developed in collaboration with StreamUnlimited Engineering, a technology partner to many premium audio brands, facilitating integration into existing and new products.

The post Volumio announces the compatibility of its Hi-Fi equipment with Qobuz Connect appeared first on Volumio.

16 May, 2025 01:47PM by Alia Elsaady

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: April 2025

Thanks for tuning in! Like in March, we have a number of updates for April. We've been working up the stack to finalize the remaining packages that make up the PureOS Crimson release.

While that is ongoing, we've also made more fixes and reliability improvements to the Linux kernel used in the Librem 5. These changes benefit both PureOS Crimson and existing installations on PureOS Byzantium.

The post PureOS Crimson Development Report: April 2025 appeared first on Purism.

16 May, 2025 01:17PM by Purism

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.2.2-5 Released

This release of Clonezilla live (3.2.2-5) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.2.1-28

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2025/May/12).
  • Linux kernel was updated to 6.12.27-1.
  • Package ezio was updated to 2.0.16.
  • Added 3 options in drbl-ocs.conf for running ezio:
    ezio_seed_max_connect="6"  # Max total number of connections.
    ezio_seed_max_upload="5"   # Max number of upload connections.
    ezio_upload_timeout="90"   # The interval (in seconds) to keep uploading and waiting for other peers.
    These 3 parameters can also be assigned as boot parameters, so the ezio seeder and leecher can be controlled.
  • ocs-live-swap-kernel: firmware directory will be included.

BUG FIXES

16 May, 2025 07:53AM by Steven Shiau

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E348 Não-Episódio

There’s no episode today… but there is a soap opera.

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator, and comic-book author. You can get to know Shizamura better on Ciberlândia and on her website.

16 May, 2025 12:00AM

May 15, 2025

hackergotchi for Purism PureOS

Purism PureOS

Apple Moves iPhone Production to India—Purism Has Been Leading the Way for Years

Apple is accelerating its shift of iPhone production from China to India, aiming for a complete transition by 2026. Driven by rising U.S.-China trade tensions and tariff pressures, Apple has already begun trial production of the iPhone 17 series in India. Reports suggest the company plans to produce over 60 million iPhones annually in India […]

The post Apple Moves iPhone Production to India—Purism Has Been Leading the Way for Years appeared first on Purism.

15 May, 2025 05:10PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Building an end-to-end Retrieval-Augmented Generation (RAG) workflow

Introduction

One of the most critical gaps in traditional Large Language Models (LLMs) is that they rely on the static knowledge already contained within them. They may be very good at understanding and responding to prompts, but they often fall short in providing current or highly specific information. This is where RAG comes in: it addresses these gaps by incorporating current and new information that serves as a reliable source of truth for these models.

In our previous blog on understanding and deploying RAG, we walked you through the basics of what this technique is and how it enhances generative AI models by utilizing external knowledge sources such as documents and extensive databases. These external knowledge bases enhance machine learning models for enterprise applications by providing verifiable, up-to-date information that reduces errors, simplifies implementation, and lowers the cost of continuous retraining.

Building a robust generative AI infrastructure, such as those for RAG, can be complex and challenging. It requires careful consideration of the technology stack, data, scalability, ethics, and security. For the technology stack, the hardware, operating systems, cloud services, and generative AI services must be resilient and efficient based on the scale that enterprises require.

There are several open source software options available for building generative AI infrastructure and complex AI projects that accelerate development, avoid vendor lock-in, reduce costs, and satisfy enterprise needs.

Objective

In this guide, we will take you through setting up a RAG pipeline. We will utilize open source tools such as Charmed OpenSearch for efficient search retrieval and KServe for machine learning inference, specifically in Azure and Ubuntu environments, while leveraging the underlying silicon.

This guide is intended for data enthusiasts, engineers, scientists, and machine learning professionals who want to start building RAG solutions on public cloud platforms, such as Azure, using enterprise open source tools that are not native to Azure’s microservices offering. It can be used for various projects, including proofs of concept, development, and production.

Please note that multiple open source tools not highlighted in this guide can be used in place of the ones we outline. In cases where you do use different tools, you should adjust the hardware specifications—such as storage, computing power, and configuration—to meet the specific requirements of your use case.

RAG workflow

When building a generative AI project, such as a RAG system and advanced generative AI reference architectures, it is crucial to include multiple components and services. These components typically encompass databases, knowledge bases, retrieval systems, vector databases, model embeddings, large language models (LLMs), inference engines, prompt processing, and guardrail and fine-tuning services, among others.

RAG allows users to choose the most suitable RAG services and applications for their specific use cases. The reference workflow outlined below mainly utilizes two open source tools: Charmed OpenSearch and KServe. In the RAG workflow depicted below, fine-tuning is not mandatory; however, it can enhance the performance of LLMs as the project scales.

Figure 1: RAG workflow diagram using open source tools

The table below describes all the RAG services highlighted in the workflow diagram above and maps the open source solutions that are used in this guide.

  • Advanced parsing: Text splitters are advanced parsing techniques applied to documents entering the RAG system, producing cleaner, more focused, and more informative input. (Open source solution: Charmed Kubeflow text splitters)
  • Ingest/data processing: The data pipeline layer, responsible for data extraction, cleansing, and removal of unnecessary data. (Charmed OpenSearch for document processing)
  • Embedding model: A machine learning model that converts raw data into vector representations. (Charmed OpenSearch sentence transformer)
  • Retrieval and ranking: Retrieves data from the knowledge base and ranks the fetched information by relevance score. (Charmed OpenSearch with FAISS, Facebook AI Similarity Search)
  • Vector database: Stores vector embeddings so data can be searched efficiently by the retrieval and ranking services. (Charmed OpenSearch KNN index as a vector database)
  • Prompt processing: Formats queries and retrieved text into a structured, readable form for the LLM. (Charmed OpenSearch ML agent predict)
  • LLM: Provides the final response, using one of several GenAI models. (GPT, Llama, DeepSeek)
  • LLM inference: Operationalizes machine learning in production by running new data through a model so that it produces output. (Charmed Kubeflow with KServe)
  • Guardrail: Ensures ethical content in the GenAI response by filtering inputs and outputs. (Charmed OpenSearch guardrail validation model)
  • LLM fine-tuning: Takes a pre-trained machine learning model and trains it further on a smaller, targeted data set. (Charmed Kubeflow)
  • Model repository: Stores and versions trained machine learning models, especially during fine-tuning; the registry can track a model’s lifecycle from deployment to retirement. (Charmed Kubeflow, Charmed MLflow)
  • Framework for building LLM applications: Simplifies LLM workflows, prompts, and services so that building LLM applications is easier. (LangChain)


This table provides an overview of the key components involved in building a RAG system and advanced GenAI reference solution, along with associated open source solutions for each service. Each service performs a specific task that can enhance your LLM setup, whether it relates to data management and preparation, embedding a model in your database, or improving the LLM itself.
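To make the retrieve-then-generate flow these services combine into concrete, here is a minimal, self-contained sketch in plain Python. It is purely illustrative: the bag-of-words "embeddings" and the tiny in-memory knowledge base stand in for a real embedding model and vector database such as Charmed OpenSearch.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words vector (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Retrieval and ranking: sort documents by similarity to the query, keep top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Prompt processing: format retrieved context and the question for the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base; in the real workflow this lives in a vector database.
knowledge_base = [
    "Charmed OpenSearch supports KNN indexes for vector search.",
    "KServe serves machine learning models on Kubernetes.",
    "Juju is an open source orchestration engine for software operators.",
]
print(build_prompt("What serves models on Kubernetes?", knowledge_base))
```

The resulting prompt would then be sent to the LLM inference service; a production system replaces each stub with the corresponding service from the table above.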

The deployment guide below will cover most of the services except the following: model repository, LLM fine-tuning, and text splitters. 

The rate of innovation in this field, particularly within the open source community, has become exponential. It is crucial to stay updated with the latest developments, including new models and emerging RAG solutions.

RAG component: Charmed OpenSearch

Charmed OpenSearch will be the main tool used in this RAG workflow deployment. Charmed OpenSearch is an operator that builds on the OpenSearch upstream by integrating automation to streamline the deployment, management, and orchestration of production clusters. The operator enhances efficiency, consistency, and security. Its rich feature set includes high availability, seamless scaling for deployments of all sizes, HTTP and data-in-transit encryption, multi-cloud support, safe upgrades without downtime, role and plugin management, and data visualization through Charmed OpenSearch Dashboards.

With the Charmed OpenSearch operator (also known as a charm), you can deploy and run OpenSearch on physical and virtual machines (VMs) and other cloud and cloud-like environments, including AWS, Azure, Google Cloud, OpenStack, and VMware. The deployment guide in the next section uses Azure VM instances:

Figure 2: Charmed OpenSearch architecture

Charmed OpenSearch uses Juju. Juju is an open source orchestration engine for software operators that enables the deployment, integration, and lifecycle management of applications at any scale on any infrastructure. In the deployment process, the Juju controller manages the flow of data and interactions within multiple OpenSearch deployments, including mediating between different parts of the system.

Charmed OpenSearch deployment and use is straightforward. If you’d like to learn how to use and deploy it in a range of cloud environments, you can read more in our in-depth Charmed OpenSearch documentation. 
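When OpenSearch acts as the vector database, retrieval is expressed with the k-NN query DSL of the OpenSearch k-NN plugin. The sketch below only assembles the request body; the index layout, field name, and vector values are hypothetical, and no cluster is contacted.

```python
import json

def knn_search_body(vector, k=3, field="embedding"):
    """Build an OpenSearch k-NN plugin query body that asks for the k
    documents whose `field` (a knn_vector mapping) is nearest to `vector`."""
    return {
        "size": k,
        "query": {
            "knn": {
                field: {
                    "vector": vector,
                    "k": k,
                }
            }
        },
    }

# A client would POST this as JSON to <cluster>/<index>/_search.
body = knn_search_body([0.1, 0.2, 0.7], k=2)
print(json.dumps(body, indent=2))
```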

RAG component: KServe

KServe is a cloud-native solution within the Kubeflow ecosystem that serves machine learning models. By leveraging Kubernetes, KServe operates effectively in cloud-native environments. It can be used for various purposes, including model deployment, machine learning model versioning, LLM inference, and model monitoring.

In the RAG use case discussed in this guide, we will use KServe to perform LLM inference. Specifically, it will serve an already-trained LLM to make predictions based on new data. This underlines the need for a robust LLM inference system that works with both local and public LLMs. The system should be scalable, capable of handling high concurrency, provide low-latency responses, and deliver accurate answers to the questions posed to the LLM.
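As an illustration of what calling such an inference service looks like, KServe's v1 inference protocol accepts a POST to `/v1/models/<name>:predict` with a JSON body of the form `{"instances": [...]}`. This sketch only assembles the URL and payload; the host, model name, and instance shape are hypothetical, and the actual HTTP call is left to the reader.

```python
import json

def v1_predict_request(host, model_name, instances):
    """Assemble a KServe v1-protocol prediction request (URL + JSON body).
    The caller is expected to POST `payload` to `url` with an HTTP client."""
    url = f"http://{host}/v1/models/{model_name}:predict"
    payload = json.dumps({"instances": instances})
    return url, payload

# Hypothetical service host and model name:
url, payload = v1_predict_request("llm.example.com", "flan-t5", [{"prompt": "Hello"}])
print(url)
print(payload)
```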

In the deployment guide, we’ll take you through a comprehensive and hands-on guide to building and deploying a RAG service using Charmed OpenSearch and KServe. Charmed Kubeflow by Canonical natively supports KServe.

Deployment guide to building an end-to-end RAG workflow with Charmed OpenSearch and KServe

Our deployment guide for building an end-to-end RAG workflow with Charmed OpenSearch and KServe covers everything you need to make your own RAG workflow, including: 

  • Prerequisites
  • Install Juju and configure Azure credentials
  • Bootstrap Juju controller and create Juju model for Charmed OpenSearch
  • Deploy Charmed OpenSearch and set up the RAG service
  • Ask and start conversational flow with your RAG
Access this guide here

Canonical for your RAG requirements

Canonical provides Data and AI workshops, enterprise open source tools, and services, and can advise on securing your code, data, and models in production.

Build the right RAG architecture and application with the Canonical RAG workshop

Canonical offers a 5-day workshop designed to help you start building your enterprise RAG systems. By the end of the workshop, you will have a thorough understanding of RAG and LLM theory, architecture, and best practices. Together, we will develop and deploy solutions tailored to your specific needs. Download the datasheet here.

Learn and use best-in-class RAG tooling on any hardware and cloud

Unlock the benefits of RAG with open source tools designed for your entire data and machine learning lifecycle. Run RAG-enabled LLM on any hardware and cloud platform, whether in production or at scale.

Canonical offers enterprise-ready AI infrastructure along with open source data and AI tools to help you kickstart your RAG projects.

Secure your AI stack with confidence

Enhance the security of your GenAI projects while mastering best practices for managing your software stack. Discover ways to safeguard your code, data, and machine learning models in production with Confidential AI.

15 May, 2025 02:31PM

Ubuntu Blog: vBRAS NFVI reference architecture with Huawei OceanStor and Canonical OpenStack

A broadband remote access server (BRAS) is an access gateway oriented to broadband network applications. It bridges broadband access and backbone networks, providing basic access methods and management functions for broadband access networks.

Traditionally, BRAS has suffered from challenges, including low resource utilization, complex management and maintenance, and slow service provisioning. Virtual broadband remote access server (vBRAS) offers a way to address these challenges, accelerating service rollout, improving resource utilization, and simplifying operations and maintenance (O&M).

Huawei and Canonical have worked together to design and verify a private cloud architecture for vBRAS based on Huawei OceanStor storage and Canonical Charmed OpenStack. The solution gives telcos a reliable, performant, and cost-effective way to implement network functions virtualization infrastructure (NFVI) for vBRAS.

BRAS challenges and vBRAS

Traditional BRASs are deployed in distributed mode and fully meshed with peripheral systems. As the number of home broadband users surges and emerging services such as 4K high definition (HD) and Internet of Things (IoT) develop rapidly, the traditional distributed BRAS deployment solution faces the following significant challenges:

  • Low resource utilization
  • Complex management and maintenance
  • Slow service provisioning

vBRAS adopts an architecture based on virtualization technologies. It virtualizes traditional hardware BRAS functions and implements software-based network functions, improving network flexibility and scalability.

vBRAS solutions featuring the cloud-based control and user plane separation (CUPS) architecture are implemented based on the centralized management and control capabilities brought by software defined networking (SDN) and the device cloudification capabilities brought by network functions virtualization (NFV). NFV is a network architecture in which traditional network devices are virtualized into software modules running on universal hardware. The vBRAS solution focuses on unified management of vBRAS-vUPs and vBRAS-pUPs, and uses the CUPS vBRAS architecture mentioned earlier. The CUPS architecture is the most cost-effective solution for evolving toward the all-cloud era.

NFVI reference architecture for vBRAS

To achieve reliability, performance, and cost efficiency for vBRAS, Canonical and Huawei field engineering teams have designed and verified a private cloud architecture for vBRAS. It is based on Huawei OceanStor storage and the open source Canonical OpenStack. The main components and key features include:

  • Fully disaggregated architecture with dedicated controller nodes, and compute nodes for management and orchestration (MANO) and VNF workloads.
  • Dedicated storage from OceanStor, with all virtual machines backed by OceanStor. A small Ceph storage cluster is available for Glance images only.
  • Instance HA from OpenStack Masakari, with two separate OceanStor storage systems providing additional reliability. The cinder-huawei charm was enhanced to support multiple backends during this verification.
  • Separated networks: pure OVN (Open Virtual Network) for general workloads (MANO), and high-performance OVN+DPDK (Data Plane Development Kit, for network acceleration), CPU pinning, and huge pages for VNF workloads.
  • Infrastructure management and automation from Canonical MAAS and Juju, Canonical Observability Stack (COS), and Landscape and Kernel Livepatch from Ubuntu Pro.
  • BRAS VNFs orchestration provided by Huawei’s MANO.

The latest versions of MANO and vBRAS are tested and verified with a 10-year-lifecycle version of Ubuntu and OpenStack (Ubuntu 20.04 Server LTS and Ussuri Charmed OpenStack).

Huawei OceanStor

Huawei OceanStor storage unlocks new levels of intelligence and power. It offers converged and flexible storage solutions that boast the power and reliability needed to meet green, sustainable, and future-facing development goals.

Huawei OceanStor Dorado systems are the next-gen all-flash storage systems. They are designed to meet the major concerns of high availability, utilization, and usability for medium and large enterprises, offering huge storage capacity and quick data access. OceanStor Dorado 5000 and 6000 excel in database, virtualization, and big data analytics scenarios, making them well-suited to industries such as carrier, finance, government, and manufacturing.

Canonical OpenStack

Ubuntu Server powers 48% of OpenStack clouds globally, and leading companies across industries – including telco, finance, hardware manufacturing, retail, automotive, and healthcare – choose Canonical OpenStack as the platform for their private cloud implementations. Canonical OpenStack is an enterprise cloud platform engineered for price-performance, making it the ideal choice for telecommunications companies that demand the highest levels of infrastructure stability, security, and resilience:

  • Designed to be economical in every way: we use an optimal architecture, including server types and their components, to run more VMs. In our case studies, this approach has reduced costs by as much as 80% for 40 nodes. We use open source software to avoid high licence costs, and automation to cut spending on operations.
  • Total bottom-up automation: enjoy fully automated OpenStack deployment and operations thanks to Canonical’s tooling and MANO.
  • Reliability and high availability for vBRAS: our reference architecture includes redundant hardware components to eliminate single points of failure. All controller services are deployed as clusters and designed for fault tolerance. Instance high availability from OpenStack Masakari and running vBRAS across 2 regions provide 99.9% availability.

Learn more about Canonical’s telco solutions.

Get in touch about open source infrastructure, applications, development and operations.

15 May, 2025 01:45PM

hackergotchi for GreenboneOS

GreenboneOS

CVE-2025-31324: An Actively Exploited Flaw Affecting SAP NetWeaver Visual Composer

CVE-2025-31324 (CVSS 9.8), published on April 24th 2025, allows unauthenticated attackers to upload executable files [CWE-434] via the NetWeaver Visual Composer component which can result in Remote Code Execution (RCE). The CVE presents a high degree of risk; many publicly available proof-of-concept (PoC) exploits [1][2][3][4][5] are available, and active attack campaigns have been alerted by […]

15 May, 2025 09:02AM by Joseph Lee

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2025.05 available

We are proud to announce our new stable release 🚢 version 2025.05, code-named ‘Nudlaug’!

This Grml release brings you fresh software packages from Debian trixie, enhanced hardware support and addresses known bugs from previous releases.

Like in the previous release 2024.12, Live ISOs 📀 are provided for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️

15 May, 2025 08:00AM

hackergotchi for Qubes

Qubes

XSAs released on 2025-05-12

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • (none)

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

15 May, 2025 12:00AM

QSB-107: Multiple CPU branch prediction vulnerabilities

We have published Qubes Security Bulletin (QSB) 107: Multiple CPU branch prediction vulnerabilities. The text of this QSB and its accompanying cryptographic signatures are reproduced below, followed by a general explanation of this announcement and authentication instructions.

Qubes Security Bulletin 107


             ---===[ Qubes Security Bulletin 107 ]===---

                              2025-05-15

            Multiple CPU branch prediction vulnerabilities

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

Multiple organizations have recently reported vulnerabilities that
affect CPU branch prediction behavior. Common among these
vulnerabilities is their ability to allow an attacker to manipulate the
branch prediction of the CPU to mount a speculative execution attack
(like the original Spectre v2 attack) even with existing mitigations
enabled.

On 2025-05-12, researchers from the Systems and Network Security Group
at Vrije Universiteit Amsterdam (VUSec) published "Training Solo," [3]
which includes the following:

 - Indirect Target Selection (ITS; CVE-2024-28956, XSA-469 [5], part of
   INTEL-SA-01153 [7])

 - Lion Cove BPU issue (CVE-2025-24495, part of INTEL-SA-01322 [8])

 - IP-based attacks

 - History-based attacks

On 2025-05-13, researchers from the computer security group at ETH
Zürich (COMSEC) published "Branch Privilege Injection: Exploiting Branch
Predictor Race Conditions" [4] (BPRC; CVE-2024-45332, XSN-3 [6], part of
INTEL-SA-01247 [9])

In addition to the coordinated disclosures above, Intel also reported
internally discovering the following:

 - CVE-2025-20623 (part of INTEL-SA-01247 [9])

 - CVE-2024-43420 (part of INTEL-SA-01247 [9])

See the linked publications for further details.

Impact
-------

History-based attacks are believed not to affect Xen. [5]

For all other vulnerabilities mentioned, on affected systems, an
attacker who manages to compromise a qube may be able to use it to infer
the contents of arbitrary system memory, including memory assigned to
other qubes.

Affected systems
-----------------

Only systems with Intel CPUs are believed to be affected. According to
Intel's descriptions:

 - ITS affects Intel CPUs from Whiskey Lake (8th Generation Intel Core)
   to Tiger Lake (11th Generation Intel Core). [7]

 - The Lion Cove BPU issue affects Intel Lunar Lake and Arrow Lake (Core
   Ultra Series 2) models. [8]

 - BPRC affects most Intel CPUs since 8th Generation Intel Core (but see
   the caveat below). [9]

 - CVE-2025-20623 affects some 10th Generation Intel Core CPUs. [9]

 - CVE-2024-43420 affects some Intel Atom CPUs. [9]

See the linked Intel security advisories for more complete and detailed
lists of affected CPU models.

Note: The information above is based on Intel's security advisories. In
general, Intel assesses whether a vulnerability affects a given CPU
model only if that model still receives microcode updates. Therefore, if
a given CPU model no longer receives microcode updates, one should not
infer that a vulnerability does not affect that model merely because
Intel does not report it as affected. In particular, COMSEC observed
that BPRC affects CPUs as far back as 7th Generation Intel Core. [4]

To determine whether your CPU still receives microcode updates, see
"Changes in Customer Support and Servicing Updates for Select Intel
Processors," [10] or check your model's official Intel product page for
an "End of Servicing Updates" (ESU) date. Note that only models that
have reached or will soon reach their ESU date have an ESU date listed.
Newer models that are still fully supported typically have no ESU date
listed.

Patching
---------

The following packages contain security updates that, when applied to
systems with Intel CPUs that still receive microcode updates, will
address the vulnerabilities described in this bulletin:

  For Qubes 4.2, in dom0:
  - Xen packages, version 4.17.5-7
  - microcode_ctl version 2.1.20250512

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages should be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new Xen
binaries.

Credits
--------

See the original Xen Security Advisory.

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://vusec.net/projects/training-solo
[4] https://comsec.ethz.ch/research/microarch/branch-privilege-injection/
[5] https://xenbits.xen.org/xsa/advisory-469.html
[6] https://lists.xenproject.org/archives/html/xen-devel/2025-05/msg00632.html
[7] https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01153.html
[8] https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01322.html
[9] https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01247.html
[10] https://www.intel.com/content/www/us/en/support/articles/000022396/processors.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: qsb-107-2025.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmglMXYACgkQ1lWk8hgw
4GqHgQ//QA0yub2WEdafYjyeLq2XZrI5K7afWhSCLsaDAIQxGJA9WHsN6ohEbAaV
EDmf7tsbj9GyChIrrYbl5QhAxD7dMDSYAKEXPJ9DtGaEtFf/vNlBG/EChg5RnOCm
l7nzSBB/tqeJkI+WxoW+sArDGAVIppggZ9ja/D75sTwAJMvlR1Saa9zGG3y4uuFk
ENqdmiF9xcKSeQKtthBEGr3CIa8VPuUMqVoBUE+oL3CycCUy4wz5NOOrX4qB6RFR
t41cRQGSsebj7iNNfgbO8qE24XLvyHjJg7wh26OBNt9zVTphH8d3X9MQmXks3AKS
2YQctTC+HaTZyc22qRQNNQ9ry2mSL9hhdHSZvXcnvFZ6vyonQ+I841HiByeaM/Vt
UWpguyUxKtxj8H6ES77kzIBVUT0kI5k6AobmklG4g6WBpAFWDfW5E+dz2MAr+Esy
Xz0DdUQYjx30o8fX1ex3cksaVtH4MbroaoYLX9l7XG91Z4xCRS8XaIEB87aqL3aj
vAJLP+X6XrUU82Cwky/33CC/U4+Wbn0IyNEQW/KDkigKb0wNoygxCnWclvlSmRiH
4NgBRvca7evnvvf6OwE0d3wdL2Dv9Ion+QWyAGbCrOQpMC/hGRtwUT6hRFlo+vyv
8ZRcZKIWpbs8w0sIXqJbdv7qPiPnekIRBECaXZWhVMlPER3Y9OY=
=wbzX
-----END PGP SIGNATURE-----

Source: qsb-107-2025.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmglK+EACgkQSsGN4REu
FJBYcQ/+Nmvky9JNQZrzo8aJwkJu53z1Pp8T0msS97pfnom+8cQgXkdnxpTBCrjD
66sF+VWp7O5z3zEPdy6uFg0u/i86CO0Hx4+2A5occjhjiNk46QbOmx4k+YUNmoy9
qK8A0/246TpaVU57xhD91lIeCYuMVNor0Zn1RrCYrz2suzXYVl0AW9wrz2NI55K/
hjmdJl2PBD9I5Yh7vU8NQLV0S6EmnucZm9n2Aaonw+j7BRdb367W12gIzOU2FUJO
mvU5L9lx+jDavxDRoSfb7MY4tyJ+fczh89m6F6SUPPESznI/ewuXYkn55uoyDwsw
fA+o/720IK3SF0fudbIlc4iJ/xkQrAk1iFvtyv0KJB5lNWfQgyF7jbnQfqQHD2KP
Tc9sUMKs0smPR6Pqm1FRQdCfI67J7fcm04wEFptte/4nPCNqTISB9c+3XfiLCTvt
Gw/n1BeMUrqsA6+lNhzLlhUBsFmLdeJ3O7VzAxvaEgsJLGkP3n8h5UnwtJVKsnbW
I8BE4hRMNaDaWZsr64jK0+UIzvFxoGUf+N5y7ISgck1+e3QM9OsYm5rxEmX+dRV4
0Tc9P6yWCoR80uMZKNYbnoXVE+NstiQ1g9d1wh36Kk7IR/fHw69Wa+Vm9ZTLxIhJ
S3dYheReCOOk5kMQrQO/pwXqLKzlwkNnRFLV/sxFaSoc5h0N2vg=
=smTO
-----END PGP SIGNATURE-----

Source: qsb-107-2025.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

Why should I care about QSBs?

QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.

What are the PGP signatures that accompany QSBs?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.

Why should I care whether a QSB is authentic?

A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a QSB?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
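When comparing fingerprints obtained out-of-band, note that published fingerprints are usually grouped with spaces while gpg's compact form has none. A minimal sketch of that comparison (a hypothetical helper for illustration, not part of GnuPG):

```python
def fingerprints_match(obtained: str, imported: str) -> bool:
    # Strip grouping whitespace and ignore case before comparing, so a
    # fingerprint copied from a website matches gpg's compact form.
    def normalize(s: str) -> str:
        return "".join(s.split()).upper()
    return normalize(obtained) == normalize(imported)

# The fingerprint printed by `gpg> fpr` above vs. its compact hex form:
print(fingerprints_match(
    "427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494",
    "427F11FD0FAA4B080123F01CDDFA1A3E36879494"))  # prints True
```

Remember that this check is only as strong as the sources you obtained the fingerprint from; it does not replace the out-of-band authentication described above.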

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

    The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (QSB-107), the commands are:

$ gpg --verify qsb-107-2025.txt.sig.marmarek qsb-107-2025.txt
$ gpg --verify qsb-107-2025.txt.sig.simon qsb-107-2025.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-107 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

15 May, 2025 12:00AM

May 14, 2025

hackergotchi for Whonix

Whonix

Whonix 17.3.9.9 - Point Release!

Download

(What is a point release?)


Upgrade

Already using Whonix? No need to reinstall — perform an in-place upgrade using the Whonix repository.


This milestone was made possible thanks to the incredible support from our community. Thank you!


Please Donate!


Get Involved — Please Contribute!


Major Changes

  • Install user-sysmaint-split by default on Whonix-Workstation Xfce. It is strongly recommended to review the feature’s documentation.

Changelog

  • whonix-welcome-page:
    • Add startpage and more button, remove duckduckgo (Thanks to Hans!)
    • Implementation of transparency tooltip for search engines (Thanks to Hans!)
  • sysmaint-panel
    • Fully apply wording changes, fix background color on Whonix VMs (Thanks to @ArrayBolt3!)
  • user-sysmaint-split
    • Fix systemcheck under Whonix by enabling onion-grater in sysmaint sessions (Thanks to @ArrayBolt3!)

Other changes are the same as in 17.3.9.2.


Full difference of all changes

https://github.com/Whonix/derivative-maker/compare/17.2.8.5-developers-only…17.3.9.9-developers-only

1 post - 1 participant

Read full topic

14 May, 2025 06:57PM by Patrick

hackergotchi for Grml developers

Grml developers

grml development blog: New signing key for deb.grml.org repositories

Starting today, our package repositories on https://deb.grml.org/ are signed with a new key.

With input and feedback from the community, especially from anarcat and Guillem Jover, we have chosen ECC keys, specifically ed25519.

There are known compatibility issues with EOL Debian versions, like buster. If you are using such an old version, please either upgrade, or remove the Grml repositories from your setup.

How to get the new key?

The grml-keyring package provided in the repositories contains the new key since March 2025. If you updated the package on your system since then, you are all set. Alternatively you can also get the grml-keyring package from Debian testing.

14 May, 2025 10:30AM

Evgeni Golov: running modified containers with podman

Everybody (who runs containers) knows this situation: you've been running happycontainer:stable for a while and it's been great but now something external changed and you need to adjust the code while there is still no release with the patch.

I've encountered exactly this when our Home-Assistant stopped showing the presence of our cat correctly, but we've also been discussing this at work recently.

Now the most obvious (to me?) solution would be to build a new container, based on the original one, and perform the modifications at build time. Something like this:

FROM happycontainer:stable
RUN curl … | patch -p1

But that's not interactive, and if you don't have a patch readily available, that's not what you want. (And I'll save you the idea of RUNing sed and friends to alter files!)

You could run vim inside the container, but that requires vim to be installed there in the first place. And a reasonable configuration. And…

Well, turns out podman can mount the root fs of a running container.

[root@sai ~]# podman mount homeassistant
/var/lib/containers/storage/overlay/f3ac502d97b5681989dff

And if you're running as non-root, you'll get an error:

[container@sai ~]$ podman mount homeassistant
Error: cannot run command "podman mount" in rootless mode, must execute `podman unshare` first

Luckily, the solution is in the error message: use podman unshare.

[container@sai ~]$ podman unshare
[root@sai ~]# podman mount homeassistant
/home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged

So in both cases (root and rootless) we get a path, which is the mounted root fs and we can edit things in there as we like.
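As a side note, podman unshare can also wrap a single command (its documented `podman unshare [COMMAND]` form), so the root/rootless distinction can be captured in a tiny helper. A hypothetical sketch, not from the original post:

```python
def mount_command(container: str, rootless: bool) -> list:
    # Rootless podman cannot mount directly; wrap the call in
    # `podman unshare`, which runs a command inside the user namespace.
    cmd = ["podman", "mount", container]
    return ["podman", "unshare"] + cmd if rootless else cmd

print(mount_command("homeassistant", rootless=True))
# prints ['podman', 'unshare', 'podman', 'mount', 'homeassistant']
```

Note that when using the one-shot form, the printed mount path is only valid inside that namespace, which is why the interactive `podman unshare` shell is used in the walkthrough above.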

[root@sai ~]# vi /home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py

Once done, the container can be unmounted again, and the namespace left:

[root@sai ~]# podman umount homeassistant
homeassistant
[root@sai ~]# exit
[container@sai ~]$

At this point we have modified the code inside the container, but the running process is still using the old code. If we restart the container now to restart the process, our changes will be lost.

Instead, we can commit the changes as a new layer and tag the result.

[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable

And now, when we restart the container, it will use the new code with our changes 🎉

[container@sai ~]$ systemctl --user restart homeassistant

Is this the best workflow you can get? Probably not. Does it work? Hell yeah!

14 May, 2025 08:54AM

hackergotchi for ZEVENET

ZEVENET

Enterprise Edition 10.0.10 Released

The latest update of SKUDONET Enterprise Edition is now available — bringing powerful new features, improved user management capabilities, and important bug fixes to further streamline operations and enhance system control.

Highlight: New DNS Integration with Infomaniak for Let’s Encrypt

One of our customers requested support for Infomaniak as a DNS provider when issuing Let’s Encrypt SSL certificates — and it’s now included. This means you can automatically generate and renew SSL certificates using Infomaniak DNS zones directly from the SKUDONET Let’s Encrypt connector.

Thanks to our flexible integration model, new DNS providers can be added in just a few hours.

Improvements in User Roles and Access Control

  • System User Permissions in RBAC: Admins can now define which user settings can be changed by users themselves, offering more precise control over system access.
  • GSLB and DSLB Role Permissions: New permission options are now available in the RBAC module for both Global and Direct Server Load Balancing menus.

Bug Fixes Included

  • URL Rewrite and Header Removal Fixes: HTTP/S farm services now handle URL rewrites and HTTP header deletions correctly.
  • Improved Email Notifications: Password fields in the notifications module now support special characters.
  • GUI and Permission Alignment: The web interface now properly reflects the permissions defined in the RBAC module.
  • Wildcard Certificate Renewals: Let’s Encrypt autorenewal now correctly respects the “force” flag for wildcard certificates.

This release continues to enhance the stability, flexibility, and security that enterprise environments demand. Curious to see how SKUDONET Enterprise Edition can improve your infrastructure? Try it free for 30 days.

14 May, 2025 06:37AM by Nieves Álvarez

May 13, 2025

hackergotchi for Purism PureOS

Purism PureOS

FBI Raises Alarm on Encrypted Messaging Apps: A New Front in the Battle for Digital Privacy and National Security

As attackers continue breaching U.S. telecom infrastructure, the FBI has issued a stark warning: Americans should move away from insecure SMS messaging and toward encrypted platforms. But the story doesn’t end there. FBI’s Real Message: Encryption Must Be “Responsibly Managed” The FBI encourages users to adopt end-to-end encrypted apps like WhatsApp, Signal, and Messenger—but with […]

The post FBI Raises Alarm on Encrypted Messaging Apps: A New Front in the Battle for Digital Privacy and National Security appeared first on Purism.

13 May, 2025 06:11PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical + thanks.dev = giving back to open source developers

At Canonical we create a lot of open source, and we contribute features and fixes to upstream projects. We also support several large open source foundations such as the Eclipse Foundation, Cloud Native Computing Foundation (CNCF), and the Gnome Foundation.

In April, we added another way of giving back: using thanks.dev, we’re donating money to the smaller open source projects we depend on. Thanks.dev analyses your GitHub dependencies and splits your donation between them.

We’ve committed to donating US$120,000 to open source developers over the next 12 months, transferred at $10,000 per month. The distribution of funds is determined by thanks.dev’s algorithm, which splits the money based on which dependencies are used by more projects. As their website explains:

Our systems will (1) walk your repositories; (2) grab the manifest files; (3) collate your dependency tree up to 3 levels deep; and (4) trickle your donation breadth first across said tree.
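The description above can be sketched as a toy breadth-first split (my own illustration, not thanks.dev's actual code; the tree, names, and weighting scheme are made up):

```python
from collections import deque

def split_donation(tree, total, max_depth=3):
    # Walk the dependency tree breadth-first, up to max_depth levels,
    # counting how often each package is reached; then split the
    # donation proportionally to those counts.
    counts = {}
    queue = deque((dep, 1) for dep in tree.get("root", []))
    while queue:
        pkg, depth = queue.popleft()
        counts[pkg] = counts.get(pkg, 0) + 1
        if depth < max_depth:
            queue.extend((child, depth + 1) for child in tree.get(pkg, []))
    weight = sum(counts.values())
    return {pkg: total * n / weight for pkg, n in counts.items()}

# 'c' is needed by both direct dependencies, so it receives a larger share.
tree = {"root": ["a", "b"], "a": ["c"], "b": ["c"]}
print(split_donation(tree, 100.0))
# prints {'a': 25.0, 'b': 25.0, 'c': 50.0}
```

The real system of course works from manifest files on GitHub rather than a hand-written dictionary, but the "dependencies used by more projects get more" behaviour follows the same shape.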

Here are the top recipients, showing the first 10 out of 358 (the full list lives at thanks.dev/r/canonical):

We donated the first batch of funds in April, and we’ve already heard some great feedback from developers who received a pleasant surprise, for example:

Thanks to Canonical for sponsoring me through @thanks_dev! They’re the third company to sponsor me there :)

Mastodon link

I personally love how we’re giving to over 350 GitHub users and orgs. Each receives a relatively small amount, but it adds up over time. While very few open source developers do it for the money, the feeling of being recognised, knowing that someone cared enough to show it, has real meaning for an open source creator.

By default, thanks.dev splits the funds automatically according to how often a dependency is used. However, one can boost or reduce the weight at the programming language level and at the GitHub org level. For now, we’ve tweaked the language knobs to try to fairly represent our usage, and we’ll no doubt make other changes over time.

To take an example, one of the individuals that’s fairly high up on that list is gh/nedbat: he’s the author of coverage.py, which we use extensively.

Or take someone further down the list, gh/adamchainz: he’s the author of projects like time-machine, which is used by the Ubuntu website itself.

One of the things I love most about working for Canonical is being able to tell people, “the vast majority of the code we write is fully open source.” We work in the open on GitHub and use our own Launchpad project hosting system.

Speaking of Launchpad, thanks.dev doesn’t support Launchpad right now, but they’ve kindly agreed to add support for this Canonical system. They’ve been nothing but helpful, and we wish them well as they provide this excellent service to support open source developers.

It’s not just Canonical who supports open source projects in this way. We’ve joined the ranks of several other companies (and many individuals) that give back using thanks.dev:

Thanks.dev receives a small commission, of course, for their hard work (5%): they host the analysis and reports, contact potential donors regularly, and distribute the money (they often have to request that developers set up a way to receive funds).

A very common question is “how can I get paid to write open source?” Well, one way is to come work for Canonical! But if that’s not for you right now, go create an open source project, get people to start using it, and you may just see some donations coming your way.

13 May, 2025 11:52AM

hackergotchi for GreenboneOS

GreenboneOS

April 2025 Threat Report: The Consequences Are Real

In the early days of the digital age, hacking was often fame- or prank-driven. Fast forward to 2025: hacking has been widely monetized for illicit gains. Cybercrime is predicted to cost the global economy 10.5 trillion dollars in 2025. Globally, the trend of increasing geocriminality is pushing individual countries and entire economic regions [1][2] to make […]

13 May, 2025 10:04AM by Joseph Lee

hackergotchi for Volumio

Volumio

Volumio OS: Powering Innovation and Speed in the High-End Audio Market

Volumio OS: Powering Innovation and Speed in the High-End Audio Market
How Ferrum’s WANDLA 2.0.0 Firmware Update Showcases the Flexibility and Strength of the Volumio Ecosystem


In the fast-moving world of high-end audio, innovation is not only about exceptional sound quality—it’s about flexibility, speed to market, and creating products that surprise and delight. For brands like Ferrum, that’s exactly what Volumio OS brings to the table: an open, reliable platform that allows forward-thinking manufacturers to develop next-generation features faster than ever before.

At the upcoming High End Munich 2025 show, Ferrum will unveil WANDLA 2.0.0, a groundbreaking firmware update for its award-winning family of DACs. The highlight? Ferrum Streaming Control Technology (FSCT), a feature made possible in part by the openness and extensibility of the Volumio platform.

Ferrum Streaming Control Technology: A Glimpse Into the Future of Integrated Audio

With FSCT, Ferrum has reimagined what a DAC can do. Thanks to the 2.0.0 firmware update, WANDLA DACs can now control your streaming setup directly from the unit’s touchscreen and remote control. You get play/pause/skip commands and artist and track info displayed in real time—right from the device itself.

Here’s where Volumio OS comes in: FSCT was built to be fully compatible with Volumio’s plugin and control architecture, allowing seamless interaction between the WANDLA DAC and a Volumio-powered streamer. This wasn’t an afterthought—it was made possible because Volumio’s architecture is partner-friendly, modular, and built for this kind of innovation.

“The beauty of FSCT is that WANDLA can show information of the artist, title, and progress bar from any device running macOS, Windows, or the Volumio OS plugin.”

The Power of an Open Ecosystem

Volumio has spent years refining its software to serve the most demanding audio use cases. That experience, especially in the high-end market, means manufacturers can rely on Volumio as a Grade-A software foundation for their own products.

When Ferrum needed a robust platform that could keep up with their fast pace of innovation and demanding customer base, Volumio was the obvious choice. Here’s why:

  • Speed to Market: Volumio OS allowed Ferrum to implement FSCT and release it across their full WANDLA product family without building a streaming interface from scratch.
  • Hardware Flexibility: Whether using Raspberry Pi boards, CM modules, or custom hardware, Volumio adapts and performs with reliability.
  • Extensible Architecture: Volumio plugins enabled Ferrum to tailor features specifically for their use case while preserving compatibility with the wider Volumio ecosystem.
  • Proven Track Record: Trusted by audiophiles, integrators, and OEMs around the world, Volumio is already a mainstay in high-end audio—Ferrum’s adoption just reinforces that.

A Win-Win for Brands and Users Alike

For Ferrum, the payoff is enormous. Their users now get smarter, more intuitive control of their streaming experience—without needing to switch apps, fumble for phones, or look at multiple screens. The remote works. The screen shows everything. The experience is fluid.

For Volumio, it’s further proof that open platforms empower creativity. Ferrum’s team didn’t need to wait on closed SDKs or jump through proprietary hoops. They had what they needed to build fast, test thoroughly, and ship confidently.

Conclusion: Volumio OS as a Launchpad for High-End Innovation

The story of Ferrum’s WANDLA 2.0.0 release is a perfect showcase of what Volumio OS was built to do: enable world-class audio manufacturers to bring advanced features to market quickly, reliably, and without compromise.

If you’re an audio brand looking to push boundaries, delight users, and move fast, Volumio is your ally. As Ferrum has shown, the future of high-end audio is not just about better sound. It’s also about better systems. And Volumio OS is the system that lets you build yours.

 

References:
Get more information about this product.


Want to learn more about partnering with Volumio for your next audio product?
Visit volumio.org to discover how we support innovators in creating the next generation of listening experiences.

The post Volumio OS: Powering Innovation and Speed in the High-End Audio Market appeared first on Volumio.

13 May, 2025 07:31AM by joel

May 12, 2025

hackergotchi for Tails

Tails

Fighting against a nuclear project in France

Stop sign in front of a nuclear power plant

Robin is an activist fighting against a nuclear project in France. He has been using Tails as his default operating system for all his activism for the past five years, creating a clear separation between his activism (using Tails) and his personal life (using encrypted Debian).

At his place, many nomadic activists use Tails USB sticks instead of having personal computers, allowing them to maintain privacy while using shared devices.

After facing repression, encrypting all their data became a baseline practice in his group. He prefers training others on Tails because it's easier to implement than teaching full computer encryption.

Because Tails is easy to share, even people with low technical abilities are able to use it. For the security of a group, what matters is the lowest security level within that group.

Read our full interview with Robin.

12 May, 2025 11:19PM

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 891

Welcome to the Ubuntu Weekly Newsletter, Issue 891 for the week of May 4 – 10, 2025. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Rocks Public Journal; 2025-05-06
  • Other Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Proposed Agenda for the Communications Council Meeting for 2025-05-14
  • 3.0 solver now default in questing-proposed for apt-get and apt commands
  • Adopting sudo-rs By Default in Ubuntu 25.10
  • Plasma 6.3.5 update for Kubuntu 25.04 available via PPA
  • Ubuntu Unity 25.04 Brings Back Ubuntu’s Biggest Miss
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Cristovao Cordeiro (cjdc) – Rocks
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

.

12 May, 2025 10:29PM

Launchpad News: build_by_default=False for i386

More than 5 years ago, i386 was dropped as an architecture in Ubuntu. Despite this, i386 has remained selected by default as an architecture to build when creating new PPAs, snap recipes, or OCI recipes.

Today, we have disabled building for i386 by default. From now on, only amd64 will be selected by default when creating new PPAs, snap recipes, or OCI recipes. This change only affects newly created PPAs, snap recipes, or OCI recipes. Existing PPAs and recipes remain unchanged.

It’s worth noting that, although we have disabled building for i386 by default, it’s still possible to select i386 as a target architecture when creating new PPAs, snap recipes, or OCI recipes. In the future, we may yet decide to disable this altogether, but for now, the ability to target i386 remains.

Because targeting i386 is still possible (but requires intervention to enable), we don’t anticipate that this change will affect users, but if you are affected, please log a bug.

And as always, if you have any feedback, please let us know!

12 May, 2025 03:37PM

hackergotchi for GreenboneOS

GreenboneOS

CVE-2025-34028: Commvault Command Center Actively Exploited for RCE

CVE-2025-34028 (CVSS 10) is a maximum severity flaw in Commvault Command Center, a popular admin console for managing IT security services such as data protection and backups across enterprise environments. As of April 28th, CVE-2025-34028 has been flagged as actively exploited. CVE-2025-34028 also presents heightened risk due to the existence of publicly available proof-of-concept (PoC) […]

12 May, 2025 09:21AM by Joseph Lee

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: See a DeepSeek demo running on ESWIN Computing’s EIC77 series at RISC-V Summit Europe 2025 in Paris

Canonical, the publisher of Ubuntu, and ESWIN Computing have partnered to enable DeepSeek LLM 7B on the EIC77 series, showcasing ESWIN Computing’s powerful NPU, GPU and DSP running on Ubuntu. This development is part of a community development effort between Canonical and ESWIN Computing to bring the latest and greatest RISC-V technology to Ubuntu. See the demo live at ESWIN Computing’s booth at the RISC-V Summit in Paris, where leaders from across the RISC-V ecosystem will gather to discuss the latest innovations.

Ubuntu users can now make use of DeepSeek’s powerful reasoning model on ESWIN Computing’s cutting edge EIC77 series

With ESWIN Computing’s hardware accelerators such as NPUs, GPUs and DSPs, Ubuntu users can now make full use of the hardware resources available to further enhance performance. With DeepSeek, these resources enable rapid parameter transfers during processing, significantly enhancing the model’s performance. Test results from ESWIN Computing show that the EIC7700X SoC EVB development board can run the model at 7 tokens per second.

The availability of Ubuntu developer images on the EIC77 series, powered by SiFive’s P550 CPU, means that users can draw upon the latest open source tooling from the Ubuntu ecosystem, while benefiting from the robustness and stability that Ubuntu brings to novel use cases. This expands the possibilities for creative projects in AI, robotics, IoT, education, and beyond.

This success is built on collaboration between ESWIN Computing, SiFive and Canonical, and serves as a testament to the commitment to openness and collaboration within the RISC-V community.

See the demo at RISC-V Summit in Paris

The demo will be at ESWIN Computing’s booth at RISC-V Summit Europe. Check out the event details below:

Venue
La Cité des Sciences et de l’Industrie
30 Av. Corentin Cariou, 75019
Paris, France

Dates
May 13 – 15, 2025

Booth #31

Canonical’s commitment to RISC-V

At Canonical, we believe that it’s important to do our part to help RISC-V succeed and gain acceptance as an open standard. Ubuntu’s availability on ESWIN Computing’s EIC77 series is a testament to the continued collaboration between Canonical and the broader RISC-V community. 

The partnership brings all the ease of use, robust tooling and extensive packaging ecosystem that Ubuntu is known for to a new generation of RISC-V devices.

Join the community

We believe collaboration and community support drive innovation and we invite you to join the Ubuntu and ESWIN Computing communities to share your experiences, ask questions, and help shape the future of RISC-V. 

Get in touch

If you have any questions about the platform or would like information about our certification program, contact us.

12 May, 2025 08:15AM

May 11, 2025

hackergotchi for ARMBIAN

ARMBIAN

Armbian Updates: OMV support, boot improvements, Rockchip optimizations

This week, the Armbian development team pushed several noteworthy enhancements, with improvements spanning user experience, bootloader upgrades, and broader system support. Notably, this week saw the debut of OpenMediaVault in Armbian’s software installer, a move that brings plug-and-play NAS functionality to supported boards.

OpenMediaVault is a feature-rich platform that enables users to turn single-board computers into fully-fledged network storage devices. Thanks to a contribution by Igor, the integration is now available through the armbian-config interface, giving users a streamlined way to install and configure OpenMediaVault without needing to manually manage services or packages.

The usability of the software stack also saw a meaningful improvement. A previously persistent “Disable Wireless Hotspot?” prompt no longer appears when no hotspot has been enabled, reducing unnecessary friction during setup. This fix helps clarify Armbian’s default network behavior for users during first boot, particularly when configuring headless or appliance-style deployments.

On the hardware front, the Orange Pi 5 Max received a key upgrade: it now boots using mainline U-Boot. This transition replaces vendor-specific boot code with upstream-supported U-Boot, easing future updates and kernel integration. A related improvement was made to the PocketBeagle2, which migrated to extlinux for boot configuration—bringing it in line with Armbian’s broader standardization efforts.

Further enhancements came to the Rockchip64 platform. Previously missing Operating Performance Points (OPPs) were added to ensure proper voltage and frequency scaling across supported boards, which improves energy efficiency and stability under load. In addition, older workarounds for wireless firmware issues were removed, as upstream drivers have now resolved the compatibility concerns that necessitated them.

Finally, infrastructure refinement continued with the cleanup of unused or deprecated build artifacts, keeping the codebase lean and future-proof. The team also laid the groundwork for upcoming testing initiatives to ensure that new features like OpenMediaVault are validated across a wide array of supported devices.

For those interested in exploring OpenMediaVault or other curated software installations, the updated documentation is available in the Armbian Software User Guide.

The post Armbian Updates: OMV support, boot improvements, Rockchip optimizations first appeared on Armbian.

11 May, 2025 06:17AM by Didier Joomun

May 10, 2025

hackergotchi for SparkyLinux

SparkyLinux

Meru

There is a new application available for Sparkers: Meru.

What is Meru? Goals:

  • All your inboxes in one place
  • Never miss an important email
  • Enhanced protection against phishing
  • Privacy without compromise
  • Take back control of your inbox
  • Runs wherever you do

There are 2 versions of your choice:

  • Free – For personal use and single account
  • Pro – For professional use and…

Source

10 May, 2025 11:27AM by pavroo

May 09, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: New 50 TOPS DC-ROMA RISC-V AI PC ships with Ubuntu Desktop 24.04 LTS pre-installed

Canonical is excited to announce the launch of DeepComputing’s new 50 TOPS DC-ROMA RISC-V AI PC and AI PC Mini with Ubuntu Desktop 24.04 LTS pre-installed. The PC was launched in collaboration with Framework and is powered by ESWIN’s advanced RISC-V AI SoC, the EIC7702X, featuring eight of SiFive’s high-performance P550 CPU cores.

Built on the DC-ROMA RISC-V Mainboard II designed specifically for the Framework Laptop, this new AI PC is now available on the DeepComputing Store, with prices starting from $349.

DC-ROMA RISC-V AI PC

This launch marks a major milestone for the global developer community, offering the world’s first RISC-V-based AI PC with a chiplet dual-die connected AI SoC. Built for developers pioneering edge and AI-native applications, it delivers over 40 TOPS of local AI compute, enabling complex AI models—such as large language models (LLMs)—to run entirely on-device, without relying on the cloud.

DC-ROMA RISC-V AI PC Mini

At its core is a 64-bit, 8-core out-of-order SiFive P550 CPU, paired with:

  • A 40 TOPS NPU
  • A 512-bit-wide vector processor
  • Hardware support for 8K@50FPS video encoding
  • Support for up to 64GB LPDDR5 memory and NVMe SSD storage

This configuration provides ample power for advanced development, prototyping, and experimentation.

What sets the DC-ROMA AI PC apart is its commitment to local, secure, and private AI execution. Developers can run LLMs and other AI workloads through optimized APIs and open-source toolchains, enabling everything from on-device chatbots and AGI-driven media experiences to audio and video synthesis — all with full control over data, performance, and cost.

Canonical is excited to work with DeepComputing and its partners to build an open, modular, and future-ready developer ecosystem. The DC-ROMA AI PC represents the next evolution in sustainable and customizable RISC-V AI computing.

This achievement is the result of a collaboration across both hardware and software ecosystems. John Ronco, SiFive SVP of Product, remarked “This is a powerful product that demonstrates the power of RISC-V open-standard computing as both a development and commercial platform. With 8 of our flexible, high-performance P550 cores, and supported by a strong and growing RISC-V developer ecosystem, the DC-ROMA AI PC will help bring scalable, secure, and efficient AI solutions to life.”

Gordan Markuš, Director of Silicon Alliances at Canonical, added: “Canonical is proud to be part of this important step for the RISC-V ecosystem. Together with DeepComputing, Framework, ESWIN, and SiFive, we’re enabling developers to build next-generation AI solutions on RISC-V leveraging Ubuntu and open source software components. This collaboration highlights the strength of the RISC-V ecosystem in uniting companies to jointly shape the future of accessible and open innovation.”

“The DC-ROMA AI PC embodies the innovation and collaboration made possible by the RISC-V ecosystem. It will accelerate new applications, enabling engineers to natively develop code for RISC-V on RISC-V.” Andrea Gallo, CTO of RISC-V International, noted. “It’s about giving developers full-stack freedom, from hardware to AI applications.” Nirav Patel, Founder and CEO of Framework, said: “This launch is a powerful demonstration of what’s possible when modular design meets open architecture. We’re happy to see DeepComputing bringing a substantially more powerful RISC-V processor to developers worldwide through the Framework Laptop ecosystem.”

This milestone would not have been possible without the close collaboration of different hardware and software partners, the dedication of open-source contributors, and the growing global community supporting the RISC-V movement. Their collective efforts helped turn the vision of an open, local-first AI PC into a reality. Yuning Liang, Founder and CEO of DeepComputing, stated: “We built this AI PC to empower developers who believe in local-first AI, open innovation, and sustainable computing. It’s a foundational step toward a more open and privacy-respecting digital future.”

Availability

Pre-orders are now open, and prices start at $349. The DC-ROMA RISC-V AI PC will be showcased at the upcoming RISC-V Summit Europe and Computex Taipei 2025. Visit the DeepComputing booth for a hands-on experience with the AI PC!

For more information and to place an order, visit the DeepComputing Website and its online store.

09 May, 2025 09:30AM

May 08, 2025

Kubuntu General News: Plasma 6.3.5 update for Kubuntu 25.04 available via PPA

We are pleased to announce that the Plasma 6.3.5 bugfix update is now available for Kubuntu 25.04 Plucky Puffin in our backports PPA.

As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps), and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade

We hope you enjoy using Plasma 6.3.5!

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], and/or file a bug against our PPA packages [3].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.u
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

08 May, 2025 06:28PM

hackergotchi for Pardus

Pardus

Pardus Took Its Place at the TÜBİTAK Ada Stand at the 7th Productivity and Technology Fair

As the Pardus Project, we took our place at the TÜBİTAK Ada stand at the 7th Productivity and Technology Fair, held at ATO Congresium in Ankara from 24 to 27 May 2025, and met with our visitors.

08 May, 2025 10:37AM by Hace İbrahim Özbal

hackergotchi for ZEVENET

ZEVENET

DevSecOps and ADCs

Integrating security across all stages of the development cycle is no longer just a trend — it’s a necessity. In this context, the DevSecOps approach is gaining traction by promoting a model where Development (Dev), Security (Sec), and Operations (Ops) work together from the very beginning of a project. Within this model, solutions like the SKUDONET ADC are playing an increasingly relevant role.

1. What Is DevSecOps and Why Does It Matter?

DevSecOps is an approach that aims to integrate security practices into every phase of the software development lifecycle — from planning to production. Unlike traditional models, where security is added at the end, DevSecOps embeds it from the start, enabling:

  • Early detection of vulnerabilities
  • Lower remediation costs
  • Better collaboration across teams
  • More secure and agile deployments

To make this possible, teams need tools that not only support the process but also integrate seamlessly into pipelines, deployment environments, and monitoring systems.

2. The Role of the Application Delivery Controller (ADC) in DevSecOps

In a DevSecOps strategy, security is not a final step — it’s a continuous component throughout the application lifecycle. However, many threats only become visible when the application is live and exposed to real traffic. This is where the value of an ADC comes into play.

A modern Application Delivery Controller not only ensures that applications are available, responsive, and scalable. It also serves as a critical control point in production, helping enforce the security policies and standards defined earlier in the cycle.

For example, an ADC can:

  • Filter and inspect incoming requests by analyzing URLs, headers, payloads, or unusual patterns that may signal attacks (e.g., injections, scans, brute force).
  • Block malicious or unauthorized traffic, enforcing access policies and providing real-time protection.
  • Detect anomalies in expected behavior, supporting early identification of breaches or vulnerabilities missed during testing.
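To make the first of these concrete, here is a toy Python sketch of signature-based request inspection. The patterns and function names are invented for illustration and bear no relation to any real ADC's rule engine; production systems use far richer rule sets, such as the OWASP Core Rule Set.

```python
import re

# Toy attack signatures for illustration only; real WAF rule sets
# are far more extensive and context-aware.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # crude SQL injection marker
    re.compile(r"(?i)<script\b"),       # crude XSS marker
    re.compile(r"\.\./"),               # path traversal
]

def inspect_request(url, headers, body=""):
    """Return True if the request matches a signature and should be blocked."""
    for field in (url, body, *headers.values()):
        if any(sig.search(field) for sig in SIGNATURES):
            return True
    return False

print(inspect_request("/search?q=1 UNION SELECT password FROM users", {}))  # True
print(inspect_request("/index.html", {"User-Agent": "Mozilla/5.0"}))        # False
```

The key design point is that this check sits at the delivery layer, in front of the application, so a block or log entry happens without any change to application code.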

From this perspective, the ADC acts both as an active last line of defense and as an observability tool, offering visibility into what really happens when the application is exposed. This role is essential in DevSecOps, allowing teams to:

  • Validate that production deployments comply with defined security criteria
  • Dynamically adjust security policies without service interruption
  • Detect and respond to threats quickly, thanks to real-time traffic visibility

In short, ADCs strengthen the delivery phase in DevSecOps, becoming an integral part of a continuous security strategy without slowing down development cycles.

3. How SKUDONET Fits into DevSecOps Environments

SKUDONET provides an ideal platform for teams implementing DevSecOps practices, combining high performance, observability, and built-in security.

Automation and integration

SKUDONET can be managed through its RESTful API and skd-cli command-line tool, allowing smooth integration into CI/CD pipelines and orchestration systems.

Observability

Integration with platforms like Grafana and Nagios enables continuous monitoring of system health, application performance, and early detection of issues.

Integrated threat prevention and visibility

SKUDONET includes a built-in Intrusion Prevention and Detection System (IPDS), featuring a Web Application Firewall (WAF) that protects applications against threats such as SQL injection, cross-site scripting (XSS), and other OWASP Top 10 attacks. This module operates at the delivery layer, enabling real-time protection without modifying application code. It also generates detailed logs and attack reports, which can be used for auditing, compliance, or integration with external security analysis tools.

Secure performance and scalability

Designed for automated deployments across multiple environments, SKUDONET ensures a seamless user experience without compromising security.

4. Key Advantages for DevSecOps Teams

Integrating SKUDONET into the DevSecOps workflow brings concrete benefits:

  • Simplified management directly from the deployment server via skd-cli
  • Seamless integration with existing automation and monitoring tools
  • A solid security foundation for production environments, without sacrificing deployment speed

In a DevSecOps context, every tool matters. And modern ADCs like SKUDONET are no exception — they’ve become essential components in achieving continuous, secure delivery.

SKUDONET gives you full control over traffic, deep visibility into threats in production, and an architecture built for integration with advanced automated deployment practices.

See how SKUDONET fits into your DevSecOps strategy. Try SKUDONET Enterprise Edition free for 30 days and experience its ability to combine performance, security, and automation in real-world environments.

08 May, 2025 07:03AM by Nieves Álvarez

hackergotchi for Qubes

Qubes

Invisible Things Lab is hiring a Linux graphics stack developer to work on Qubes OS

Position: Linux graphics stack developer
Company: Invisible Things Lab
Location: Fully remote
Employment type: Full-time (part-time considered)
Salary range: €70,000–€90,000/year (full-time base salary with potential for bonuses)
(Note: For part-time contracts, the full-time base salary will be scaled accordingly.)

Job description

We’re seeking a talented developer with a focus on the Linux graphics stack in a virtualized environment, specifically in Qubes OS. Qubes OS is a free and open-source security-oriented operating system that uses the Xen hypervisor to securely compartmentalize the user’s applications, data, and devices into isolated virtual machines called “qubes” so that the compromise of any one qube does not affect the rest of the system.

This role presents exciting challenges and the opportunity to work on pioneering solutions that have never been attempted before. As a key member of our team, you will lead the migration of the Qubes OS graphics stack from X11 to Wayland, as well as implement support for rendering hardware acceleration, all while maintaining the robust security properties for which Qubes OS is known.

Responsibilities

  • Lead the migration of the Qubes OS graphics stack from X11 to Wayland
  • Implement support for rendering hardware acceleration
  • Ensure the strong security properties of Qubes OS are preserved throughout the development process
  • Collaborate with team members and contribute to open-source projects

Requirements

  • Strong knowledge of the Linux graphics stack, especially Wayland (familiarity with X11 a plus)
  • Basic understanding of kernel drivers and virtualization
  • Proficiency in the C programming language
  • Previous contributions to an open-source project
  • Experience with Git
  • Ability to work independently, proactively solve problems, and seek assistance when needed

Preferred skills

  • Rust
  • Python
  • RPM packaging
  • DEB packaging

What we offer

  • Fully remote work with flexible hours
  • Long-term contract opportunities
  • A collaborative and innovative work environment

How to apply

If you’re passionate about pushing the boundaries of technology and want to be part of a groundbreaking project, we would love to hear from you! Please send your CV or résumé to jobs[at]invisiblethingslab[dot]com.

Join us in shaping the future of secure computing with Qubes OS!

08 May, 2025 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E347 Natal Das Extensões

Canonical led Miguel to rediscover the comfort of IKEA's sofa-bed range, and meanwhile Diogo brought a bag full of presents, including reflections on digital rights and privacy in election season, a hoarder's obsession with archives, papers, invoices, old magazines and newspapers covered in digital dust, and a heap of Firefox extensions for every use and taste - as if this were Podcast Firefox Portugal, the podcast about Firefox, Mozilla and other things.

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator and comics author. You can get to know Shizamura better on Ciberlândia and on her website.

08 May, 2025 12:00AM

May 07, 2025

hackergotchi for Purism PureOS

Purism PureOS

What Is PureOS? A Beginner’s Guide for iOS, Android, and Windows Users

  In today’s world, privacy and control over your digital life have become rare luxuries. Every tap, swipe, and click on most smartphones and PCs is tracked, analyzed, and monetized—usually without your explicit consent. That’s where PureOS comes in. So What Is PureOS? PureOS is a privacy-focused, secure, and open-source operating system developed by Purism. […]

The post What Is PureOS? A Beginner’s Guide for iOS, Android, and Windows Users appeared first on Purism.

07 May, 2025 03:21PM by Rex M. Lee

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: CRA compliance: Things IoT manufacturers can no longer do under the CRA (and what to do instead)

I’ve written about the EU Cyber Resilience Act (CRA) on our Canonical blog a few times now, and I think now’s the perfect time to talk about the implications of this new regulation and what it means for IoT and device manufacturers on the practical level of how they design and build Products with Digital Elements (PDEs).

In this blog, I’ll give you a thorough overview of common IoT manufacturer and PDE developer practices that need immediate attention, and how to change or improve these practices so that your work and PDEs can keep their place on the EU market with full CRA compliance.

What you can’t do under the CRA (and what to do instead)

In general, the things you can and cannot do under that CRA depend on how you and your PDEs are classified or categorized under this new piece of legislation. If you’re not familiar with the CRA’s wording, classifications, and requirements, you can catch up on the specifics by reading the previous articles I wrote here:

However, outside of the category- and classification-specific requirements of the CRA, this regulation introduces an extremely broad set of changes to IoT and PDE cybersecurity and vulnerability management that will affect everyone, regardless of where they fall under the CRA’s specific wording. 

Let’s take a closer look:

No more passing the buck

No more passing security responsibility to your downstream users, or expecting that your upstream providers will take care of vulnerabilities. In fact, building and shipping things often means you will be categorized as a manufacturer, bringing an increased level of compliance assessment and higher demands for PDE compliance.

If you don’t want to bear the brunt of Manufacturer compliance, you should find a supplier willing to assume that responsibility. 

You can no longer hide behind documentation – or treat it as optional

You can no longer hide behind documentation. If there are vulnerabilities, limitations, or flaws in the PDE, or specific outlines for its use, you cannot simply expect users to have fully read your documentation and follow these hard-to-find instructions to use your product safely. 

On a practical level, it's no longer enough to document vulnerabilities and then warn users about them (for example, telling users not to use the device on unsecured networks, to change the password before use, or to manually disable certain ports or features before use): you need to patch them yourself.

And when it comes to documentation, the CRA outlines stricter requirements for how to approach your docs and make them accessible. In general, the CRA means you will have new documentation requirements, with more communication around where this documentation can be accessed, and you’ll need to produce a software supply chain and formal software bill of materials (SBOM) that is accessible and machine readable. 

As a minimum, you need to have the following documented and available for the public and EU authorities:

  • A description of the design, development, and vulnerability handling process
  • An assessment of cybersecurity risks
  • A list of harmonised EU cybersecurity standards the product meets
  • A signed EU Declaration of Conformity that the above essential requirements have been met
  • A Software Bill of Materials (SBOM) documenting vulnerabilities and components in the product
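As a rough sketch of what "accessible and machine readable" can mean in practice for an SBOM, here is a minimal skeleton following the CycloneDX JSON layout. The component, version, and purl shown are invented examples; a real SBOM lists every component and its dependencies.

```python
import json

# Minimal CycloneDX-style SBOM skeleton (illustrative only; the
# component, version, and purl below are invented examples).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:deb/ubuntu/openssl@3.0.13",
        }
    ],
}

# "Machine readable" means any tool can parse this back out of the JSON.
print(json.dumps(sbom, indent=2))
```

SPDX is the other widely used SBOM format; either satisfies the machine-readability expectation, as long as the document is published somewhere users and authorities can actually find it.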

You can no longer hide behind intention

It’s not just documentation that you can’t use as a crutch or shield – intention is out too.

This means that you can’t defend flaws, design issues, or vulnerabilities as intentional design choices. For example, if your device has ports, features, or functionality that could reasonably be used to access the device or connect to networks, you need to take steps to mitigate the risks and attack vectors that these elements pose. 

In the next section, we’ll go through some of the practical steps you can take to address device cybersecurity.

The security basics are no longer optional

Many of the requirements of the CRA simply formalize cybersecurity practices and security features that should be considered minimum standards. By this I'm referring to ending practices like shipping with known vulnerabilities, expecting users and consumers to secure your devices after purchase, ignoring cybersecurity fundamentals (such as avoiding default admin credentials), or hiding behind obscure or inadequate documentation.

Some of these cybersecurity essentials include: 

  • Ensuring that whatever you’re building is as secure as it can be. It must have minimal attack surfaces. 
  • Hardening your device or product. Its data must be encrypted or protected and it must prevent unauthorized access.
  • Preventing downtime. Your device must keep working, even under DDoS attack, and it mustn’t interrupt other devices, even when attacked with exploits.
  • Keeping track of activity. Your device needs to be able to provide security data by monitoring or logging changes in the device. 
  • Proactive patching. Your product needs to be able to receive security updates or rollbacks. This includes direct or remote updates, user notifications about updates, and the ability to roll back updates or reset the product to a factory/default state.

Even without the CRA compelling you to meet higher cybersecurity standards, you should be meeting these basic standards in PDE security design. Here are some steps you could take to ensure your PDEs are as robust and secure as possible before they reach the market:

  • Implement a Zero Trust Architecture wherever possible
  • Ensure that your authentication, authorization, and access control are fully secure (and that you have control over your credentials)
  • Use Secure by Default configurations
  • Minimize your attack surface – if your device, system, or organization isn’t actively using something (whether it’s a port, component, or package), then disable it by default until it’s needed
  • Ensure proper use of cryptography to ensure data is protected at rest and in transfer, that traffic is encrypted, and that you avoid plaintext or cleartext data
  • Validate all input and handle all exceptions
  • Secure all individual components and their dependencies, not just the stack
  • Minimize the access permissions of apps and systems, and design your baseline to stop server-side request forgery from Day Zero
  • Use automated security patching software to ensure that validated and authenticated security updates, CVE fixes, and other patches are applied promptly and automatically
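As a tiny illustration of the "validate all input and handle all exceptions" item above, a fail-closed allowlist check might look like the following Python sketch; the field name and pattern are invented for the example.

```python
import re

# Fail-closed allowlist: accept only usernames matching a strict pattern
# and reject everything else (the pattern is an invented example).
USERNAME_RE = re.compile(r"[a-z][a-z0-9_-]{2,31}")

def parse_username(raw):
    """Return the username if it is valid, else raise ValueError."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

print(parse_username("alice_01"))  # accepted
try:
    parse_username("Robert'); DROP TABLE users;--")  # rejected
except ValueError as exc:
    print(f"rejected: {exc}")
```

The point of failing closed is that anything not explicitly allowed is rejected, which is the same posture as disabling unused ports and components by default.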

In short, the gold rush of taking unsafe IoT devices to market is over: consumers have higher expectations for the security and privacy of the devices they use, and if your products don't meet them, it will only lead to disaster down the line. We understand the serious impact that CVEs have on users and businesses alike, which is why we make a strong commitment to patching critical CVEs within 24 hours through Ubuntu Pro.

Ubuntu Pro gives organizations a hands-free, automated way of receiving vital software packages and security updates for up to 12 years, ensuring that they’re covered no matter what new vulnerabilities or regulatory compliance comes up. 

You can no longer ignore products after launch

Another priority you should focus on is patching and vulnerability management for your devices and software. One of the CRA’s primary requirements is to ensure that your devices can be securely updated against new vulnerabilities. Your updates must be free and sent out as soon as vulnerabilities are discovered, along with information to users about what actions they should take.

When this happens, you need to provide:

  • A description of the vulnerabilities and their severity
  • Information allowing users to identify the product with digital elements affected
  • The impacts of the vulnerabilities
  • Information helping users to remediate the vulnerabilities
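For illustration only, the four items above map naturally onto a machine-readable advisory. This Python sketch loosely follows the OSV schema layout; every identifier, product name, and version in it is invented.

```python
import json

# Hypothetical advisory covering the four points above (OSV-style layout;
# the CVE id, product name, and versions are all invented).
advisory = {
    "id": "CVE-2025-00000",  # invented identifier, never assigned
    "summary": "Authentication bypass in example-gateway web UI",
    "severity": [
        {"type": "CVSS_V3",
         "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"},
    ],
    "affected": [
        # lets users identify whether their product is affected
        {"package": {"name": "example-gateway"},
         "versions": ["1.0.0", "1.1.0"]},
    ],
    "details": (
        "Impact: remote attackers can gain administrative access. "
        "Remediation: upgrade to firmware 1.2.0 or later."
    ),
}
print(json.dumps(advisory, indent=2))
```

Publishing advisories in a structured format like this also makes the coordinated-disclosure and documentation obligations easier to audit later.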

What’s more, these patching and security update efforts must be long term, and cover the PDE’s entire lifecycle. You must regularly test the product, and fix vulnerabilities immediately – and once a fix has been applied, you need to publicly disclose what the fixed vulnerability was (in line with the new coordinated public disclosure policy you need to have, under the CRA). 

And for a maximum of 5 years (or the product lifespan, whichever is shorter), you'll be required to recall or withdraw products that don't meet the conformity standards of the CRA.

Get vital CRA compliance insights in our CRA compliance guide video

No more hidden or gray dependencies

Whether you’re classified as a manufacturer or not, you still need to think about your software supply chain like a manufacturer does. This is because the CRA introduces new requirements for documentation, transparent software supply chains, and a software bill of materials to show your software is securely sourced. As a minimum, you should be consuming trusted open source only, or only sourcing packages from trusted suppliers. 

If you’re unsure about your software supply chain and its ability to meet the CRA’s regulatory standards, documentation requirements, vulnerability disclosure demands and transparency expectations, you should evaluate your service and software providers to choose those who make it effortless to meet your CRA obligations. 

Generally, this means picking a vendor who meets one of the following criteria:

  • Has a CE marking
  • Can provide supply chain certification
  • Has decided to take on the category of ‘Manufacturer’

Our recommendation is to consume packages or software updates from large and trusted suppliers who have taken on responsibility for CRA compliance. This means that you should be sourcing versions of your open source software (or security patches for that software) directly from a vendor who has decided to take on the category of ‘Manufacturer’ in the software supply chain.

At Canonical we understand how important this is, which is why we’ve committed to meeting the manufacturer responsibilities for many of our products. The many, many tools and products we develop and maintain at Canonical – Ubuntu; our distributions of Kubernetes, MicroCloud, and OpenStack; and more – are designed with security in mind, supported through security maintenance and vulnerability patching, and aligned with the regulatory oversight in the CRA. On top of this, services like Ubuntu Pro for Devices ensure your devices will receive security maintenance for up to 12 years. 

No more “market-first” approach

The days of “move fast and break things” are over. Under the CRA, you cannot hyperfocus on market timing or a launch date and ship an MVP that skimps on security design and long-term support. Instead, you need to build on a strong foundation for security and support for your packages and software that extends for many years past your launch date.

You should be reassessing your choices – of OS, development environment, and software vendors – to meet these new requirements. The systems you do choose should provide both a robust baseline of security and the long-term security support the CRA requires – as well as a minimized attack surface that reduces the number of attack vectors and vulnerabilities as much as possible.

Luckily, this has benefits that go beyond security: a minimized attack surface, device-optimized OS, or containerized build keeps everything to the smallest footprint possible – which means faster performance, lower device specification requirements, and cheaper manufacturing costs. In fact, Ubuntu Core (the embedded Linux OS for devices) takes these requirements and benefits to heart: it acts as a pared-down, strictly confined flavor of Ubuntu for embedded devices. Ubuntu Core has an optimized profile that’s perfect for devices that have limited specifications or hardware but which still demand robust security, long-term maintenance, and high levels of on-device performance. 

Summary of what you need to change to meet CRA compliance

In conclusion, the CRA means that a lot of things have changed. Gone are the days of hiding behind obscure documentation, passing the buck to manufacturers or users, or launching market-first, “fire-and-forget” devices with unknown dependencies and no support. However, by strengthening the security of your PDEs with basic cybersecurity practices, consuming trusted packages and security updates from a manufacturer-category supplier with a long-term support program, and building a clear list of your software supply chain and dependencies, you can meet these new requirements head-on and access the EU market for years to come.

To sum everything up, if you want to meet the new challenges and requirements of the CRA head-on, you need to follow these six simple steps:

  • Adopt best practices for PDE hardening and device security
  • Implement cybersecurity hardening to the greatest extent possible
  • Conduct your compliance assessment and testing as soon as you can
  • Document everything and make it publicly available, along with SBOMs that show your software composition and dependencies 
  • Take a customer-first approach, and beware of rushing an MVP to market
  • Pick vendors who take on manufacturer responsibility for packages/software

To find out more about how Canonical can help you to meet the EU Cyber Resilience Act requirements for your devices, visit our comprehensive CRA webpage at https://canonical.com/solutions/open-source-security/cyber-resilience-act or fill out this form to contact our team of compliance experts.

Learn more

Find out how you can design and support CRA-ready PDEs by bringing up to 12 years of automated security patching to your device by visiting www.ubuntu.com/pro

Learn how Ubuntu Core is ideal for your PDEs, IoT devices, and all embedded systems by visiting www.ubuntu.com/core 

More reading

07 May, 2025 02:36PM


May 06, 2025

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.2.1-28 Released

This release of Clonezilla live (20250504-plucky) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 20250303-oracular

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Ubuntu Plucky Puffin (25.04) repository (as of 2025/May/04).
  • Linux kernel was updated to 6.14.0-15.15.
  • Partclone was updated to 0.3.36.
  • Added packages libfsapfs-utils, usb-modeswitch and fscrypt to the Clonezilla live system.
  • Added new program ocs-find-live-key and updated ocs-put-log-usb so that log files can be copied even when the Clonezilla live USB drive is used in "To RAM" mode. It finds the USB key that (1) is formatted as vFAT and (2) contains the file "Clonezilla-Live-Version", and treats it as the Clonezilla live USB drive.
  • ocs-live-repository: added dev=///OCS_LIVE_USB so that the Clonezilla live USB drive can be assigned as the image repo, especially when the drive is booted in "To RAM" mode. The Clonezilla live USB drive has to be formatted as vFAT, since it is used to boot in both uEFI and MBR modes; because vFAT has many restrictions, it is not a good choice for an image repo. It is better to use UUID or LABEL to assign the image repo.
  • Improved the saving dialog menu and prompt.
  • Added a mechanism to mitigate the random order of block devices. It can be enabled by adding the boot parameter "ocs_1_cpu_udev", i.e., when udev is started in initramfs, only 1 CPU is online. After that, all CPUs will be enabled.
  • drbl-ocs.conf: enabled btrfs support in drbl-ocs.conf since Partclone has been updated to 0.3.36 which supports btrfs v6.13.
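The detection rule used by ocs-find-live-key (described in the list above) can be sketched as follows. This is an illustrative Python model of the logic only, not the actual shell implementation, and the device data structure here is hypothetical:

```python
def find_live_key(devices):
    """Return the first device that looks like a Clonezilla live USB key.

    A device qualifies if (1) it is formatted as vFAT and (2) its
    filesystem contains the marker file "Clonezilla-Live-Version".
    `devices` is a list of dicts describing block devices; this shape
    is a hypothetical stand-in for the real probing done via blkid.
    """
    for dev in devices:
        is_vfat = dev.get("fstype") == "vfat"
        has_marker = "Clonezilla-Live-Version" in dev.get("files", [])
        if is_vfat and has_marker:
            return dev["name"]
    return None

# Hypothetical example input:
devices = [
    {"name": "/dev/sda1", "fstype": "ext4", "files": []},
    {"name": "/dev/sdb1", "fstype": "vfat", "files": ["Clonezilla-Live-Version"]},
]
print(find_live_key(devices))  # → /dev/sdb1
```

Both conditions must hold: a vFAT partition without the marker file (e.g. an ordinary flash drive) is skipped, which is what lets the log-copying work even when the live system has been loaded "To RAM" and the USB drive is no longer mounted as root.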

BUG FIXES

  • Disabled the device-list cache mechanism, since it caused blkid to be run too many times, which made the program run slowly.

06 May, 2025 01:14PM by Steven Shiau

hackergotchi for ARMBIAN

ARMBIAN

Armbian Development Highlights – Early May 2025

Early May brought another round of steady advancements to the Armbian project, with progress in U-Boot updates, board enablement, firmware fixes, and notable improvements to Armbian’s growing catalog of self-hosted applications.

Bootloader and Firmware Enhancements

Several platforms saw significant U-Boot improvements. The Cherryba-M1 now benefits from an upgraded U-Boot and reorganized patch structure, thanks to Igor‘s work on upgrading Cherryba-M1 to latest u-boot and moving patch to new folder. Andy bumped U-Boot to v2025.04 for the Lubancat2, keeping the board current. The Radxa Rock 4 SE also migrated to this version, where Niklas refined its configuration and boot behavior.

Meanwhile, the Khadas VIM3 received a broader bootloader overhaul led by Ricardo, introducing SD-first boot order, squashfs and fileenv support, and enhanced compatibility with Home Assistant OS in a comprehensive update to U-Boot for Khadas VIM3.

Older configurations didn’t go unnoticed: Igor removed deprecated ATF tags for sun50iw9 / H61x, while Olaf pushed the sunxi64 platform to the latest LTS version of ATF.

Expanding Device Support

Armbian continues to grow its ecosystem. Rolf introduced official support for the Banana Pi M2+, making it easier for users to deploy on this compact board. On the RISC-V side, libiunc brought the kernel for the StarFive2 platform up to v6.6, ensuring ongoing support and compatibility.

Installer Improvements and Runtime Fixes

Improving install experience, Igor Velkov added Btrfs root subvolume support when installing to NVMe, paving the way for better snapshot and maintenance workflows. Igor also corrected missing Broadcom firmware for Raspberry Pi boards to fix wireless support and suppressed firmware warnings related to built-in Realtek USB network drivers, helping clean up logs and reduce confusion.

Self-Hosted App Catalog Grows

The list of installable apps during Armbian setup has expanded. Two powerful platforms are now just a selection away:

  • Immich, a self-hosted photo and video backup system, was added with the introduction of Immich to configNG.

  • NetBox, a leading infrastructure resource management solution, joined the roster in the addition of NetBox to Armbian configNG.

Both are available via the configNG provisioning interface.

Deprecations and Housekeeping

Support for legacy distributions has now ended: Debian Bullseye and Ubuntu Focal and Jammy will no longer receive repository updates, as noted in the userspace status change to EOS.

Elsewhere, dependency and CI maintenance continued. Automated tools like Dependabot bumped packages such as setuptools and GitHub actions for changed-files, while amazingfate restored support for the AIC8800 Wi-Fi driver by reverting a mistaken disable.

Further Reading

Explore the full range of updates in the official Armbian snapshot.

The post Armbian Development Highlights – Early May 2025 first appeared on Armbian.

06 May, 2025 10:47AM by Didier Joomun

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: IBM LinuxONE 5 and Ubuntu Server, a great combination from day one

Today, IBM announced the launch of their latest server: the new IBM LinuxONE Emperor 5. This fifth generation redefines IBM’s LinuxONE system as their most secure and high-performing Linux computing platform for data, applications and trusted AI. 

Canonical supports LinuxONE Emperor 5 with Ubuntu Server. Ubuntu is cost-efficient and easy to install and manage on the servers – all whilst enabling the most up-to-date LinuxONE hardware features. Ubuntu Server for IBM Z and LinuxONE is ready for deployment from day one. 

This blog provides an overview of the IBM LinuxONE Emperor 5’s key features, and will demonstrate why Ubuntu Server is the right choice of software to install.

The new system was developed towards three main aspects and goals:

1. Industry-leading cyber security and privacy

IBM LinuxONE Emperor 5 lets users deploy confidential containers, and use quantum-safe encryption which can be scaled and unified across an enterprise. 

Ubuntu Server for IBM Z and LinuxONE is also designed with security in mind, making it the perfect fit for the security features of the IBM LinuxONE 5 generation hardware.

Canonical uses high-security cryptography algorithms and disables weak cryptography algorithms by default. Ubuntu Server for IBM Z and LinuxONE was one of the very first Linux distributions to introduce Secure Execution support (since 20.04), and provides support for pervasive encryption in all dimensions, be it data at-rest, in-flight, or in-use. It also supports quantum-safe cryptography, which can be enabled with only two commands, making the IBM LinuxONE 5 generation’s quantum-safe encryption transparent and easy to use.

Canonical’s Kernel Livepatch service (Pro) delivers kernel patches for high or critical vulnerabilities that can be applied to a running kernel, without needing immediate downtime, so the system can continue running.

Ubuntu is built for compliance, and has various security certifications including FIPS. 

These are only the beginning – you can learn more about the wealth of inherent Ubuntu Security Features in our wiki.

2. Optimized IT for energy and cost savings

IBM’s enterprise class of LinuxONE systems is renowned for its large-scale workload consolidation which can result in significant savings on energy, space, and operational costs.

These strengths improve with each new hardware generation, as overall resources and performance increase whilst maintaining the system’s maximum energy consumption at a steady level. IBM LinuxONE Emperor 5 includes up to 208 customer cores, up to 64TB of memory, and introduces a simplified system IO architecture. 

By choosing Ubuntu Pro to run on the IBM servers, you can get Expanded Security Maintenance for open source at a transparent rate with our unique drawer-based pricing. Learn more about the benefits of an Ubuntu Pro subscription here.

3. Built-in AI, engineered for better outcomes

On the IBM LinuxONE Emperor 5, you can develop AI models in a hybrid cloud and run inference alongside data and applications within a trusted execution environment (TEE), enhance prediction accuracy using a multiple-model AI approach with the integrated AI accelerator in Telum II, and scale AI while maximizing energy efficiency. In addition, the previously announced Spyre is expected to become available in 2025, and will enable Generative AI applications to be developed and run.

Ubuntu is a popular choice among AI and machine learning researchers. It balances ease of use, compatibility with the larger AI stacks and popular frameworks, and support from the open source community or commercial support through Ubuntu Pro.

Canonical is working hard to ensure that Telum II (of LinuxONE 5) and the upcoming Spyre Accelerator card are supported on Ubuntu Server, allowing users to deploy a multiple-model AI approach as well as hardware-assisted generative AI in the future.

After 9 years of LinuxONE and IBM Z (s390x) platform support, Canonical is proud that Ubuntu Server plays a central role in open source workloads, helping to make the LinuxONE 5 generation easier to use, more secure, more reliable, and available to all at scale.

Download Ubuntu Server for IBM Z and LinuxONE

For more information about IBM LinuxONE Emperor 5, visit www.ibm.com/products/linuxone-5

Or if you have any questions, you can contact us directly:

Valerie Noto, Alliance Business Director – valerie.noto@canonical.com

Frank Heimes, Staff Engineer – frank.heimes@canonical.com

06 May, 2025 03:59AM

May 05, 2025

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 890

Welcome to the Ubuntu Weekly Newsletter, Issue 890 for the week of April 27 – May 3, 2025. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly News #393
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • UbuCon Korea 2025 참가등록 안내 [Registration Guide]
  • LoCo Events
  • Patch Pilot Hand-off 25.10
  • Ubuntu Server Gazette – Issue 3 – Document with a little help from my friends
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


05 May, 2025 10:14PM by guiverc

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu IoT Day in Singapore – Unlock compliant and scalable innovation in edge AI

Singapore | May 27, 2025 | Full-day event

How do you build robust, performant edge AI infrastructure? This is the question organizations are asking themselves when looking to capitalize on the opportunity of edge AI.

Ubuntu IoT Day is your opportunity to find out – and it’s coming to Singapore! Join us on May 27 to discover how Canonical and our IoT partners are powering innovation in edge computing, AI, and secure IoT at scale.

Wondering what to expect? Here’s a quick rundown:

  • 150+ attendees from Southeast Asia’s embedded and industrial tech community
  • Technical sessions, live demos, and real-world use cases
  • Expert speakers and partner showcases
  • The latest in regulation-ready open source solutions for IoT and Edge AI

This full-day event brings together system integrators, hardware manufacturers, and software architects to explore the latest advancements in embedded Linux, edge computing, and security.

Save your seat

Join us in shaping the future of edge AI with Ubuntu 

Open source is already at the heart of AI – a 2025 report by McKinsey revealed that, across the board, more than half of organizations are already using open source AI tooling. 

But why stop at tooling? As AI applications move closer to the edge, businesses need powerful, secure, and reliable platforms to stay ahead. Open source software provides you with the latest innovations, with the flexibility and control required to be both fast-moving and compliant. Ubuntu provides all the benefits of the open source ecosystem, wrapped up in enterprise-grade support. 

So what does this look like in practice? Ubuntu IoT Day is your opportunity to engage with Canonical engineers, explore real-world use cases, and connect with partners building the next generation of smart, compliant devices.

We’ll be covering key topics like:

  • Implementing edge AI at scale with open-source solutions
  • Accelerating time-to-market with certified hardware and Ubuntu Core
  • Navigating security regulations (CRA, NIS2, IEC 62443)
  • Managing device fleets with real-time monitoring and long-term support

Reach CRA cybersecurity compliance 

During the event, we will be available to help you navigate the requirements of the Cyber Resilience Act (CRA) and ensure your Ubuntu-based devices meet compliance standards. As the CRA introduces new cybersecurity regulations impacting device design, deployment, and maintenance – from secure software development to long-term vulnerability management – understanding these changes is crucial to maintaining market access. 

Join us to learn how to:

  • Unlock additional security updates, hardening profiles, and get up to 10 years of security maintenance with Ubuntu Pro for Devices, our enterprise subscription.
  • Harness Landscape, Canonical’s systems management tool, to monitor fleets and track vulnerabilities in real-time.
  • Harden your Ubuntu-based devices by implementing security best practices, such as kernel hardening, access controls, and secure update mechanisms.
  • Explore how Ubuntu Core brings the benefits of Ubuntu to embedded devices through a minimal, strictly containerized architecture.

For those looking for complete security and industrial-grade deployments, Ubuntu Core is a reliable embedded Linux OS for the Internet of Things (IoT), devices, and edge systems. It encapsulates every system component, along with the system itself, into a set of containers. These containers operate with strict kernel-enforced confinement, ensuring security and stability. Ubuntu Core supports reliable over-the-air updates, minimizing disruptions. Additionally, failsafe rollbacks provide a safety net, making it ideal for intelligent edge and IoT applications.

Experience innovation first-hand from our partners 

Ubuntu is at the heart of a well-established hardware ecosystem. We work directly with silicon vendors and ODMs to certify their devices, ensuring that innovators like you get the full, performant Ubuntu experience, out of the box. 

 But don’t just take our word for it. Join us at Ubuntu IoT Day and learn how Canonical works closely with partners to deliver devices that are optimized for performance, security, and regulatory compliance — ready for real-world deployment.

At the event, you’ll be able to:

  • Learn how Canonical’s IoT program bridges the gap between silicon vendors, ODMs, and end customers, and accelerates your IoT development
  • See live demos on devices running Ubuntu from Aaeon, Adlink, Advantech, ASUS IoT, Everfocus and Qualcomm.
  • Join key sessions covering topics such as AI and robotics development from Mediatek, ARM, Aaeon, Advantech and more.
  • Meet our silicon and ODM partners who are shaping the future of industrial AI.

Join us

Whether you’re building IoT gateways, deploying AI at the edge, or managing large fleets of devices with Ubuntu, we’re here to help you build smarter and ship faster. 

📅 Save the date: May 27, 2025 – Ubuntu IoT Day 

📍 Location: Shangri-La Hotel, Singapore

🔗 Register now!

Let’s keep in touch 

Your learning journey doesn’t end at Ubuntu IoT Day. Discover more about defining your software stack for embedded devices in our latest whitepaper.

Which embedded Linux distribution should you choose? In this whitepaper, you can learn how to ensure your embedded devices meet the requirements of the Cyber Resilience Act (CRA), and explore the critical considerations for device manufacturers, developers, and other stakeholders when choosing between custom-built Linux distributions based on the Yocto Project and commercially supported solutions like Ubuntu Core.

05 May, 2025 11:42AM