Tech Biweekly Report | Taskbar now supports locking + kernel performance tuning: genuinely useful optimizations!
17 June, 2025 08:21AM by xiaofei
Canonical, the publisher of Ubuntu and trusted open source solutions provider, is proud to sponsor HPE Discover Las Vegas 2025. Join us from June 23–26 to explore how our collaboration with Hewlett Packard Enterprise (HPE) is transforming the future of enterprise IT, from virtualization and cloud infrastructure to AI/ML workloads.
Register for HPE Discover Las Vegas 2025
Stop by our booth to engage with our team and get a closer look at our latest innovations. Here’s what’s in store:
Visit our booth and attend sessions led by industry experts covering a range of open source solutions. Plus, all attendees will receive a special gift!
Discover how to gain control over your infrastructure, optimize costs, and automate operations while building a flexible, secure foundation that scales seamlessly with your business growth – whether integrated into your GreenLake multi-cloud strategy or deployed as a standalone private cloud.
From Kubeflow for MLOps to Charmed Kubernetes for orchestration. See how open source AI infrastructure drives innovation while reducing complexity and costs.
Learn how Ubuntu powers HPE VM Essentials to deliver the simplicity, security, and scalability your business demands – making enterprise virtualization accessible to organizations of every size.
As a strategic partner of HPE and a member of the HPE Technology Partner Program, Canonical brings decades of open source innovations to enterprise-grade solutions. Together, we deliver a full-stack experience — with integrated, secure, and cost-effective platforms that scale with your business.
Through our joint collaboration, organizations gain:
Know more about our offerings and how Canonical and HPE can propel your business forward.
Want to see more?
Stop by the booth #2235 to speak to our experts.
Are you interested in setting up a meeting with our team?
Reach out to our Alliance Business Director:
Valerie Noto – valerie.noto@canonical.com
Welcome back! If you've been following our PureOS Crimson milestones, you'll see that the few remaining tasks relate to providing ready-to-flash images for the Librem 5.
The post PureOS Crimson Development Report: May 2025 appeared first on Purism.
16 June, 2025 04:26PM by Purism
In a recent interview republished by Yahoo Finance, Purism CEO Todd Weaver explained why the Liberty Phone, Purism’s secure, made-in-the-USA smartphone, is exempt from U.S. tariffs targeting smartphones manufactured in China—such as Apple’s iPhone.
The post Purism Liberty Phone free from tariffs, as reported by Yahoo Finance appeared first on Purism.
16 June, 2025 04:03PM by Purism
The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There’s a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.
I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.
I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because it is important to them, not for ad revenue. I miss community operated spaces and having more than four websites that are all full of nothing except screenshots of each other.
Every other page I find myself on now has an AI generated click-bait title, shared for rage-clicks all brought-to-you-by-our-sponsors–completely covered wall-to-wall with popup modals, telling me how much they respect my privacy, with the real content hidden at the bottom bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.
This is wrong, and those who have seen what was know it.
I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.
Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in a matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.
In December of 2021, three of us got together and connected our houses together in what we now call The Promised LAN. The idea is simple—fill the hole we feel is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.
Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure has driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.
The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.
We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If they’re let into our friends' network, there is a level of trust that works transitively for us—I trust them to be on mine. This LAN is not for “security”, rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.
Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.
We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.
This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.
Bring back what we’re missing.
16 June, 2025 12:55PM by Joseph Lee
This week’s Armbian updates focused on kernel improvements, bootloader modernization, and several core enhancements to the build infrastructure. Key work spanned platforms like Rockchip, Sunxi, and Odroid, emphasizing kernel stability and broader compatibility across boards.
Several boards received kernel updates:
Patches also landed to adapt Wi-Fi drivers to 6.15-era changes, including fixes for xradio and uwe5622 on Sunxi, contributed by The-going:
Improvements were made to bootloader support:
Several build system enhancements landed this cycle:
PR #8259 and commit cdf71df by djurny expanded DHCP configuration in netplan to automatically include interfaces matching lan* and wan*, simplifying initial setup across devices.
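For readers who want to see what such a match-based DHCP rule looks like, here is a minimal netplan sketch in the same spirit as that change. It is illustrative only, not the actual file shipped by Armbian, and the file name is made up:

# Illustrative only: enable DHCP on any interface whose name matches lan* or wan*
sudo tee /etc/netplan/10-dhcp-lan-wan.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    all-lan:
      match:
        name: "lan*"
      dhcp4: true
    all-wan:
      match:
        name: "wan*"
      dhcp4: true
EOF
sudo netplan apply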
The post Armbian Development Highlights: June 2–9, 2025 first appeared on Armbian.
12 June, 2025 10:16PM by Michael Robinson
Choosing an Application Delivery Controller (ADC) is not just about ticking boxes. It’s about making sure your infrastructure is prepared to deliver fast, secure, and resilient applications—without overengineering or overspending. From availability to security and automation, the ADC sits at the core of how your service behaves under pressure.
In this guide, we’ll walk through the key criteria that define a capable ADC, explaining not just what to look for, but why each factor matters.
An ADC should grow with your business—not become a bottleneck.
When evaluating scalability, don’t just ask “how much traffic can it handle?” Consider how performance evolves as demand increases. Can the ADC handle thousands of concurrent sessions, millions of requests per second, or high-throughput SSL traffic without introducing latency?
You’ll also want to know:
An Application Delivery Controller that scales poorly can turn traffic spikes into outages. One that scales well can become the foundation for future growth.
Look for: high throughput (Gbps), connection-per-second capacity, clustering support, autoscaling capabilities.
Not all ADCs offer the same traffic management logic. Some only offer basic Layer 4 balancing (based on IP or port), while others support Layer 7 intelligence (routing based on URLs, cookies, headers…).
Layer 7 capabilities are especially useful for:
Application awareness also includes dynamic health checks to avoid sending users to unhealthy servers.
Look for: Layer 4 and 7 balancing, persistence options, content-based routing, health checks, SSL offloading.
Modern ADCs are not just traffic routers—they’re the front line of your application security.
A solid ADC should include:
The key here is native integration. In many platforms, these are added as extra modules—sometimes from third-party vendors—making management more complex and pricing less predictable.
Look for: integrated WAF, rule customization, L7 DDoS protection, bot mitigation, API traffic control.
Your ADC should adapt to your infrastructure—not the other way around.
Whether you’re running on-premises, in the cloud, or in a hybrid setup, the ADC must support a variety of deployment methods:
Some vendors tie features or form factors to licensing restrictions—make sure the platform you choose works where you need it to.
Look for: multiple deployment formats, public cloud compatibility, support for major hypervisors.
Even the best features become frustrating if they’re hard to manage.
Many ADCs suffer from steep learning curves, non-intuitive UIs, or missing automation options. Worse, some make their visual consoles or monitoring tools part of separate, paid modules—meaning essential functions like traffic monitoring or cert management come at an extra cost.
You want:
Look for: REST+JSON API, full-featured web console, real-time traffic visibility, external integrations.
For industries like finance, healthcare, or eCommerce, security compliance is not optional.
ADC encryption handling must meet modern standards, including:
The ADC should simplify compliance, not add to the operational burden.
Look for: automatic certificate renewal (e.g., via Let’s Encrypt), strong encryption policies, compliance certification.
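Whether renewal runs on the ADC itself or on a companion host, the mechanics are the same as with any ACME client. As a hedged example using certbot (the standard Let's Encrypt client), renewals can be exercised safely before you rely on them:

# Simulate certificate renewal against the Let's Encrypt staging environment; no live certificates are touched
sudo certbot renew --dry-run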
This is where many ADC vendors fall short. What looks like a complete product often turns out to be a basic package—missing essential features unless you purchase additional modules or licenses.
This modular approach makes it hard to estimate the actual cost of the solution over time. It also complicates procurement and makes pricing comparisons difficult, as vendor pricing varies based on throughput, users, features, and support tiers.
Support is another critical pain point.
Many ADC vendors outsource their support to general helpdesk services operating through ticketing systems. The first-line staff often lack deep technical knowledge of the product, and response times can be slow—even for urgent issues. This doesn’t just delay resolution; it puts service continuity and customer trust at risk.
Look for: all-inclusive pricing models, included updates, fast SLA response, expert technical support.
SKUDONET Enterprise Edition is built for companies that want full control over application performance and security—without hidden costs or overcomplicated licensing.
Try SKUDONET Enterprise Edition free for 30 days and explore how a real ADC should work.
12 June, 2025 06:04AM by Nieves Álvarez
Still grappling with e-books and smart trinkets, Miguel and Diogo give fine lessons on how to Reduce, Reuse and Recycle, involving little birds and fried eggs; they badmouth Windows 11 and explain how to say goodbye to Windows 10 in the best possible way - and they still find time, between very LoCo meetings, to set off Canonical's latest bombshell - it's giving out prizes! - but it also involves leaving X.org behind on the side of the road. Then we reviewed the news about various summits, new dates for your calendars, and what we can expect from the new versions of Ubuntu Touch and Questing Cueca (that's how you say it, isn't it...?).
You know the drill: listen, subscribe and share!
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: Laughter at dry jokes: patrons laughing.mp3 by pbrproductions – https://freesound.org/s/418831/ – License: Attribution 3.0; Trombone: wah wah sad trombone.wav by kirbydx – https://freesound.org/s/175409/ – License: Creative Commons 0; Who won? 01 WINNER.mp3 by jordanielmills – https://freesound.org/s/167535/ – License: Creative Commons 0; This is an Ubuntu Alert: Breaking news intro music by humanoide9000 – https://freesound.org/s/760770/ – License: Attribution 4.0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator and comic book author. You can get to know Shizamura better on Ciberlândia and on her website.
Apple has introduced a new open-source Swift framework named Containerization, designed to fundamentally reshape how Linux containers are run on macOS. In a detailed presentation, Apple revealed a new architecture that prioritizes security, privacy, and performance, moving away from traditional methods to offer a more integrated and efficient experience for developers.
The new framework aims to provide each container with the same level of robust isolation previously reserved for large, monolithic virtual machines, but with the speed and efficiency of a lightweight solution.
Here is the video:
Traditional approaches also bundle a full libc implementation, increasing the attack surface and requiring constant updates.

The Containerization framework was built with three core goals to address these challenges:
Containerization is more than just an API; it’s a complete rethinking of the container runtime on macOS.
The most significant architectural shift is that each container runs inside its own dedicated, lightweight virtual machine. This approach provides profound benefits:
Each container gets its own filesystem, formatted as EXT4. Apple has even developed a Swift package to manage the creation and population of these EXT4 filesystems directly from macOS.

vminitd: The Swift-Powered Heart of the Container

Once a VM starts, a minimal initial process called vminitd takes over. This is not a standard Linux init system; it’s a custom-built solution with remarkable characteristics:

vminitd is written entirely in Swift and runs as the first process inside the VM.

The environment vminitd runs in is barebones. It contains no core utilities (like ls, cp), no dynamic libraries, and no libc implementation.

vminitd is cross-compiled from a Mac into a single, static Linux executable. This is achieved using Swift’s Static Linux SDK and musl, a libc implementation optimized for static linking.

vminitd is responsible for setting up the entire container environment, including assigning IP addresses, mounting the container’s filesystem, and supervising all processes that run within the container.
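For a flavour of the mechanism described above, this is roughly how any Swift package can be cross-compiled from macOS into a statically linked Linux binary with the Static Linux SDK. Treat it as a sketch of the general workflow, not Apple's actual vminitd build recipe; the SDK bundle URL is a placeholder for the one published on swift.org:

# Install the Static Linux SDK bundle published by the Swift project (placeholder URL)
swift sdk install https://download.swift.org/PLACEHOLDER-static-linux-sdk.artifactbundle.tar.gz
# Build a release binary for x86_64 Linux, statically linked against musl
swift build -c release --swift-sdk x86_64-swift-linux-musl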
The container Command-Line Tool

To showcase the power of the framework, Apple has also released an open-source command-line tool simply called container. This tool allows developers to immediately begin working with Linux containers in this new, secure environment.

container image pull alpine:latest
container run -ti alpine:latest sh

Within milliseconds, the user is dropped into a shell running inside a fully isolated Linux environment. Running the ps aux command from within the container reveals only the shell process and the ps process itself, a clear testament to the powerful process isolation at work.
Both the Containerization framework and the container tool are available on GitHub. Apple is inviting developers to explore the source code, integrate the framework into their own projects, and contribute to its future by submitting issues and pull requests.
This move signals a strong commitment from Apple to making macOS a first-class platform for modern, Linux container-based development, offering a solution that is uniquely secure, private, and performant.
Source:
The post Apple Unveils “Containerization” for macOS: A New Era for Linux Containers on macOS appeared first on Utappia.
Release notes: https://kde.org/announcements/gear/25.04.2/
Now available in the snap store!
Along with that, I have fixed some outstanding bugs:
Ark: can now open and save files on removable media
Kasts: Once again has sound
WIP: Updating Qt6 to 6.9 and frameworks to 6.14
Enjoy everyone!
Unlike our software, life is not free. Please consider a donation, thanks!
To ease the path of enterprise AI adoption and accelerate the conversion of AI insights into business value, NVIDIA recently published the NVIDIA Enterprise AI Factory validated design, an ecosystem of solutions that integrates seamlessly with enterprise systems, data sources, and security infrastructure. The NVIDIA templates for hardware and software design are tailored for modern AI projects, including Physical AI & HPC with a focus on agentic AI workloads.
Canonical is proud to be included in the NVIDIA Enterprise AI Factory validated design. Canonical Kubernetes orchestration supports the process of efficiently building, deploying, and managing a diverse and evolving suite of AI agents on high-performance infrastructure. The Ubuntu operating system is at the heart of NVIDIA Certified Systems across OEM partnerships like Dell. Canonical also collaborates with NVIDIA to ensure the stability and security of open-source software across AI Factory by securing agentic AI dependencies within NVIDIA’s artifact repository.
Canonical’s focus on open source, model-driven operations and ease of use offers enterprises flexible options for building their AI Factory on NVIDIA-accelerated infrastructure.
Canonical Kubernetes is a securely designed and supported foundational platform. It unifies the management of a complex stack – including NVIDIA AI Enterprise, storage, networking, and observability tools – onto a single platform.
Within the NVIDIA Enterprise AI Factory validated design, Kubernetes is used to independently develop, update, and scale microservice-based agents, coupled with automated CI/CD pipelines. Kubernetes also handles the significant and often burstable compute demands for training AI models and scales inference services for deployed agents based on real-time needs.
Based on upstream Kubernetes, Canonical Kubernetes is integrated with the NVIDIA GPU and Networking Operators to leverage NVIDIA hardware acceleration and supports the deployment of NVIDIA AI Enterprise, enabling AI workloads with NVIDIA NIM and accelerated libraries.
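As a rough sketch of what that integration looks like in practice (chart names follow NVIDIA's public Helm repository; your cluster, namespace and versions will differ), the NVIDIA GPU Operator is typically installed onto a Kubernetes cluster along these lines:

# Add NVIDIA's Helm repository and install the GPU Operator into its own namespace
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator --namespace gpu-operator --create-namespace
# Confirm the operator pods come up and GPU resources are advertised to the scheduler
kubectl get pods -n gpu-operator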
Canonical Kubernetes provides full-lifecycle automation and has long-term support with recently announced 12-year security maintenance.
Ubuntu is the most widely used operating system for AI workloads. Choosing Ubuntu as the base OS for NVIDIA AI Factory gives organizations a trusted repository for all their open source, not just the OS. With Ubuntu Pro, customers get up to 12 years of security maintenance for thousands of open source packages including the most widely used libraries and toolchains, like Python, R and others. Organizations can complement that with Canonical’s Container Build Service to get custom containers built to spec, and security maintenance for their entire open source dependency tree.
To learn more about what the NVIDIA Enterprise AI Factory validated design could do for you, get in touch with our team – we’d love to hear about your project.
Visit us at our booth E03 at NVIDIA GTC Paris on June 11-12 for an in-person conversation about what NVIDIA Enterprise AI Factory validated designs can do for you.
The combined solutions simplify infrastructure operations and accelerate time-to-value for AI, telecom, and enterprise computing workloads.
At GTC Paris today, Canonical announced support for the NVIDIA DOCA Platform Framework (DPF) with Canonical Kubernetes LTS. This milestone strengthens the strategic collaboration between the two companies and brings the benefits of NVIDIA BlueField DPU accelerations to cloud-native environments with end-to-end automation, open-source flexibility, and long-term support.
DPF is NVIDIA’s software framework for managing and orchestrating NVIDIA BlueField DPUs at scale. NVIDIA BlueField enables advanced offloading of infrastructure services (such as networking, storage, and security) directly onto the DPU, freeing up host CPU resources and enabling secure, high-performance, zero-trust architectures. With Canonical Kubernetes now officially supporting DPF 25.1, developers and infrastructure teams can easily integrate these capabilities into their AI, telecom and enterprise computing workloads using Canonical’s proven tooling and automation stack.
This integration enables organizations to deploy DPU-accelerated infrastructure across telco, enterprise, and edge use cases, with key benefits including:
“This milestone marks a significant step forward in our collaboration with NVIDIA,” said Cedric Gegout, VP of Product at Canonical. “Together, we’re enabling a new class of infrastructure that combines the power of NVIDIA BlueField DPUs with the flexibility and automation of Canonical Kubernetes – laying the groundwork for secure, high-performance environments across AI, telecom, and enterprise use cases.”
When combined, the NVIDIA DOCA software framework and Canonical Kubernetes empower cloud architects and platform engineers to design scalable, secure infrastructure while minimizing operational complexity. For DevOps and SRE teams, the integration of BlueField-accelerated services into CI/CD pipelines becomes streamlined, enabling consistent, automated delivery of infrastructure components. Application developers gain access to offloaded services through the DOCA SDK and APIs, accelerating innovation without compromising performance. Meanwhile, IT decision makers benefit from enhanced efficiency, workload isolation, and built-in compliance as they modernize their infrastructure.
Open source and cloud-native communities now have a powerful foundation to build on with Canonical Kubernetes and NVIDIA DPF. This integration enables contributors, researchers, and ecosystem partners to adopt and extend a truly open DPU architecture – one where offloaded networking, security, and observability services run independently of the host CPU. By leveraging DOCA’s modular approach and Canonical’s fully supported Kubernetes stack, developers can co-create a rich ecosystem of BlueField-accelerated functions that complement and enhance the performance, scalability, and resilience of applications running on the main CPU.
Canonical is enabling the next generation of composable, DPU-accelerated cloud infrastructure with NVIDIA. Canonical Kubernetes offers a robust, enterprise-grade platform to install, operate, and scale NVIDIA BlueField-accelerated services using NVIDIA DPF with full automation and long-term support. To get started, visit ubuntu.com/kubernetes for product details and support options, and explore the DPF documentation on GitHub for deployment guides and examples.
If you have any questions about running DPF with Canonical Kubernetes, please stop by our booth #E03 at NVIDIA GTC Paris or contact us.
11 June, 2025 08:21AM by xiaofei
Everyone agrees security matters – yet when it comes to big data analytics with Apache Spark, it’s not just another checkbox. Spark’s open source Java architecture introduces particular security concerns that, if neglected, can quietly expose sensitive information and disrupt vital functions. Unlike standard software, Spark’s design permits user-provided code to execute with extensive control over cluster resources, which calls for strong security measures to prevent unauthorized access and information leaks.
Securing Spark is key to maintaining enterprise business continuity, safeguarding data in memory as well as at rest, and defending against emerging vulnerabilities unique to distributed, in-memory processing platforms. Unfortunately, securing Spark is far from a trivial task; in this blog we’ll take a closer look at what makes it so challenging, and the steps that enterprises can take to protect their big data platforms.
Closing vulnerabilities in Java applications is very hard. Closing CVEs is fundamental for any software because it is one of the best ways to reduce the risk of being impacted by a cyber attack through known vulnerabilities. However, closing CVEs in Java applications like Spark is uniquely challenging for a number of reasons.
The first issue is the complexity in managing dependencies: a typical Java app may include more than 100 third-party libraries, each with different versions and dependencies. When a vulnerability is found in one library, updating or downgrading it can break compatibility with other dependencies that rely on specific versions, making remediation complex and risky. This tangled nest of dependencies can make some vulnerabilities practically impossible to fix without extensive testing and refactoring.
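To get a feel for the scale of the problem, a single build-tool command reveals how deep the dependency nest goes. Maven is shown here purely as an illustration (Gradle has an equivalent dependencies task), and the log4j coordinates are just an example artifact:

# Print the full transitive dependency tree of a Maven project
mvn dependency:tree
# Narrow the output to one library when assessing the blast radius of a CVE fix
mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-core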
Apart from this, Java is verbose and used extensively in corporate applications, typically in monolithic architectures of great complexity. It is therefore often the case that a single vulnerability affects millions of Java applications all over the world, creating a huge attack surface. The ease of exploitation and the magnitude of these vulnerabilities make them challenging to eradicate entirely when impacted versions are deeply embedded in many systems. Consequently, developers typically face a massive volume of CVE reports that is hard to prioritize, which delays remediation.
Research shows that delayed patch updates are a major cause of security breaches in enterprise environments: for example, the IBM 2024 Cost of a Data Breach report shows that known unpatched vulnerabilities caused $4.33M in damage, and the Canonical and IDC 2025 state of software supply chains report indicates that 60% of organizations have only basic or no security controls to safeguard their AI/ML systems. These challenges create significant risks: delays in applying security patches can leave systems exposed to known vulnerabilities, compatibility issues can force organizations to choose between security and stability, and widespread vulnerabilities in widely used Java components can compromise millions of applications simultaneously, causing disruption when critical fixes are needed right away.
Java-related challenges have a deep impact on Apache Spark. In the first place, Apache Spark has thousands of dependencies, so it becomes difficult to fix a CVE (whether by patching or by bumping the version) because it is easy for the fix to break compatibility. This huge number of dependencies also has an impact on the number and severity of the vulnerabilities. In fact, Spark has experienced several critical and high vulnerabilities over the years that are traceable to its Java origins. In 2022, developers discovered a command injection vulnerability in the Spark UI (CVE-2022-33891) with a 94.2% exploitation probability, placing it in the top 1% of known exploitable vulnerabilities in recent times, and in 2024 alone two new critical vulnerabilities came out, clearly showing the threat posed by slow patching adoption in Java. These issues are not only a security concern for Spark clusters; they also force companies to make hard choices between implementing the latest security updates and prioritizing the stability of their infrastructure.
At Canonical, we believe that robust security should be an integral part of your data analytics platform, not a secondary element – and with Charmed Spark, we aim to address the traditional complexity of securing enterprise Spark deployments.
We maintain a steady release pace of roughly one new version per month, while simultaneously supporting two major.minor version tracks, which as of today are 3.4.x and 3.5.x. This dual-track support ensures stability for existing users while allowing for ongoing feature development and security improvements. In addition, our proactive vulnerability management has led us, in the past year, to close 10 critical CVEs, resolve 84 high-severity vulnerabilities, and address 161 medium-severity vulnerabilities in Spark and its dependencies, extending this focus to related projects such as Hadoop for its dependencies.
By investing in automated, self-service security testing, we accelerate the detection and fixing of vulnerabilities, minimizing downtime and manual intervention. Our comprehensive approach to security includes static code analysis, continuous vulnerability scans, rigorous management processes, and detailed cryptographic documentation, as well as hardening guides to help you deploy Spark with security in mind from day one.
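As one small illustration of the kind of settings such hardening guides cover (the authoritative recommendations live in the guides themselves, and the shared secret and jar name below are placeholders), Spark's built-in authentication, TLS and ACL options can be switched on per job:

spark-submit \
  --conf spark.authenticate=true \
  --conf spark.authenticate.secret=REPLACE_WITH_SHARED_SECRET \
  --conf spark.ssl.enabled=true \
  --conf spark.acls.enable=true \
  your-application.jar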
Charmed Spark is a platform where security is a central element. It benefits users by reducing exposure to breaches related to known vulnerabilities through timely updates and fixes, and by giving access to useful tools and documentation for installing and operating Spark in a securely designed manner. In an environment where Java applications are a frequent target of attacks and dependency complexity can slow the deployment of patches, Canonical’s approach maintains a higher level of protection, so users can analyze and use data without undue concern about security weaknesses. This ultimately enables enterprises to focus on their core business applications and provide value to their customers without having to worry about external threats.
While the complexity of Java applications and their extensive dependency ecosystems present ongoing challenges, Charmed Apache Spark gives you a securely designed open source analytics engine without the level of vulnerability challenges that typically come with such a large Java-based project. Moving forward, these foundational security practices will continue to play a vital role in protecting the Spark ecosystem and supporting the broader open source community.
To learn more about securing your Spark operations, watch our webinar.
There are some new packages available in the BunsenLabs Carbon apt repository:
labbe-icons-bark
labbe-icons-sage
These will be used by default in Carbon.
Also available are:
labbe-icons-grey
labbe-icons-oomox
labbe-icons-telinkrin
And there's a new wallpaper setter:
xwwall
and a gtk GUI builder that xwwall uses:
gtk3dialog
Both of those will be used in Carbon.
All of these packages come thanks to the work of @micko01
A gentleman by the name of Arif Ali reached out to me on LinkedIn. I won’t share the actual text of the message, but I’ll paraphrase:
“I hope everything is going well with you. I’m applying to be an Ubuntu ‘Per Package Uploader’ for the SOS package, and I was wondering if you could endorse my application.”
Arif, thank you! I have always appreciated our chats, and I truly believe you’re doing great work. I don’t want to interfere with anything by jumping on the wiki, but just know you have my full backing.
“So, who actually lets Arif upload new versions of SOS to Ubuntu, and what is it?”
Great question!
Firstly, I realized that I needed some more info on what SOS is, so I can explain it to you all. On a quick search, this was the first result.
Okay, so genuine question…
Why does the first DuckDuckGo result for “sosreport” point to an article for a release of Red Hat Enterprise Linux that is two versions old? In other words, hey DuckDuckGo, your grass is starting to get long. Or maybe Red Hat? Can’t tell, I give you both the benefit of the doubt, in good faith.
So, I clarified the search and found this. Canonical, you’ve done a great job. Red Hat, you could work on your SEO so I can actually find the RHEL 10 docs quicker, but hey… B+ for effort. ;)
Anyway, let me tell you about Arif. Just from my own experiences.
He’s incredible. He shows love to others, and whenever I would sponsor one of his packages during my time in Ubuntu, he was always incredibly receptive to feedback. I really appreciate the way he reached out to me, as well. That was really kind, and to be honest, I needed it.
As for character, he has my +1. In terms of the members of the DMB (aside from one person who I will not mention by name, who has caused me immense trouble elsewhere), here’s what I’d tell you if you asked me privately…
“It’s just PPU. Arif works on SOS as part of his job. Please, do still grill him. The test, and ensuring people know that they actually need to pass a test to get permissions, that’s pretty important.”
That being said, I think he deserves it.
Good luck, Arif. I wish you well in your meeting. I genuinely hope this helps. :)
And to my friends in Ubuntu, I miss you. Please reach out. I’d be happy to write you a public letter, too. Only if you want. :)
Theodore Roosevelt is someone I have admired for a long time. I especially appreciate what has been coined the Man in the Arena speech.
A specific excerpt comes to mind after reading world news over the last twelve hours:
“It is well if a large proportion of the leaders in any republic, in any democracy, are, as a matter of course, drawn from the classes represented in this audience to-day; but only provided that those classes possess the gifts of sympathy with plain people and of devotion to great ideals. You and those like you have received special advantages; you have all of you had the opportunity for mental training; many of you have had leisure; most of you have had a chance for enjoyment of life far greater than comes to the majority of your fellows. To you and your kind much has been given, and from you much should be expected. Yet there are certain failings against which it is especially incumbent that both men of trained and cultivated intellect, and men of inherited wealth and position should especially guard themselves, because to these failings they are especially liable; and if yielded to, their- your- chances of useful service are at an end. Let the man of learning, the man of lettered leisure, beware of that queer and cheap temptation to pose to himself and to others as a cynic, as the man who has outgrown emotions and beliefs, the man to whom good and evil are as one. The poorest way to face life is to face it with a sneer. There are many men who feel a kind of twisted pride in cynicism; there are many who confine themselves to criticism of the way others do what they themselves dare not even attempt. There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement. A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life’s realities — all these are marks, not as the possessor would fain to think, of superiority but of weakness. They mark the men unfit to bear their part manfully in the stern strife of living, who seek, in the affectation of contempt for the achievements of others, to hide from others and from themselves their own weakness. The rôle is easy; there is none easier, save only the rôle of the man who sneers alike at both criticism and performance.”
The riots in LA are seriously concerning to me. If something doesn’t happen soon, this is going to get out of control.
If you are participating in these events, or know someone who is, tell them to calm down. Physical violence is never the answer, no matter your political party.
De-escalate immediately.
Be well. Show love to one another!
My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release.
You can also support my work directly via Liberapay or GitHub Sponsors.
After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.)

I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling.

I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie.
I backported openssh 1:10.0p1-5 to bookworm-backports.
I issued bookworm and bullseye updates for CVE-2025-32728.
I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once.
I added a simple autopkgtest.
I upgraded these packages to new upstream versions:
In bookworm-backports, I updated these packages:
I fixed problems building these packages reproducibly:
I backported fixes for some security vulnerabilities to unstable (since we’re in freeze now so it’s not always appropriate to upgrade to new upstream versions):
I fixed various other build/test failures:
I added non-superficial autopkgtests to these packages:
I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger.
I ported storm to Python 3.14.
I fixed a build failure in apertium-oci-fra.
Back in 2020 I posted about my desk setup at home.
Recently someone in our #remotees
channel at work asked about WFH setups and given quite a few things changed in mine, I thought it's time to post an update.
But first, a picture!
(Yes, it's cleaner than usual, how could you tell?!)
It's still the same Flexispot E5B, no change here. After 7 years (I bought mine in 2018) it still works fine. If I had to buy a new one, I'd probably get a four-legged one for more stability (they've become quite affordable now), but there is no immediate need for that.
It's still the IKEA Volmar. Again, no complaints here.
Now here we finally have some updates!
A Lenovo ThinkPad X1 Carbon Gen 12, Intel Core Ultra 7 165U, 32GB RAM, running Fedora (42 at the moment).
It's connected to a Lenovo ThinkPad Thunderbolt 4 Dock. It just works™.
It's still the P410, but mostly unused these days.
An AOC U2790PQU 27" 4K. I'm running it at 150% scaling, which works quite decently these days (no comparison to when I got it).
As the new monitor didn't want to take the old Dell soundbar, I have upgraded to a pair of Alesis M1Active 330 USB.
They sound good and were not too expensive.
I had to fix the volume control after some time though.
It's still the Logitech C920 Pro.
The built-in mic of the C920 is really fine, but to do conference-grade talks (and some podcasts 😅), I decided to get something better.
I got a FIFINE K669B, with a nice arm.
It's not a Shure, for sure, but does the job well and Christian was quite satisfied with the results when we recorded the Debian and Foreman specials of Focus on Linux.
It's still the ThinkPad Compact USB Keyboard with TrackPoint.
I had to print a few fixes and replacement parts for it, but otherwise it's doing great.
Seems Lenovo stopped making those, so I really shouldn't break it any further.
Logitech MX Master 3S. The surface of the old MX Master 2 got very sticky at some point and it had to be replaced.
I'm still terrible at remembering things, so I still write them down in an A5 notepad.
I've also added a (small) whiteboard on the wall right of the desk, mostly used for long term todo lists.
Turns out Xeon-based coasters are super stable, so it lives on!
Yepp, still a thing. Still USB-A because... reasons.
Still the Bose QC25, by now on the third set of ear cushions, but otherwise working great and the odd 15€ cushion replacement does not justify buying anything newer (which would have the same problem after some time, I guess).
I did add a cheap (~10€) Bluetooth-to-Headphonejack dongle, so I can use them with my phone too (shakes fist at modern phones).
And I do use the headphones more in meetings, as the Alesis speakers fill the room more with sound and thus sometimes produce a bit of an echo.
The Bose need AAA batteries, and so do some other gadgets in the house, so there is a technoline BC 700 charger for AA and AAA on my desk these days.
Yepp, I've added an IKEA Tertial and an ALDI "face" light. No, I don't use them much.
I've "built" a KVM switch out of a USB switch, but given I don't use the workstation that often these days, the switch is also mostly unused.
Hey everyone,
Get ready to dust off those virtual cobwebs and crack open a cold one (or a digital one, if you’re in a VM) because uCareSystem 25.05.06 has officially landed! And let me tell you, this release is so good, it’s practically a love letter to your Linux system – especially if that system happens to be chilling out in Windows Subsystem for Linux (WSL).
That’s right, folks, the big news is out: WSL support for uCareSystem has finally landed! We know you’ve been asking, we’ve heard your pleas, and we’ve stopped pretending we didn’t see you waving those “Free WSL” signs.
Now, your WSL instances can enjoy the same tender loving care that uCareSystem provides for your “bare metal” Ubuntu/Debian Linux setups. No more feeling left out, little WSLs! You can now join the cool kids at the digital spa.
Here is a video of it:
But wait, there’s more! (Isn’t there always?) We didn’t just stop at making friends with Windows. We also tackled some pesky gremlins that have been lurking in the shadows:
The -k option? Yeah, that's gone too. We decided it was useless, so we had to retire it to a nice, quiet digital farm upstate.

So, what are you waiting for? Head over to utappia.org (or wherever you get your uCareSystem goodness) and give your system the pampering it deserves with uCareSystem 25.05.06. Your WSL instance will thank you, probably with a digital high-five.
Download the latest release and give it a spin. As always, feedback is welcome.
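If you want to see it in action, the usual entry point is the ucaresystem-core command, shown here as a hedged example; check the project documentation for the exact options available in your version:

# Run a full maintenance cycle: refresh package lists, apply upgrades, and clean up leftovers
sudo ucaresystem-core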
Thanks to the following users for their support:
Your involvement helps keep this project alive, evolving, and aligned with real-world needs. Thank you.
Happy maintaining!
As always, I want to express my gratitude for your support over the past 15 years. I have received countless messages from inside and outside Greece about how useful they found the application. I hope you find the new version useful as well.
If you’ve found uCareSystem to be valuable and it has saved you time, consider showing your appreciation with a donation. You can contribute via PayPal or Debit/Credit Card by clicking on the banner.
Once installed, the updates for new versions will be installed along with your regular system updates.
The post uCareSystem 25.05.06: Because Even Your WSL Deserves a Spa Day! appeared first on Utappia.
Welcome to WordPress. This is your first post. Edit or delete it, then start writing!
The Cybersecurity Maturity Model Certification, or CMMC for short, is a security framework for protecting Controlled Unclassified Information (CUI) in non-federal systems and organizations. The CMMC compliance requirements map to the set of controls laid out in the NIST SP 800-171 Rev 2 and NIST SP 800-172 families.
CMMC version 2.0 came into effect on December 26, 2023, and is designed to ensure adherence to rigorous cybersecurity policies and practices within the public sector and amongst wider industry partners.
Whilst many of the controls relate to how organizations conduct their IT operations, there are several specific technology requirements, and Ubuntu Pro includes features which meet these requirements head on.
CMMC has 3 levels, designed to meet increasing levels of security scrutiny:
Most independent contractors and industry partners will use level 2, and perform an annual self-assessment of their security posture against the program requirements.
While the 2.0 standard has been live since December 2023, CMMC will become a contractual requirement after 3 years, which falls in 2026. However, it takes time to work through the controls and achieve the security requirements, and organizations may take anywhere from months to years to gain this level of maturity, depending on their size and agility. Undoubtedly, the best course of action is to start planning now in order to remain eligible for contracts and to keep winning business.
CMMC is based on the NIST SP 800-171 security controls framework for handling Controlled Unclassified Information – similar to FedRAMP – and so anyone familiar with these publications will feel comfortable with the CMMC requirements. Whilst NIST SP 800-171 provides a wide range of security controls, the exact implementation can be left to the user’s discretion; CMMC gives exact requirements and provides a framework for self-assessment and auditing.
In order to become CMMC compliant, you should be systematic in your approach. Here’s how to proceed:
Patching security vulnerabilities
Ubuntu Pro supports the CMMC requirement to remediate software vulnerabilities in a timely manner. Since starting out 20 years ago, Canonical has typically released patches for critical vulnerabilities within 24 hours. We provide 12 years of security patching for all the software applications and infrastructure components within the Ubuntu ecosystem.
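As a quick sketch of how this looks in practice (the token below is a placeholder taken from your Ubuntu Pro dashboard), attaching a machine and reviewing which services are enabled takes two commands:

# Attach this machine to your Ubuntu Pro subscription, then review available services
sudo pro attach YOUR_PRO_TOKEN
pro status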
FIPS-certified crypto modules
Ubuntu Pro provides FIPS 140-2 and FIPS 140-3 certified cryptographic modules that you can deploy with a single command. These certified modules replace the standard cryptographic libraries which ship with Ubuntu by default, making the system FIPS 140 compliant, and allowing existing applications to make use of FIPS-approved cryptographic algorithms and ciphers without further certification or modification.
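The "single command" mentioned above looks roughly like this on an attached Ubuntu Pro machine; a reboot is needed afterwards for the FIPS kernel and modules to take effect:

# Enable the FIPS-certified cryptographic modules, including ongoing security fixes
sudo pro enable fips-updates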
System hardening
DISA-STIG is a system hardening guide that describes how to configure an Ubuntu system to be maximally secure, by locking it down and restricting unnecessary privileges. The STIG for Ubuntu lists several hundred individual configuration steps to turn a generic Ubuntu installation into a fully secure environment. System hardening is an important CMMC requirement.
You can simplify STIG hardening with the Ubuntu Security Guide (USG): the USG tool enables automated auditing and remediation of the individual configuration steps in order to comply with the STIG benchmark, and allows you to customize the hardening profile to meet individual deployment needs.
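In practice, an audit-then-remediate run with USG on an Ubuntu Pro machine looks roughly like the sketch below; profile names and options may vary by release, so treat this as illustrative:

# Enable and install the Ubuntu Security Guide tooling
sudo pro enable usg
sudo apt install usg
# Audit the system against the DISA-STIG profile, then apply the remediations
sudo usg audit disa_stig
sudo usg fix disa_stig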
Canonical is a software distributor rather than a service provider, and as such we are not CMMC certified ourselves, but through Ubuntu Pro we provide the tools that enable our customers to meet these specific technology requirements within the baseline controls.
As such, Ubuntu Pro provides an easy pathway to CMMC compliance. It delivers CVE patching for the Ubuntu OS and applications, covering 36,000 packages, along with automated, unattended, and restartless updates, and the best tools to secure and manage your Ubuntu infrastructure, developed by the publisher of Ubuntu. Learn more about Ubuntu Pro on our explanatory web page.
06 June, 2025 10:04AM by xiaofei
Bazaar is a distributed revision control system, originally developed by Canonical. It provides similar functionality compared to the now dominant Git.
Bazaar code hosting is an offering from Launchpad that provides both a Bazaar backend for hosting code and a web frontend for browsing it. The frontend is provided by the Loggerhead application on Launchpad.
Bazaar passed its peak a decade ago. Breezy is a fork of Bazaar that has kept a form of Bazaar alive, but the last release of Bazaar was in 2016. Since then the impact has declined, and there are modern replacements like Git.
Just keeping Bazaar running requires a non-trivial amount of development, operations time, and infrastructure resources – all of which could be better used elsewhere.
Launchpad will now begin the process of discontinuing support for Bazaar.
We are aware that migrating repositories and updating workflows will take some time, which is why we have planned the sunsetting in two phases.
Loggerhead, the web frontend used to browse code in a web browser, will be shut down imminently. Analysis of the access logs showed that there are hardly any requests from legitimate users any more; almost the entire traffic comes from scrapers and other abusers. Sunsetting Loggerhead will not affect the ability to pull, push and merge changes.
From September 1st, 2025, we no longer intend to operate Bazaar, the code hosting backend. Users need to migrate all repositories from Bazaar to Git between now and this deadline.
The following blog post describes all the necessary steps on how to convert a Bazaar repository hosted on Launchpad to Git.
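The linked post is the authoritative guide; as a rough sketch of one common conversion path (assuming Bazaar/Breezy with the fast-import plugin is installed, and with project and branch names as placeholders), a local conversion looks something like this:

# Fetch the Bazaar branch and replay its history into a fresh Git repository
bzr branch lp:YOUR-PROJECT bzr-tree
git init git-tree
bzr fast-export --plain bzr-tree | git -C git-tree fast-import
# Push the converted history to the project's Git hosting on Launchpad
git -C git-tree push https://git.launchpad.net/YOUR-PROJECT HEAD:refs/heads/main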
Our users are extremely important to us. Ubuntu, for instance, has a long history of Bazaar usage, and we will need to work with the Ubuntu Engineering team to find ways to move forward to remove the reliance on the integration with Bazaar for the development of Ubuntu. If you are also using Bazaar and you have a special use case, or you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.
You can reach us in #launchpad:ubuntu.com on Matrix, or submit a question or send us an e-mail via feedback@launchpad.net.
It is also recommended to join the ongoing discussion at https://discourse.ubuntu.com/t/phasing-out-bazaar-code-hosting/62189.
06 June, 2025 03:04AM by xiaofei
The only ‘Made in America’ smartphone maker has a message for Apple about manufacturing in the Trump tariff era.
The post Fortune.com Features Purism and the Made in America Liberty Phone appeared first on Purism.
05 June, 2025 06:16PM by Purism
Software supply chain security has become a top concern for developers, DevOps engineers, and IT leaders. High-profile breaches and dependency compromises have shown that open source components can introduce risk if not properly vetted and maintained. Although containerization has become commonplace in contemporary development and deployment, it can have drawbacks in terms of reproducibility and security.
There is a dire need for container builds that are not only simple to deploy, but also safe, repeatable, and maintained long-term against new threats – and that’s why Canonical is introducing the Container Build Service.
The use of open source software (OSS) is becoming more and more prevalent in enterprise environments. With analyses showing that it makes up around 70% of all software in use, OSS is no longer considered a supplementary element but rather the foundation of modern applications. What’s even more interesting is that 97% of commercial codebases are reported to have integrated some OSS components, highlighting how fundamental it has truly become. However, we’re also seeing that with this growing use of OSS, open source vulnerabilities are frequently discovered. Research indicates that 84% of codebases contain at least one known open source vulnerability, with almost half of those vulnerabilities being categorized as high-severity. Black Duck’s 2025 Open Source Security and Risk Analysis (OSSRA) report showed that this risk is amplified by the sheer number of open source files used by applications, which has nearly tripled in just four years, from an average of 5,300 in 2020 to over 16,000 in 2024 – a direct expansion of the attack surface.
According to a report from Canonical and IDC, organizations are adopting OSS primarily to reduce costs (44%), accelerate development (36%), and increase reliability (31%). Despite nine out of ten organizations expressing a preference to source packages from trusted repositories like those in their OS, most still pull directly from upstream registries. This means that the responsibility for patching falls heavily on IT teams. The report found that seven in ten teams dedicate over six hours per week (almost a full working day) to sourcing and applying security updates. The same proportion mandates that high and critical-severity vulnerabilities be patched within 24 hours, yet only 41% feel confident they can meet that SLA. What’s also interesting is that more than half of organizations do not automatically upgrade their in-production systems or applications to the newest versions, leaving them exposed to known vulnerabilities.
Supply chain attacks are also becoming more frequent. A study conducted by Sonatype showed that the number of software supply chain attacks doubled in 2024 alone, and according to a study done by BlackBerry, over 75% of organizations experienced a supply chain-related attack in the previous year. The Sonatype study also highlighted how prevalent malicious packages became in the last 12 months, with more than 500,000 malicious packages found in public repositories – a 156% increase from the previous year. This shows how attackers target upstream open source in order to compromise downstream users.
In light of these trends, development teams are seeking ways to ensure the integrity of their container images. Practices like reproducible builds and signed images are gaining popularity as defenses against tampering, while minimal images promise fewer vulnerabilities. However, implementing these measures requires significant effort and expertise. This is where Canonical’s latest offering comes in.
Canonical has launched a new Container Build Service designed to meet the above challenges head-on. In essence, through this service, Canonical’s engineers will custom-build container images for any open source project or stack, with security and longevity as primary features. Whether it’s an open source application or a custom base image containing all the dependencies for your app, Canonical will containerize it according to your specifications and harden the image for production. The resulting container image is delivered in the Open Container Initiative (OCI) format and comes with up to 12 years of security maintenance.
Every package and library in the container – even those not originally in Ubuntu’s repositories – is covered under Canonical’s security maintenance commitment. We have a track record of patching critical vulnerabilities within 24 hours on average, ensuring quick remediation of emerging threats. Unlike standard base images that cover only OS components, Canonical’s service will include all required upstream open source components in the container build. In other words, your entire open source dependency tree is kept safe – even if some parts of it were not packaged in Ubuntu before. This means teams can confidently use the latest frameworks, AI/ML libraries, or niche utilities, knowing Canonical will extend Ubuntu’s famous long-term support to those pieces as well.
Each container image build comes with a guaranteed security updates period of up to 12 years. This far outlasts the typical support window for community container images. It ensures that organizations in regulated or long-lived environments can run containers in production for a decade or more with ongoing patching.
The hardened images are designed to run on any popular Linux host or Kubernetes platform. Whether your infrastructure is Ubuntu, RHEL, VMware, or a public cloud Kubernetes service, Canonical will support these images on that platform. This broad compatibility means you don’t have to be running Ubuntu on the host to benefit: the container images are truly portable and backed by Canonical across environments.
Canonical’s build pipeline emphasizes reproducibility and automation. Once your container image is designed and built, an automated pipeline takes over to continuously rebuild and update the image with the latest security patches. This ensures the image remains up to date over time without manual intervention, and it provides a reproducible build process (verifiable by Canonical) to guarantee that the image you run in production exactly matches the source and binaries that were vetted.
In short, the new Container Build Service delivers secure, reproducible, and highly dependable container images, tailor-made for your applications by the experts behind Ubuntu. It effectively offloads the heavy lifting of container security maintenance to Canonical, so your teams can focus on writing code and deploying features and not constantly chasing the next vulnerability in your container image.
A standout aspect of Canonical’s approach is the use of chiseled Ubuntu container images. Chiseled images are Canonical’s take on the “distroless” container concept – ultra-minimal images that include only the essential runtime components needed by your application and nothing more. By stripping away unnecessary packages, utilities, and metadata, chiseled images dramatically reduce image size and attack surface.
What exactly are chiseled images? They are built using an open source tool called Chisel, which effectively sculpts down an application to its bare essentials. A chiseled Ubuntu container image still originates from the Ubuntu base you know, but with all surplus components carved away.
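As a rough sketch of how this works under the hood, Chisel’s cut command carves selected package “slices” into a target root directory. The slice names below are illustrative assumptions; the slices actually available depend on the chisel-releases definitions for your Ubuntu series.
# Hedged sketch: carve only the C runtime and CA certificate slices into ./rootfs
chisel cut --release ubuntu-24.04 --root ./rootfs libc6_libs ca-certificates_data
The resulting ./rootfs directory can then be copied into an otherwise empty container image, which is roughly how a chiseled image ends up containing nothing beyond the slices you asked for.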
Chiseled images include only the files and libraries strictly required to run your application, excluding surplus distro metadata, shells, package managers, and other tools not needed in production. Because of this minimalist approach, chiseled images are significantly smaller than typical Ubuntu images. This not only means less storage and faster transfer, but also inherently fewer places for vulnerabilities to hide. In a .NET container optimization exercise done by the ACA team at Microsoft, chiseling reduced the Backend API image size from 226 MB to 119 MB, a 56.6% reduction, and slashed CVEs from 25 to just 2, a 92% decrease. Packages also dropped from 451 to 328, leaving far fewer potential vulnerabilities to manage.
With less bloat, chiseled containers start up faster and use less memory. They have only the essentials, so pulling images and launching containers is quicker. For example, chiseling the .NET runtime images trimmed about 100 MB from the official image and produced a runtime base as small as 6 MB (compressed) for self-contained apps. Such a tiny footprint translates to faster network transfers and lower memory overhead at scale.
By using chiseled Ubuntu images in its container builds, Canonical ensures that each container is as small and locked down as possible, while still being based on the world’s most popular Linux distribution for developers. It’s a combination that delivers strong security out of the box. And because these images are built on Ubuntu, they inherit Ubuntu’s long-term support policies. Our container images align with Ubuntu LTS release cycles and receive the same five years of free security updates, extended to ten years with Ubuntu Pro, for the core components. In the new build service, that support can stretch to 12 years for enterprise customers, keeping even the minimal runtime components patched against CVEs over the long term.
Canonical coined the term “Long Term Support (LTS)” back in 2006 with Ubuntu 6.06 LTS, pioneering the idea of stable OS releases with 5 years of guaranteed updates. Since then, Ubuntu LTS has become a byword for reliability in enterprises. In 2019, Canonical introduced Ubuntu Pro, which expanded on this foundation by providing comprehensive security maintenance not just for Ubuntu’s core system, but for thousands of community (universe) packages as well, along with enterprise features like FIPS 140 certified cryptography. Today, Ubuntu Pro is a very comprehensive open source security offering, covering over 36,000 packages with 10-year maintenance.
This background matters because the new Container Build Service is essentially Ubuntu Pro for your container images. Canonical is extending its expertise in automated patching, vulnerability remediation, and long-term maintenance to the full stack inside your containers. By having Canonical design and maintain your container image, you’re effectively gaining a dedicated team to watch over your software supply chain. Every upstream project included in your container is continually monitored for security issues. If a new vulnerability emerges in any layer of your stack – whether it’s in the OS, a shared library, or an obscure Python package – Canonical will proactively apply the patch and issue an updated image through the automated pipeline. All of this happens largely behind the scenes, and you receive notifications or can track updates as needed for compliance. It’s a level of diligence that would be costly and difficult to replicate in-house.
Furthermore, Canonical’s involvement provides a chain of custody and trust that is hard to achieve with self-built images. The containers are built and signed by Canonical using the same infrastructure that builds official Ubuntu releases, ensuring integrity. Canonical and its partners have even established a zero-distance supply chain for critical assets – meaning there’s tight integration and verification from source code to the final container artefact. This approach greatly reduces the risk of tampering or hidden malware in the supply chain.
Because Ubuntu is so widely trusted, Canonical’s container images come pre-approved for use in highly regulated environments. Notably, hardened Ubuntu container images are already certified and available in the U.S. Department of Defense’s “Iron Bank” repository, which is a collection of hardened containers for government use. By leveraging Canonical’s service, organizations inherit this level of credibility and compliance. It’s easier to meet standards like FedRAMP, DISA-STIG, or the upcoming EU Cyber Resilience Act when your base images and components are backed by Ubuntu Pro’s security regime and provide auditable evidence of maintenance.
In summary, the Container Build Service stands on the shoulders of Ubuntu Pro and Canonical’s long experience in open source security. Your custom container isn’t just another bespoke image, it becomes an enterprise-grade artifact, with clear maintenance commitments and security SLAs that auditors and IT governance teams will appreciate.
Canonical’s container build service aims to have every layer of the container stack – from OS to app dependencies – maintained. With optimized chiseled sizes, a decade of updates, and Canonical’s support, these images are crafted for production.
Learn more about Canonical’s Container Build Service >
Get in touch to discuss securing your container stack today >
The Incus team is pleased to announce the release of Incus 6.13!
This is a VERY busy release with a lot of new features of all sizes and for all kinds of different users, so there should be something for everyone!
The highlights for this release are:
The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:
You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
Enjoy!
This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what’s common is that at almost all points there was a mechanism to publish while on the move.
In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.
The Psion had a reasonably sized keyboard and a good native word processing app, and its battery life was good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.
Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.
In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.
If we wanted to include photos to the stories, we’d have to find an Internet cafe.
For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message’s text as the title.
As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.
A much easier setup than MMS was to slightly come back to the old Psion setup, but instead of word documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.
And so my blog included a new “moblog” section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.
Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.
In the meanwhile the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.
And I traveled off-the-grid rarely enough that it didn’t make sense to develop a system.
But now that we’re sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?
Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.
However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it, and the dishy itself, or the way we power it, may fail.
But despite what you’d think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.
Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.
When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.
I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
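As a rough illustration of the fetch side, a cron job could simply pull the public MapShare feed and hand it to a publishing script. The URL pattern and share name here are assumptions for illustration, not the exact setup described above.
# Hypothetical sketch: fetch the public Garmin MapShare feed (KML) for later parsing and publishing
curl -s "https://share.garmin.com/Feed/Share/YOUR_SHARE_NAME" -o /var/blog/incoming/inreach.kml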
One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it’d even enable rudimentary communications if we found ourselves in a liferaft.
The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via a SSB radio and a Pactor (or Vara) modem.
Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.
Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.
With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We’d then need a mailbox that can receive these mails, and some automation to fetch and publish.
05 June, 2025 12:00AM by Henri Bergius (henri.bergius@iki.fi)
Back in triumph from Oppidum Sena, where they caught a heat wave and a bellyful of roast kid, cheese and wine, our heroes bring news from Wikicon Portugal 2025 and tell us about their technological adventures, which include undressing a Cervantes and catching felines named Felicity in strange corners of the Internet. There to welcome them was Princess Leia, a.k.a. Joana Simões, a.k.a. The Lady of the Rings, back from a glorious mission in Tokyo and about to depart for Mexico - the conversation will shake the ground beneath your feet and the satellites above your heads!
You know the drill: listen, subscribe and share!
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator and comic book author. You can get to know Shizamura better on Ciberlândia and on her website.
If you’re looking for a low-power, always-on solution for streaming your personal media library, the Raspberry Pi makes a great Plex server. It’s compact, quiet, affordable, and perfect for handling basic media streaming—especially for home use.
In this post, I’ll guide you through setting up Plex Media Server on a Raspberry Pi, using Raspberry Pi OS (Lite or Full) or Debian-based distros like Ubuntu Server.
sudo apt update && sudo apt upgrade -y
Plex is available for ARM-based devices via their official repository.
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update
sudo apt install plexmediaserver -y
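Note that apt-key is deprecated on newer Debian and Ubuntu releases. A keyring-based variant along these lines should work instead; the keyring filename and path are just a common convention, so treat this as an illustrative sketch.
# Hedged alternative for releases where apt-key is deprecated
curl -fsSL https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plexserver.gpg
echo "deb [signed-by=/usr/share/keyrings/plexserver.gpg] https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list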
Enable Plex on boot and start the service:
sudo systemctl enable plexmediaserver
sudo systemctl start plexmediaserver
Make sure it’s running:
sudo systemctl status plexmediaserver
Open your browser and go to:
http://<your-pi-ip>:32400/web
Log in with your Plex account and begin the setup wizard.
Plug in your external HDD or mount a network share, then:
sudo mkdir -p /mnt/media
sudo mount /dev/sda1 /mnt/media
Make sure Plex can access it:
sudo chown -R plex:plex /mnt/media
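To keep the drive mounted across reboots, you could also add it to /etc/fstab. This is a hedged example assuming an ext4 filesystem on /dev/sda1; using the UUID reported by blkid is more robust.
# Persist the mount across reboots (nofail avoids boot hangs if the drive is absent)
echo '/dev/sda1 /mnt/media ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab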
Add the media folder during the Plex setup under Library > Add Library.
Forward port 32400 on your router only if you want remote access.
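Separately, if the Pi itself runs a host firewall such as ufw (not installed by default on all images), you may need to allow the port on your LAN; a minimal sketch:
# Allow Plex's web port through a local ufw firewall, if one is in use
sudo ufw allow 32400/tcp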
A Raspberry Pi might not replace a full-blown NAS or dedicated server, but for personal use or as a secondary Plex node, it’s surprisingly capable. With low energy usage and silent operation, it’s the perfect DIY home media solution.
If you’re running other services like Pi-hole or Home Assistant, the Pi can multitask well — just avoid overloading it with too much transcoding.
The post Building a Plex Media Server with Raspberry Pi appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
Google has recently implemented new restrictions on sideloading Android apps, citing growing security concerns. In a pilot program launched in Singapore, the tech giant now blocks the installation of certain sideloaded apps—particularly those requesting sensitive permissions such as SMS access or accessibility services—if they are downloaded via web browsers, messaging apps, or file managers. The move, developed in partnership with Singapore's Cyber Security Agency, is designed to prevent fraud and malware-enabled scams.
The post Google Restricts Android Sideloading—What It Means for User Autonomy and the Future of Mobile Freedom appeared first on Purism.
04 June, 2025 06:40PM by Purism
04 June, 2025 07:08AM by xiaofei
As the Armbian project transitions from spring into summer, the final week of May 2025 brought a dense flurry of development activity, delivering improvements across kernel support, bootloader updates, system performance, and user experience enhancements. With over 35 pull requests merged, this week showcased the Armbian community’s continued dedication to modernizing and stabilizing its build framework and board support packages.
A notable performance enhancement arrived via #8248, where build engineer @rpardini delivered a major speed-up in Docker extension handling, cutting processing time by over 50%. Complementing this, PR #8249 addressed inefficiencies in rootfs-to-image by avoiding --sparse, significantly improving I/O speeds on various filesystems.
Kernel version parsing and custom kernel description functionality also landed with #8152, thanks to @Grippy98, enabling kernel version information to be displayed within build branches.
A slew of boards received attention this week. The NanoPC-T6 series saw a key modernization in #8219 and #8239, switching to mainline Arm Trusted Firmware and bumping U-Boot to v2025.04 final. The Quartz64A board followed suit in #8250, while the Odroid HC4, Khadas VIM3, and Mixtile Blade3 all received U-Boot updates or reverts to improve stability.
Legacy and edge kernel support was also improved. Notably, the Rockchip64 edge kernel configuration gained CONFIG_NETKIT=y (#8237), and fixes for display mode handling on RK3588 boards were added (#8253).
Meanwhile, the Orangepi 5 Ultra switched to a mainline kernel source (#8252), reinforcing Armbian’s ongoing effort to shed legacy components and embrace upstream compatibility.
Behind the scenes, @igorpecovnik contributed multiple usability tweaks, including a fix for HiDPI detection (#8236) and @rpardini added improved serial console fallback behavior in GRUB (#8247). The GPG key placement was standardized across distros (#8128), simplifying build reproducibility.
The smart am40 received a long-needed RTC node and U-Boot bump (#8214), while the Helios4‘s wake-on-LAN service was fixed (#8235), reinforcing Armbian’s commitment to community-requested board maintenance.
This week’s burst of activity highlights the Armbian community’s tireless commitment to refinement and modernization. Whether through performance enhancements, kernel bumps, or quality-of-life fixes, the project continues to evolve rapidly. Users can expect a more responsive, stable, and future-proof experience across a growing roster of supported hardware.
Stay tuned for further updates as June unfolds.
The post Armbian Development Highlights: End of May 2025 first appeared on Armbian.
04 June, 2025 05:05AM by Didier Joomun
We have published Qubes Canary 043. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.
---===[ Qubes Canary 043 ]===---
Statements
-----------
The Qubes security team members who have digitally signed this file [1]
state the following:
1. The date of issue of this canary is June 03, 2025.
2. There have been 107 Qubes security bulletins published so far.
3. The Qubes Master Signing Key fingerprint is:
427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors).
5. We plan to publish the next of these canary statements in the first
fourteen days of September 2025. Special note should be taken if no new
canary is published by that time or if the list of statements changes
without plausible explanation.
Special announcements
----------------------
None.
Disclaimers and notes
----------------------
We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.
This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.
The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.
This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.
Proof of freshness
-------------------
Tue, 03 Jun 2025 06:49:12 +0000
Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Interview with Danish Prime Minister Frederiksen: "If Trump Wants a Trade War, We Will Respond"
A Difficult Relationship: How the Gaza War Is Changing Germany's View of Israel
The German Chancellor's Diplomacy Offensive: Can Friedrich Merz Help Keep Trump from Abandoning Europe?
Death Zone Drama on K2: "Brother, It's Just Me and You"
Ruqqia Fights for Survival: Children in Gaza Facing Malnourishment as Humanitarian Situation Worsens
Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Gaza Cease-Fire Negotiations Hit A New Impasse Over An Old Dispute
South Korea’s New President Will Lead A Country More Divided Than Ever
Ukraine and Russia Met for 2nd Round of Talks as Attacks Escalate
In Drone Attacks on Russia, Ukraine Aims for Strategic and Symbolic Blow
Poland Election: Karol Nawrocki Wins Presidential Vote
Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
Suspect in Colorado fire attack planned for a year, FBI says
Russia and Ukraine fail again to agree ceasefire but commit to prisoner swap
How Ukraine carried out daring 'Spider Web' attack on Russian bombers
Disney makes hundreds more layoffs as it cuts costs
Mount Etna erupts as large plumes rise from volcano
Source: Blockchain.info
00000000000000000000ad08ae4d191a62914466df62c55d1785bea4b4fd2b01
Footnotes
----------
[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]
[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/
--
The Qubes Security Team
https://www.qubes-os.org/security/
Source: canary-043-2025.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmg+28MACgkQ1lWk8hgw
4GoYZw/7BDoro5qo2ZXxLO3IxeA6jomDhyVWgL/neAe+Y6QH2AZ8sLPewtZ9MJ7f
bll8DXLosz+WdWYbFJTnYvsSzZETPq/UQIWc88FXEVcaGAAEUtULDD0PvW0XK2J5
kd1kQiFUij+zWCGHcLU89ZeaCHIWgDMpdDputc8KmLsT2twLFn7sqCPT3K9l9iNY
7Az4wGrAtffVVQt//lU98AS6bsvuPf+J8LdoUm4P/PUGDdr7P8/B7OvXX59bYXaj
3G74LzcXVpesfcGapZbSztnxst7JfYK3N+CVaKaWNiF/hWRdrIYnLkTHZzJspB3m
nxaiTOgTIq7uxwmtenkurMCcyzNEEx6qsaAhkmc+lrqTqnNP2PpRR6EIIr6ROmPL
ev/1pBW/epsZFK6qOiaHxpyId8BTMSmYGzw5jJ1Bh70g8qg8XTNynv3w4tPAXS76
eU2bW5yPYvgmpI5vK4rmro+3G6GeqMJhZCBBGywUABwzbBG+cp9Vpc7zaSCqE1q3
SPxGopOEOPN65sUHVkqKoOpofYFB20IEF9bXV9Q343FFqn0zRpuOr/FxlYh1JFoL
1JUsJ3FBR3b/x93pddMvzBr0/nZBbrGns1SYTL7lKYGBG5h+cnq5FU+cIKlxdfks
j1QEO5yCwyFviqcVT/GyOresRiVORuRRc9XscN8d7ktMXQqKd7g=
=xzVH
-----END PGP SIGNATURE-----
Source: canary-043-2025.txt.sig.marmarek
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmg+8loACgkQSsGN4REu
FJCYxA/+NveIrCGg+GKe2nA0SrfpsPSjV3p2HCJeYjUpNVLihKWK4HKPS6seQAeq
2Q5a/E+8YqR7mnnTXOQotyhpKS5+MJ5rmMt3u6wshiYrFzXZ/GB0PmTH3qUPnyFW
+hdxmiFAgYHKdriYWFddMOKwNcvlR+gcK0x9QsIRctv4P/Cak/ggumP5435qKUIl
n+/qK5mzWP8etlJadOVzyybLAhwbzkRhVRnGDZdUQJMukCeerlAOGEHKl0jq7Wxx
MCbNExtodjF6Dh7Tn4Kw2Oy2WdaTLqvWDFLtmnJUSReeJopZ8qHSknLHdwj4kKzm
EnqwaazzBcKBSq0GwnjJ4hKnUanvAApaHs/lv83oGkb8Kynhvf+bHATvKe7bxy++
dQjI+zkD3ZuFcTQ0pPrnMlXOPgj+/UZIINlmKmCNnUQdGLUP+IQnxGQD76xkekBu
6dMAUDWPhzJWlQiPPs1Rwb4huXIS78+fUc8fpjVS6uT1AzSqMLVFFSuRmyLm+I1X
u+SWgTGxo+7xDhbXaVJoEVB9bEkUMnBxCLxVtRsRSnI6TXFLdpxnuLg4bn6Ojxw5
EmnxHN/LUZbY7ZL3cyQNu+gYm/BPp4phs3peP+C2o/HLVgQf7rUWGEIgUOoyMxyH
s0gM/I7fDV37IT1wBGY/pCIGd4b0AkdFw77IL4H+nj44IBE1G8Q=
=2iCq
-----END PGP SIGNATURE-----
Source: canary-043-2025.txt.sig.simon
The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.
A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong. A list of all canaries is available here.
The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)
Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.
Here is a non-exhaustive list of examples:
No, there are many canary-related possibilities that should not worry you. Here is a non-exhaustive list of examples:
In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something unusual happens with a canary, and we say it was a mistake and correct it (with valid signatures), you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.
If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced.
Alternatively, an adversary could fabricate a canary in an attempt to deceive the public. Such a canary would not be validly signed, but users who neglect to check the signatures on the fake canary would not be aware of this, so they may mistakenly believe it to be genuine, especially if it closely mimics the language of authentic canaries. Such falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
(For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg> fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.
gpg> trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> q
Use Git to clone the qubes-secpack repo.
$ git clone https://github.com/QubesOS/qubes-secpack.git
Cloning into 'qubes-secpack'...
remote: Enumerating objects: 4065, done.
remote: Counting objects: 100% (1474/1474), done.
remote: Compressing objects: 100% (742/742), done.
remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (1910/1910), done.
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$ gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$ cd qubes-secpack/
$ git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$ cd QSBs/
$ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$ cd ../canaries/
$ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
For this announcement (Qubes Canary 043), the commands are:
$ gpg --verify canary-043-2025.txt.sig.marmarek canary-043-2025.txt
$ gpg --verify canary-043-2025.txt.sig.simon canary-043-2025.txt
You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 043 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
03 June, 2025 05:01PM by Hace İbrahim Özbal
In this guest article, we would like to report on how we have future-proofed the IT infrastructure of around 50 schools in Oldenburg, how we have overcome challenges and which solutions have convinced us, and which exciting projects still lie ahead of us.
As the school IT department for the city of Oldenburg, we look after the IT infrastructure of 48 general and vocational schools with around 28,000 pupils and 2,300 teachers. With around 10,000 mobile and 2,500 stationary devices, our administrative workload is constantly growing. In order to support the increasing use of digital media in teaching, we had to comprehensively modernize the IT infrastructure in Oldenburg to make it fit for the future, streamline its administration and simplify the use of IT in schools.
Until a few years ago, most schools administered their IT infrastructure themselves and were also supported by external service providers, which led to major differences in technical equipment. Only the vocational schools had their own IT administration and therefore an expert contact person on site. As digitalization progressed, it quickly became clear that the previous structures were not sufficient. A modern, future-proof solution was urgently needed to meet the increasing requirements and provide the schools with optimum support.
In 2018, we kicked off the realignment of our school IT with a standardized media development concept. Our aim was to standardize the heterogeneous IT structures and provide central support for all educational establishments. It was particularly important to us to find a reliable, modular and user-friendly solution that would meet the technological and educational requirements of all school locations.
We were looking for a data protection-compliant and secure on-premises solution that we could operate centrally in the city’s data center, which would provide central user IDs and offer the option of connecting other IT services via secure interfaces. It should also support the use of modern forms of learning with mobile devices and offer personalized access to a school Wi-Fi network for all users.
We decided on the UCS@school open source platform because its centralized identity and access management significantly simplifies user administration. We also found its pedagogical functions such as exam mode and room management as well as administrative features such as automated user import, self-service for account adjustments and password resets very useful in order to relieve teachers of IT administration tasks as far as possible. With the new solution, we can administer and provide the entire IT environment centrally in the municipal data center. UCS@school also enables other services such as software distribution, mobile device management, mail and learning platforms to be connected to the central Identity & Access Management system. Thanks to the platform’s adaptability, we can tailor and expand the system to meet current requirements. A central IT service desk and service level agreements guarantee a stable and reliable IT infrastructure.
Our UCS@school environment in Oldenburg comprises 12 servers with different roles, which provide a stable, scalable basic IT structure. User data is maintained in the DaNiS school administration system, synchronized via the moin.schule state directory service and finally transferred to the central identity management system via the Schulconnex interface, where it is automatically imported. This simplifies administration, reduces errors and enables an efficient rollout of user accounts without having to manually import CSV files.
The UCS@school portal serves as a central platform for accessing web services, instructions and admin tools. Keycloak, installed as an app on two UCS backup servers, acts as an identity provider and enables Single Sign-On (SSO) for all connected services and external applications. Our Nextcloud instance, for collaborative work on documents, runs on several virtual servers for load balancing and reliability and is integrated into Identity & Access Management via SAML and LDAP so that teachers and students can access it via SSO in the future. Preconfigured groups automate the creation of class folders and storage space allocations.
For digital learning, Moodle and BigBlueButton are connected, both integrated via OpenID Connect (OIDC), which enables automatic course assignment and virtual classrooms in Moodle. Other services such as the timetabling software WebUntis, BILDUNGSLOGIN for digital educational media and Microsoft 365 are also connected via OIDC or SAML so that users can log in centrally with a single password.
The first project phase for the introduction of the IT infrastructure at five pilot schools went smoothly at the beginning, but we faced challenges: Our team was heavily involved in supporting the vocational schools, which meant that capacity for onboarding was limited.
The Covid-19 pandemic, supply bottlenecks and the “DigitalPakt Push” required a reprioritization, which is why not all pilot schools could be connected by 2021. The network renewal at all schools, the increased use of mobile devices and the shortage of specialists also presented us with additional challenges.
The network expansion is well advanced and numerous services have been rolled out centrally. A mobile device management system with around 10,000 devices has been set up and the school IT team has been expanded to 24 employees. A central service desk and a hotline offer comprehensive support, and SLAs have been agreed for 41 of 48 schools that transparently regulate rollout and support. Media concepts and LAN and Wi-Fi connections are to be completed by the end of 2024.
Around 6,000 users at vocational schools and 7,200 at general education schools are already using the central identity management and new functions. Despite the challenges, we have created a solid foundation for the digital transformation.
Another milestone has been reached with a Microsoft FWU framework agreement for vocational schools. The focus is now on expanding central services and providing additional user accounts. For 2025, we are planning an email and messenger solution as well as a standardized solution in the administration network, which will be tested in a pilot phase.
You can find out more details about our project in our presentation Oldenburg: Zentrales IAM mit Keycloak & moin.schule , which we gave at the Univention Summit in January 2025.
Der Beitrag Digital Education in Oldenburg: Ensuring Future-Proof School IT with UCS@school erschien zuerst auf Univention.
03 June, 2025 11:15AM by Alice Horstmann
“The most frustrating thing in the world for me, is when normal people judge a man like myself on what it really takes to extract greatness from nothing. It takes every bit of who you are.”
— David Goggins
Willpower is a muscle, just like lifting weights. Just like a muscle, in order to build it up, you must actively do things you do not want to do.
If you’re an athlete, you already understand this to an extent. When you’re training to run a marathon, you’re not training your physical muscles, as much as you are training your mind to overcome that barrier. When you progressively run more distance, and perhaps some days take it a bit easy, that’s a muscle.
Still don’t believe me? Don’t worry, the neuroscience is still new. It’s entirely possible I’m wrong. :)
I’ll start by sharing the podcast which made me realize this is an actual muscle:
https://medium.com/media/8ee564a66edae48d09412033fdadecf2/href
I’d encourage you to take the time and watch that podcast in its entirety. It will answer any remaining questions you have about David Goggins.
In case Goggins is a bit too crude for you, I’ll share the following quote from Dr. Huberman in that exact episode:
“In humans, not some mouse, when people do something they don’t want to do, it’s not about adding more work. It’s about adding more work that you don’t want to do.”
He also introduces us to the Anterior Mid-Cingulate Cortex.
“But Simon, you don’t have a doctorate. How do you know about this?”
I just showed you. The video is linked above. :)
The Anterior Mid-Cingulate Cortex is the part of your brain dealing with willpower. We now know we can measure it, how it’s related to the pleasure center of the brain, and even that it’s related to neuroticism. Decision-making skills? Yep.
As you can probably see, the science on this is incredibly new. If you’re in a position to fund academic research, please, graciously go fund research on this part of the brain. I genuinely want to see if someone can give me an agreeable dissent.
In the meantime, I’ll lean on psychology here:
https://medium.com/media/fcadeff588931b9ed5982bce2e058a3b/href
I’m human, just like you. I can’t tell you to do things you don’t want to do. I’m also not right 100% of the time. In fact, if you’re okay reading the Bible, I’d recommend you read Matthew 18:3–5. It will explain a lot.
In short, your motivation and willpower have to come from you.
Not me, not Goggins, not Jocko, not Peterson. It has to come from within. I’m not talking spiritually here (again, does not replace the underlying spiritual motivators), but rather, mentally.
I walked 15 miles last week.
Did I want to walk 15 miles? Nope. It hurt.
Did I do it anyway, because it was sunny outside and I’ve been sincerely focused on improving my habit structure? Yes.
Did I inspire tons of people to start walking in a healthy way, just by leading by example? Yes.
My advice to you all is, remember to take your day off. That doesn’t mean a lazy day:
https://medium.com/media/e4eff5dc25bd5d9dfdb27acfd29b8be4/href
Meaning, be responsible. Just know your own limits. Know your own personal local minimums and maximums. Know your absolutes, too. Show love to others; don’t hit them with a hammer from my toolbox, please.
That’s all for now. Be well. This week doesn’t have a theme; now I can finally write about more of the technical topics! :D
02 June, 2025 11:21AM by Greenbone AG
Launching a cloud service used to be about uptime and bandwidth. Today, the landscape has evolved. Clients expect resilience under pressure, protection against attacks, and seamless performance—even during traffic surges or degraded conditions.
For cloud and hosting providers, delivering raw infrastructure is no longer enough. Security and availability have become essential components of a modern service offering. Providers that include these capabilities as part of their stack gain a significant edge in a competitive market.
Whether it’s a regional hosting service or a public cloud platform, most providers face similar obstacles:
Including application delivery and security capabilities directly in the infrastructure stack is a strategic advantage. It helps providers deliver:
Integrating advanced delivery and security features is no longer a luxury reserved for large clouds—it’s an opportunity for providers of all sizes to offer high-value services without relying solely on generic cloud-native tools. With solutions like SKUDONET, these capabilities can be tailored, isolated per tenant, and scaled with the provider’s infrastructure—whether on-premise, hybrid, or public cloud.
SKUDONET Enterprise Edition is built to meet the operational and business needs of infrastructure providers—from hosting companies to public cloud platforms.
SKUDONET is compatible with a wide variety of infrastructure strategies. It can run as a standalone ADC on virtual machines, bare metal, or cloud instances using BYOL (Bring Your Own License) models—currently available on AWS and DigitalOcean. It’s also available as a cloud-native option via Cloud, for those who prefer managed, on-demand delivery. This flexibility makes it suitable for hosting providers, enterprise platforms, or service integrators looking for advanced capabilities without being tied to vendor-specific limitations.
Cloud and hosting providers are under pressure to deliver more than just uptime. Clients today expect performance, resilience, and built-in security from their infrastructure vendors.
By integrating a platform like SKUDONET, providers can evolve their offering from basic infrastructure to value-added service—strengthening their brand, improving customer retention, and unlocking new growth opportunities.
02 June, 2025 08:56AM by Nieves Álvarez
02 June, 2025 08:34AM by Markus Feilner
The 5th monthly Sparky project and donation report of 2025:
– Linux kernel updated up to 6.15.0, 6.12.31-LTS, 6.6.92-LTS
– added to repos: Meru (it replaced gmail-desktop), Sparky Package Tool
– fixed sddm-theme2-sparky, which now works with KF6 on testing trixie (it is based on Breeze)
Many thanks to all of you for supporting our open-source projects. Your donations help keep them going and…
01 June, 2025 05:29PM by pavroo
If you’re a Linux user craving a real-time strategy (RTS) game with the polish of Age of Empires and the historical depth of a university textbook—yet entirely free and open source—then you need to try 0 A.D.. This epic project by Wildfire Games is not just an open-source alternative to mainstream RTS games—it’s a serious contender in its own right, crafted with passion, precision, and community spirit.
0 A.D. (Zero Anno Domini) is a free, open-source, cross-platform RTS game that takes players deep into ancient history, allowing them to build and battle with civilizations from 500 B.C. to 500 A.D. The game is built using the custom Pyrogenesis engine, a modern 3D engine developed from scratch for this purpose, and available under the GPL license—yes, you can even tinker with the code yourself.
It’s not just a clone. 0 A.D. sets itself apart with:
Linux gamers often get the short end of the stick when it comes to big-name games—but 0 A.D. feels like it was made for us. Here’s why Linux users should care:
0 A.D. runs natively on Linux without the need for Wine, Proton, or compatibility layers. You can install it directly from your distro’s package manager or build it from source if you like full control.
For example:
# On Debian/Ubuntu
sudo apt install 0ad
# On Arch Linux
sudo pacman -S 0ad
# On Fedora
sudo dnf install 0ad
No weird dependencies. No workarounds. Just pure, native performance.
With Alpha 27 “Agni”, 0 A.D. now supports Vulkan, giving Linux users much better graphics performance, lower CPU overhead, and compatibility with modern GPU features. Plus, it includes AMD FidelityFX Super Resolution (FSR)—which boosts frame rates and visual quality even on low-end hardware.
This makes 0 A.D. one of the few FOSS games optimized for modern Linux graphics stacks like Mesa, Wayland, and PipeWire.
The development team and community are highly active, with new features, bug fixes, and optimizations arriving steadily. You don’t need to wait years for meaningful updates—0 A.D. grows with each alpha release, and Linux users are treated as first-class citizens.
Want to contribute a patch or translate the UI into Malay? You can. Everything is transparent and accessible.
Let’s dive deeper into why the gameplay itself shines.
Unlike many fast-paced arcade RTS games, 0 A.D. rewards planning and resource management. You’ll manage four resources—food, wood, stone, and metal—to construct buildings, raise armies, and advance through phases that represent a civilization’s growth. Advancing from village phase to town phase to city phase unlocks more units and structures.
Each civilization has unique architectural styles, tech trees, and military units.
Units in 0 A.D. aren’t just damage sponges. There’s formation control, terrain advantage, flanking tactics, and unit counters. The AI behaves strategically, and in multiplayer, experienced players can pull off devastating maneuvers.
Naval combat has received significant improvements recently, with better ship handling and water pathfinding—something many commercial RTS games still struggle with.
0 A.D. includes skirmish and random maps, plus the Atlas scenario editor for building your own.
And thanks to the integrated mod downloader, you can browse, install, and play with community mods in just a few clicks. Want to add new units, tweak balance, or add fantasy elements? You can.
Play with friends over LAN, the Internet, or against the built-in AI.
There’s even an in-game lobby where players coordinate matches across all platforms.
The 0 A.D. project thrives because of its community of developers, artists, translators, and players.
If you’re a Linux user and want to contribute to an ambitious FOSS project, this is the perfect gateway into game development, design, or open collaboration.
Here’s a quick reference:
# On Debian/Ubuntu
sudo apt install 0ad
# On Arch Linux
sudo pacman -S 0ad
# On Fedora
sudo dnf install 0ad
# On openSUSE
sudo zypper install 0ad
To build from source, follow the official instructions at https://trac.wildfiregames.com/wiki/BuildInstructions
0 A.D. is more than just a game—it’s a testament to what free and open-source software can achieve. For Linux gamers, it’s a rare gem: a game that respects your platform, performs well, and lets you own your experience entirely.
So whether you’re a seasoned general or a curious strategist, download 0 A.D. today and relive history—on your terms.
Visit https://play0ad.com to download and start playing.
The post 0 A.D. on Linux: A Stunning, Free RTS Experience That Rivals the Best appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
When it comes to customizing your shell experience, most people are familiar with Oh My Zsh, the popular Zsh configuration framework. But what if you’re a Bash user—loyal to the world’s most widely used shell? Enter Oh My BASH, a community-driven framework that brings power, flexibility, and style to Bash.
Whether you’re a casual Linux user, a seasoned sysadmin, or a developer who loves to tinker with terminal aesthetics and productivity tools, Oh My BASH might just be the productivity upgrade you’re looking for. In this post, we’ll dive deep into what Oh My BASH is, its origins, features, pros and cons, and who it’s best suited for.
Oh My BASH is an open-source, community-maintained framework for managing your Bash shell configuration. Much like its inspiration, Oh My Zsh, this project aims to simplify the process of customizing and enhancing your Bash environment.
With features like themes, plugins, and modular configuration, Oh My BASH turns the plain, default Bash shell into a more vibrant, powerful, and efficient tool.
The project was created as a direct counterpart to Oh My Zsh, for users who prefer sticking with Bash rather than switching shells. Bash remains the default shell on many Unix-like systems, including most Linux distributions and macOS (until Catalina).
Oh My BASH started as a fork of Oh My Zsh and was adapted for Bash users by enthusiasts in the open-source community. Its development continues on GitHub: https://github.com/ohmybash/oh-my-bash
Though not as widely adopted as Oh My Zsh, it has built a niche community of loyal users who appreciate what it offers without requiring them to abandon Bash.
The primary goal of Oh My BASH is to simplify managing your .bashrc. It acts as a productivity booster and aesthetic enhancer for your command-line interface, all while staying true to Bash’s simplicity and compatibility.
Here are some of the standout benefits of using Oh My BASH:
Oh My BASH includes a variety of prompt themes. These themes add colors, Git status indicators, timestamps, and other useful elements to your prompt, making it visually appealing and informative.
Rather than dumping all settings into a monolithic .bashrc, Oh My BASH supports a modular structure. You can organize custom aliases, environment variables, and plugin settings in separate files.
It comes with a set of built-in plugins that extend Bash’s functionality. These include helpful aliases and functions for Git, Docker, system info, SSH, and more.
Oh My BASH comes packed with smart defaults, including common aliases and shell functions that can save you typing time and reduce mistakes.
It’s open-source and maintained by contributors around the world. You can create your own plugins or themes, or contribute back to the main repository.
Installing Oh My BASH is straightforward and reversible. You can easily uninstall it if needed, reverting to your old .bashrc setup.
While Oh My BASH offers many benefits, it’s not without drawbacks. Here are some things to consider:
As with many feature-rich shell frameworks, Oh My BASH can slightly increase your shell’s startup time, especially if you enable many plugins and themes.
Oh My BASH is not as actively maintained or as widely used as Oh My Zsh. You may find fewer themes, plugins, and community tutorials available.
The plugin ecosystem is smaller compared to other frameworks. Some power users may find it limited for more advanced workflows.
If you prefer a minimalist setup or enjoy hand-tweaking your .bashrc, this framework might feel like overkill or too opinionated.
Oh My BASH is ideal for Bash loyalists who want a nicer prompt, helpful plugins, and sensible defaults without switching shells. It may not be the best fit for minimalists or for power users who have already invested heavily in a hand-rolled configuration.
To install Oh My BASH, run this command in your terminal:
bash -c "$(curl -fsSL https://raw.githubusercontent.com/ohmybash/oh-my-bash/master/tools/install.sh)"
This will back up your existing .bashrc and replace it with a new one using the Oh My BASH structure. You can start exploring themes by editing your .bashrc file and changing the OSH_THEME variable.
Example:
OSH_THEME="font"
To list available themes, check the themes/ directory in the cloned .oh-my-bash folder.
Add your own alias commands in your .bash_aliases file, drop personal plugins and themes into the custom/ directory, and keep your .bashrc clean by leveraging modular files in .oh-my-bash/custom. A small example follows below.
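For instance, a tiny custom alias file might look like this; the file name is hypothetical, and this sketch assumes Oh My BASH sources .sh files placed under its custom directory:
# ~/.oh-my-bash/custom/my-aliases.sh
alias ll='ls -alF'
alias gs='git status'
alias update='sudo apt update && sudo apt upgrade'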
Oh My BASH is a solid choice for anyone looking to improve their Bash shell experience. It brings modern features, customization, and user-friendliness to a traditional and powerful shell without forcing you to switch ecosystems.
It may not have the flashiest bells and whistles of Oh My Zsh or Fish, but for many users, its blend of power and simplicity hits the sweet spot. If you’re someone who enjoys working in the terminal and wants a smoother, more expressive experience—give Oh My BASH a try.
Have you tried Oh My BASH? Got a favorite theme or plugin? Share your thoughts or custom setups in the comments below!
The post Exploring Oh My BASH: The Bash Framework You Might Be Missing Out On appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
If you’re looking for a sleek, lightweight, and no-nonsense dock for your Linux desktop, Plank might just be your perfect match. Built for simplicity and performance, Plank provides just what you need — and nothing you don’t.
Plank is a minimalist dock designed for Linux desktop environments. Its goal is to be the simplest dock on the planet. It’s lightweight, visually clean, and designed to stay out of your way while still being highly functional.
Plank is also the core backend used by Docky (from version 3.0.0 onward), meaning it can be extended to create more feature-rich docks if needed — but Plank itself keeps things lean.
To install Plank on Debian-based distributions:
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
Launch it by typing plank in a terminal, or add it to your startup applications.
You can tweak Plank’s appearance using community-made themes.
Themes live in ~/.local/share/plank/themes (per user) or /usr/share/plank/themes (system-wide).
If you want a beautiful, no-frills dock for your Linux desktop, Plank nails the balance between functionality and simplicity. It just works — and looks good doing it.
Check out the source code or contribute at:
github.com/ricotz/plank
The post Plank: The Minimalist Dock for a Clean Linux Desktop appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
“Do not forget about the Trolley Problem,” a middle school teacher once told us. “You’ll look back on this moment and be glad you studied this specific dilemma early.”
In hindsight, she was right. In fact, I had no idea I would need to use that knowledge until I was directly in the middle of the situation.
Imagine this: you’re standing by a railroad track. There’s a lever in front of that railroad track, next to a button. The button sends an affirmative signal to a waiting train, so it can proceed. The lever allows the operators to switch between the primary track, and a secondary track, just in case.
Before pressing the button, standard operating procedure is to manually inspect the tracks, to ensure no debris exists. The operators even run a small cart through, to ensure the tracks themselves are not in bad shape.
All of the operators are out for the weekend. That’s okay, they just helped prepare the train for departure. They deserve the time off.
Except, someone bumped the big red button on the way out, without realizing it.
The train starts to build momentum. There’s no stopping it now. Everyone was so busy preparing the train that checking whether the tracks were ready was simply not on their mind.
How could they forget? Well, the cart was out for repair. It passed the last three dozen times, why would it fail this time? Why would there suddenly be debris on the tracks?
Imagine you’re an engineer who just happened to walk by the button and lever. You see the train is starting to move. There’s debris even before the junction, and you can hear it. It’s loud.
But again, the operators are off-site. It’s not technically “your job” to even touch the lever. You’re just an engineer, you’re not an operator. Why would you flip that switch?
Well, because there’s a large rock straight ahead. If nobody flips the lever, the train will hurtle right toward it and crash.
So, last second, you walk up to the lever, and flip it. That being said, in order to clear the last bit of debris off that secondary track to avoid the rock, you have to sacrifice your favorite wooden stick.
You spent decades whittling that stick until it was perfect. Except now, it’s the only thing you have to stop the train.
Would you sacrifice your favorite stick to save the train from the rock?
Let’s entertain both scenarios.
What if you just “let it be,” and pretended you didn’t see it? That would probably make some feel guilty, as if they were the person that pressed the button.
The other option would be to sacrifice your stick. It’s a material object, but you put years of effort into it. That would make you feel a different set of emotions.
That’s why the Trolley Problem is so difficult. You’re either sacrificing one person, or multiple. You have to choose one, you can’t choose both.
For the sake of argument, let’s say there’s a perfect 50% chance that you walk by the train. There are security cameras, but they can’t be viewed off-site due to a technical outage.
A few people in management heard the sound of the train from a great distance away, which made them nervous. They might be asking several questions, such as:
* Is the train intact?
* Which track did the train go down?
* Was there actually any debris in the way?
* Who caused the train to start moving prematurely?
It was an honest mistake, someone just pushed the button. All of these questions, though, almost remind me of a different dilemma of sorts.
What would you do?
Talk to you all on Monday.
“What if there was a way to learn critical thinking and logical reasoning skills that are actually applicable in real life?” Well, there is.
Many people are required to learn mathematics in school. For some, it’s an arduous task requiring days to wrap their head around, and for others, it’s as simple as finding the time to practice and do math. (I would highly recommend Atomic Habits by James Clear.)
A common misconception exists that “you’ll never need to use math in real life.” My question to the people spreading that misconception would be, “what part of math?” People are commonly taught Algebra, working up to subjects like Trigonometry and Pre-Calculus.
That’s where the misconception comes from. People are taught the foundational skills, but are many people taught the origins of math, why it exists, and what you can use some niche parts of math for?
See, many people are taught about what you’re supposed to use math for. But did anyone actually make it interesting for you? Did you feel engaged during your math classes? Have you always wanted to study it further on your own, with no idea where to start?
That’s where this post comes in. Not only that, you will be shocked at the amount of places you’ll be able to use math in your everyday life, once you understand the basics of what makes each part unique:
With this being said, I’m not going to focus on Algebra. I’m going to focus on Calculus.
What is Calculus? Take a look at this video for an easy explanation:
Don’t believe there are real world examples?
If you’re now inspired to continue studying Calculus like I am, then I would personally recommend Khan Academy. I’ve used that website for 15 of my 23 years with great success, and it’s certainly not limited to mathematics.
“Okay, so are there any concepts that you believe everyone should know besides Calculus?”
Proofs. They’ll help you think in a more logical manner. If you understand the basics of logic gates and/or conditional statements, you’ll find them fairly easy to get started with.
If you’re learning math, study proofs. Especially if you’d like to make it a career, learning to prove your work (whether in text or via actual written math) is a great tool to have in your toolbox. I’ve been formally taught how to write mathematical proofs (occasionally you’ll see QED in my writing), and from experience, I can tell you, it was worth every second.
I’ll let you all know when I reach 100% on Calculus in Khan Academy. In the meantime, I’m just going to keep doing math. And, I’m just going to keep writing.
Be well. Talk soon.
If you’re a fan of real-time strategy (RTS) games and use Linux, OpenRA is a must-have. This open-source project breathes new life into classic Westwood titles like Command & Conquer: Red Alert, Tiberian Dawn, and Dune 2000, offering modern enhancements while preserving the nostalgic gameplay.
OpenRA is a community-driven initiative that reimagines classic RTS games for contemporary platforms. It’s not just a remake; it’s a complete overhaul, with a modernized interface, cross-platform support, and online multiplayer. These improvements ensure that both veterans and newcomers can enjoy a refined RTS experience.
The March 2025 release brought significant enhancements and bug fixes, demonstrating OpenRA’s commitment to evolving and enhancing the RTS genre.
Installing OpenRA on Linux is straightforward. Packages are available from the official website for most distributions, or you can install OpenRA via:
sudo snap install openra
flatpak install flathub net.openra.OpenRA
These methods ensure that OpenRA integrates smoothly with your system.
OpenRA stands out in the Linux gaming landscape: it is fully open source, runs natively, and is backed by an active development community. These qualities make OpenRA not just a game but a platform for learning and community interaction.
For a visual overview, plenty of gameplay videos and reviews are available online.
If you’re exploring more strategy titles, several other open source RTS projects are worth considering; each offers unique gameplay experiences and is well-supported on Linux platforms.
OpenRA exemplifies how classic games can be revitalized for modern audiences. Its blend of nostalgia and innovation makes it a standout choice for strategy enthusiasts on Linux.
The post Rediscover Classic RTS with OpenRA on Linux appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
Everyone has a point of view. When you’re writing a story, notes for a presentation, or even non-fiction, you always have a point of view.
Are you looking to learn more about servers but don’t know where to start? Ubuntu Server provides the perfect foundation for beginners and experienced users alike. With its reliability, security features, and extensive community support, Ubuntu Server makes it easy to create useful projects that enhance your home network or provide valuable skills for your career.
In this guide, I’ll walk you through 10 practical projects that anyone can set up using Ubuntu Server. Each project requires minimal hardware and provides real-world benefits. Let’s get started!
What it is: A centralized location to store and access your files from any device on your network.
Why Ubuntu Server is perfect: Ubuntu’s stability ensures your data remains safe, while its lightweight nature means even older hardware can serve as a reliable NAS.
Setup overview:
sudo apt update
sudo apt install samba -y
sudo mkdir -p /media/nas
sudo chmod 777 /media/nas
sudo nano /etc/samba/smb.conf
[NASShare]
path = /media/nas
browseable = yes
read only = no
force create mode = 0660
force directory mode = 2770
valid users = @users
sudo smbpasswd -a yourusername
sudo systemctl restart smbd
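As a quick optional check (assuming the smbclient package is installed), you can list the shares the server is exporting, using the user added with smbpasswd above:
# List available shares from the local Samba server
smbclient -L //localhost -U yourusername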
Benefits: Access your files from any device, centralize your backups, and stream media throughout your home.
What it is: Your own personal cloud storage solution similar to Dropbox or Google Drive but hosted on your own hardware.
Why Ubuntu Server is perfect: Ubuntu’s package management makes installing dependencies straightforward, while LTS releases ensure long-term stability.
Setup overview:
sudo apt update
sudo apt install apache2 mariadb-server libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip -y
sudo mysql_secure_installation
sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost';
FLUSH PRIVILEGES;
EXIT;
cd /tmp
wget https://download.nextcloud.com/server/releases/latest.zip
unzip latest.zip
sudo mv nextcloud /var/www/html/
sudo chown -R www-data:www-data /var/www/html/nextcloud/
sudo nano /etc/apache2/sites-available/nextcloud.conf
<VirtualHost *:80>
DocumentRoot /var/www/html/nextcloud/
ServerName your_domain_or_IP
<Directory /var/www/html/nextcloud/>
Options +FollowSymlinks
AllowOverride All
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
sudo a2ensite nextcloud.conf
sudo a2enmod rewrite headers env dir mime
sudo systemctl restart apache2
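If you prefer to finish the Nextcloud setup from the terminal instead of the web installer, the bundled occ tool can do it. A minimal sketch, reusing the database name, user, and password created above (the admin credentials are placeholders to replace):
cd /var/www/html/nextcloud
sudo -u www-data php occ maintenance:install \
  --database "mysql" --database-name "nextcloud" \
  --database-user "nextclouduser" --database-pass "your_password" \
  --admin-user "admin" --admin-pass "choose_a_strong_password"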
Benefits: Maintain control over your data, avoid subscription fees, and get unlimited storage based on your hardware.
What it is: A powerful media server that organizes your movies, TV shows, music, and photos, making them accessible from anywhere.
Why Ubuntu Server is perfect: Ubuntu’s efficiency means more resources are available for transcoding media, and its compatibility with Plex is excellent.
Setup overview:
sudo apt update
sudo apt install apt-transport-https curl -y
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update
sudo apt install plexmediaserver -y
sudo mkdir -p /opt/plexmedia/{movies,tv,music,photos}
sudo chown -R plex:plex /opt/plexmedia
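Once the package is installed, confirm the service is running and finish configuration from a browser; the Plex web interface listens on port 32400:
sudo systemctl status plexmediaserver
# Then open http://your-server-ip:32400/web and sign in to add your libraries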
Benefits: Stream your media collection to any device, automatic metadata fetching, and smart organization of your content.
What it is: A central hub to control and automate your smart home devices.
Why Ubuntu Server is perfect: Ubuntu’s reliability ensures your home automation stays running, while its hardware compatibility supports various IoT devices.
Setup overview:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce -y
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
mkdir ~/homeassistant
cd ~/homeassistant
nano docker-compose.yml
version: '3'
services:
homeassistant:
container_name: homeassistant
image: ghcr.io/home-assistant/home-assistant:stable
volumes:
- ./config:/config
environment:
- TZ=YOUR_TIME_ZONE
restart: always
network_mode: host
sudo docker-compose up -d
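To verify the container started, list the running containers and then open the Home Assistant web interface, which listens on port 8123 by default:
sudo docker ps
# Then browse to http://your-server-ip:8123 to create the first user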
Benefits: Centralized control of all smart devices, powerful automation capabilities, and reduced dependence on cloud services.
What it is: Your own VPN server that allows secure remote access to your home network and protects your privacy when using public Wi-Fi.
Why Ubuntu Server is perfect: Ubuntu’s security focus makes it ideal for VPN services, and recent kernels include built-in WireGuard support.
Setup overview:
sudo apt update
sudo apt install wireguard -y
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod 600 /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
sudo nano /etc/wireguard/wg0.conf
[Interface]
PrivateKey = YOUR_SERVER_PRIVATE_KEY
Address = 10.0.0.1/24
ListenPort = 51820
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
# Client configuration example
[Peer]
PublicKey = CLIENT_PUBLIC_KEY
AllowedIPs = 10.0.0.2/32
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0
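For reference, a matching client configuration might look like the sketch below; the private key, server public key, endpoint, and DNS server are placeholders you would substitute with your own values:
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24
DNS = 1.1.1.1
[Peer]
PublicKey = YOUR_SERVER_PUBLIC_KEY
Endpoint = your.server.public.ip:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25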
Benefits: Secure remote access to your home network, enhanced privacy on public networks, and better control over your internet connection.
What it is: A server to host your personal website, blog, or web application.
Why Ubuntu Server is perfect: Ubuntu’s robust LAMP stack support makes it the go-to choice for web hosting environments.
Setup overview:
sudo apt update
sudo apt install apache2 mariadb-server php libapache2-mod-php php-mysql -y
sudo mysql_secure_installation
sudo mkdir -p /var/www/yourwebsite
sudo chown -R $USER:$USER /var/www/yourwebsite
echo '<?php phpinfo(); ?>' > /var/www/yourwebsite/index.php
sudo nano /etc/apache2/sites-available/yourwebsite.conf
<VirtualHost *:80>
ServerName yourwebsite.local
ServerAlias www.yourwebsite.local
DocumentRoot /var/www/yourwebsite
ErrorLog ${APACHE_LOG_DIR}/yourwebsite_error.log
CustomLog ${APACHE_LOG_DIR}/yourwebsite_access.log combined
</VirtualHost>
sudo a2ensite yourwebsite.conf
sudo systemctl restart apache2
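Because yourwebsite.local is not a real DNS name, one simple way to test is to map it to the server’s address in /etc/hosts on the client machine (192.168.1.10 below is a placeholder) and request the page:
echo "192.168.1.10 yourwebsite.local www.yourwebsite.local" | sudo tee -a /etc/hosts
curl http://yourwebsite.local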
Benefits: Full control over your web presence, no monthly hosting fees, and valuable skills for web development.
What it is: A network-wide ad blocker that improves browsing speed and privacy by blocking ads at the DNS level.
Why Ubuntu Server is perfect: Ubuntu’s efficiency means Pi-hole can run alongside other services without issues, making it a perfect addition to any home server.
Setup overview:
sudo apt update
sudo apt install curl -y
curl -sSL https://install.pi-hole.net | bash
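After the installer finishes, two common follow-up steps are setting the web admin password and pointing your devices (or your router’s DHCP settings) at the server’s IP address as their DNS resolver:
pihole -a -p
# The dashboard is then available at http://your-server-ip/admin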
Benefits: Faster browsing, reduced bandwidth usage, enhanced privacy, and protection from malicious domains.
What it is: A lightweight, self-hosted Git service similar to GitHub but running on your own hardware.
Why Ubuntu Server is perfect: Ubuntu’s package management and system resource efficiency make it ideal for hosting developer tools like Git services.
Setup overview:
sudo apt update
sudo apt install git curl sqlite3 -y
sudo adduser --system --group --disabled-password --shell /bin/bash --home /home/git git
VERSION=$(curl -s https://api.github.com/repos/go-gitea/gitea/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/^v//')
sudo wget -O /tmp/gitea https://dl.gitea.io/gitea/${VERSION}/gitea-${VERSION}-linux-amd64
sudo chmod +x /tmp/gitea
sudo mv /tmp/gitea /usr/local/bin/gitea
sudo mkdir -p /var/lib/gitea/{custom,data,log}
sudo chown -R git:git /var/lib/gitea
sudo chmod -R 750 /var/lib/gitea
sudo mkdir /etc/gitea
sudo chown root:git /etc/gitea
sudo chmod 770 /etc/gitea
sudo nano /etc/systemd/system/gitea.service
[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
[Service]
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/var/lib/gitea
ExecStart=/usr/local/bin/gitea web --config /etc/gitea/app.ini
Restart=always
Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now gitea
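Gitea listens on port 3000 by default; once the service is active, the remaining configuration happens in the browser:
sudo systemctl status gitea
# Then open http://your-server-ip:3000 and complete the initial setup wizard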
Benefits: Full control over your code repositories, no limitations on private repositories, and integrated issue tracking.
What it is: A dedicated server for hosting multiplayer games like Minecraft, Terraria, or Counter-Strike.
Why Ubuntu Server is perfect: Ubuntu’s stability and resource efficiency allow game servers to run smoothly and consistently, even on modest hardware.
Setup overview for Minecraft Server:
sudo apt update
sudo apt install openjdk-17-jre-headless screen -y
sudo adduser --system --home /opt/minecraft-server minecraft
sudo addgroup --system minecraft
sudo adduser minecraft minecraft
sudo su - minecraft
mkdir -p ~/server
cd ~/server
wget https://piston-data.mojang.com/v1/objects/8f3112a1049751cc472ec13e397eade5336ca7ae/server.jar -O minecraft_server.jar
echo "eula=true" > eula.txt
echo '#!/bin/sh
cd /opt/minecraft-server/server
# Run the server inside a named screen session so the ExecStop commands in the
# systemd unit below can send console commands to it
/usr/bin/screen -DmS minecraft java -Xmx2G -Xms1G -jar minecraft_server.jar nogui' > start.sh
chmod +x start.sh
exit
sudo nano /etc/systemd/system/minecraft.service
[Unit]
Description=Minecraft Server
After=network.target
[Service]
User=minecraft
Nice=5
KillMode=none
SuccessExitStatus=0 1
InaccessibleDirectories=/root /sys /srv /media -/lost+found
NoNewPrivileges=true
WorkingDirectory=/opt/minecraft-server/server
ReadWriteDirectories=/opt/minecraft-server/server
ExecStart=/opt/minecraft-server/server/start.sh
ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "say SERVER SHUTTING DOWN IN 10 SECONDS. SAVING ALL MAPS..."\015'
ExecStop=/bin/sleep 10
ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "save-all"\015'
ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "stop"\015'
[Install]
WantedBy=multi-user.target
sudo systemctl enable minecraft.service
sudo systemctl start minecraft.service
Benefits: Host your favorite games with friends without subscription fees, customize server settings, and install mods freely.
What it is: A platform for running containerized applications, making it easy to deploy and manage various services.
Why Ubuntu Server is perfect: Ubuntu has excellent Docker support, regular updates, and a well-maintained Docker repository.
Setup overview:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce -y
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo usermod -aG docker $USER
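# Log out and back in (or run 'newgrp docker') so the new group membership takes effect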
docker run hello-world
mkdir ~/docker-test
cd ~/docker-test
nano docker-compose.yml
version: '3'
services:
web:
image: nginx:latest
ports:
- "8080:80"
volumes:
- ./html:/usr/share/nginx/html
restart: always
mkdir -p html
echo "<html><body><h1>My Docker Container is Working!</h1></body></html>" > html/index.html
docker-compose up -d
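To confirm the container is serving the test page created above, request it from the server itself:
curl http://localhost:8080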
Benefits: Easily deploy complex applications, maintain isolated environments, and simplify updates and maintenance.
Throughout these projects, Ubuntu Server demonstrates its incredible versatility and power: it is free, stable, well documented, and backed by a large community and long-term support releases. Ready to begin? Pick a project that interests you, install Ubuntu Server on a spare machine or virtual machine, and work through the steps above.
Remember, these projects are just the beginning. As you become more comfortable with Ubuntu Server, you’ll discover countless more possibilities for creating valuable services for your home or small business.
Have you built any interesting projects with Ubuntu Server? Share your experiences in the comments below!
This guide was created to help newcomers explore the capabilities of Ubuntu Server. For enterprise environments, consider Ubuntu’s commercial support options.
The post 10 Simple and Useful Projects Anyone Can Build with Ubuntu Server appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
When installing or managing a Linux system, one of the most debated topics is whether to use a swap partition or a swap file—or even use swap at all.
In this post, we’ll go back to the origin of swap, explore why swap was needed, how modern systems use (or avoid) it, and the advantages and disadvantages of both swap partitions and swap files.
Swap is disk space used by the operating system when physical RAM is full. It acts as an extension of RAM to allow the system to offload memory pages that are not immediately needed, keeping critical applications running smoothly.
Swap originated in the early days of computing, when physical RAM was scarce and expensive. Unix systems implemented swap space as a way to avoid running out of memory entirely—this idea carried over to Linux.
Even with modern hardware, swap still has roles: it absorbs memory pressure spikes, backs hibernation, and lets the kernel move rarely used pages out of RAM. On the other hand, heavy swapping on slow disks can bring a system to a crawl, and machines with plenty of RAM may hardly ever touch it.
Criteria | Swap Partition | Swap File
---|---|---
Resize Flexibility | Hard to resize (requires repartitioning) | Easy to resize or recreate
Setup Complexity | Higher (dedicated partition needed) | Low (a single file)
Performance (HDD) | Slightly better (contiguous space) | Comparable in practice
Performance (SSD) | Effectively identical | Effectively identical
Works with Hibernate | Yes | Possible, but needs extra configuration
Dynamic Management | No | Yes (create or remove on the fly)
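For reference, creating and enabling a swap file takes only a few commands; the 2G size below is just an example:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab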
Modern Linux kernels support zram and zswap, which compress memory pages before swapping to disk: zram creates a compressed swap device entirely in RAM, while zswap acts as a compressed cache in front of your existing swap.
These are great for low-memory systems like Raspberry Pi or embedded devices.
Swap is not dead—it’s evolved.
Whether you choose a swap partition or a swap file depends on your needs: a partition suits fixed layouts and hibernation, while a file gives you flexibility and easy resizing.
As always with Linux, the choice is yours—and that’s the power of open systems.
The post Swap Partition vs Swap File on Linux: Everything You Need to Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
As a system administrator, mastering the command line is critical. Whether you’re maintaining servers, managing users, monitoring performance, or securing your system, knowing the right tools can make your job faster, easier, and more efficient.
Here’s a comprehensive list of the most important Linux commands every sysadmin should know — organized by category, explained in plain language, and ready to turn you into a command-line ninja.
Keeping your system healthy starts with knowing what’s going on behind the scenes.
top / htop: live view of processes and resource usage; htop is an enhanced version of top with a cleaner UI.
uptime: how long the system has been running, plus load averages.
vmstat: memory, swap, and CPU statistics.
iostat: disk I/O statistics.
free -h: memory and swap usage in human-readable units.
sar: collect and report historical system activity.
ps aux: snapshot of all running processes.
lsof: list open files (and the processes holding them).
strace: trace the system calls a process makes.
Disk space issues are common — be ready to investigate and clean up.
df -h: free space per mounted filesystem.
du -sh *: size of each item in the current directory.
lsblk: list block devices and their partitions.
mount / umount: attach and detach filesystems.
fdisk -l / parted -l: list partition tables.
blkid: show filesystem UUIDs and labels, useful for /etc/fstab.
find / -name filename: search for files by name.
file: identify a file’s type.
stat: detailed metadata for a file (size, permissions, timestamps).
Managing users and access rights is at the heart of system security.
adduser / useradd: create user accounts (adduser is more user-friendly).
passwd: set or change a user’s password.
usermod: modify an existing account (groups, shell, home directory).
deluser / userdel: remove user accounts.
groupadd, groupdel, gpasswd: manage groups and their members.
chmod: change file permissions (e.g., chmod 755).
chown / chgrp: change file ownership and group.
id: show a user’s UID, GID, and group memberships.
who, w, last: see who is logged in now and who logged in before.
sudo: run commands with elevated privileges.
Networking is critical on any server. These tools help diagnose and configure network connections.
ip a / ip link: show interfaces and their addresses.
ip r: show the routing table.
ss -tuln / netstat -tuln: list listening ports and sockets.
ping / traceroute: test reachability and trace the network path.
dig / nslookup: query DNS records.
curl / wget: fetch URLs and test HTTP endpoints.
nmap: scan hosts and ports.
tcpdump: capture and inspect network traffic.
hostname: show or set the system’s hostname.
Security is non-negotiable. These commands help you lock things down.
ufw / iptables: manage the firewall.
fail2ban-client: control fail2ban, which bans hosts after repeated failed logins.
auditctl, ausearch: configure and query the audit subsystem.
getenforce, setenforce: check or toggle SELinux enforcement.
ssh / sshd: secure remote shell client and server.
scp, rsync: copy files securely between hosts.
Installing and managing software is essential. Use the right tool based on your distro:
Debian/Ubuntu: apt, dpkg, apt-cache
RHEL/Fedora: yum, dnf, rpm
Arch Linux: pacman
Universal packages: snap, flatpak
Examples:
apt update && apt upgrade
dnf install nginx
pacman -S htop
Keep your system running smoothly by managing services and watching logs.
journalctl: query systemd logs.
dmesg: kernel ring buffer messages.
systemctl: manage services on systemd systems (start, stop, enable, etc.).
service: legacy wrapper for managing services.
crontab -e: edit scheduled tasks for the current user.
at: schedule a one-off task.
logrotate: rotate and compress log files.
shutdown, reboot: power off or restart the system safely.
Protect data and automate your tasks for efficiency.
rsync -avh: efficient local and remote file synchronization.
tar -czf archive.tar.gz folder/: create compressed archives.
scp user@host:file .: copy files over SSH.
cron, anacron: schedule recurring jobs (see the example after this list).
bash / sh: script repetitive tasks.
tmux / screen: persistent terminal sessions you can detach and reattach.
ncdu: interactive disk usage browser, a friendlier du for quick inspection.
glances: all-in-one system monitoring dashboard.
nc (netcat): read and write raw network connections for quick tests.
alias: create shortcuts for long commands.
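For instance, combining cron and rsync gives you a simple nightly backup. A hypothetical crontab entry (the paths are placeholders) might look like this:
# Every night at 02:30, mirror /home to a backup drive and append output to a log
30 2 * * * rsync -avh --delete /home/ /mnt/backup/home/ >> /var/log/home-backup.log 2>&1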
These Linux commands are not just helpful — they’re the foundation of any good system administrator’s toolbox. Mastering them will give you confidence to manage, troubleshoot, and optimize Linux systems whether you’re working with a single server or an entire fleet of machines.
The post Essential Linux Commands Every System Administrator Should Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
Whether you’re a seasoned sysadmin or just diving into Linux, understanding the boot process is key to mastering how your system starts up. The Linux boot process is a fascinating journey that transforms powered-off hardware into a fully operational system. In this post, we’ll walk through the entire boot sequence, breaking down each stage with technical clarity.
The process begins the moment you press the power button.
The firmware (BIOS or UEFI) initializes the hardware, runs its power-on self-test, and then locates and loads a bootloader such as GRUB (on UEFI systems, an .efi executable). Note: UEFI is now the standard for most modern hardware.
The bootloader is the program that loads and starts the Linux kernel.
The bootloader loads the kernel image (e.g., /boot/vmlinuz-linux) and an initial RAM disk into memory, optionally shows a boot menu, and then hands control to the kernel. Example of GRUB config:
linux /boot/vmlinuz-6.1.0 root=/dev/sda2 ro quiet splash
initrd /boot/initrd.img-6.1.0
Now, the Linux kernel takes control.
The kernel initializes drivers and devices, mounts the root filesystem (with the help of the initramfs), and then starts the init process (PID 1) – the first user-space program. If anything goes wrong here (like a missing root filesystem), you’ll see a kernel panic.
The init system is the “conductor” that starts all necessary services.
When the init system is systemd, it reads unit files from /etc/systemd/system/ and /usr/lib/systemd/system/, then brings the system up to its default target, starting services such as sshd, NetworkManager, cron, and more. You can inspect boot performance using:
systemd-analyze
Once all services are up and running:
getty spawns login prompts on virtual terminals (e.g., tty1–tty6). After login, the system is fully operational, ready for your commands or applications.
[ Power On ]
↓
[ BIOS / UEFI ]
↓
[ Bootloader (GRUB/systemd-boot) ]
↓
[ Kernel + initramfs ]
↓
[ Init system (systemd, etc.) ]
↓
[ System Services + Targets ]
↓
[ Login Prompt / GUI ]
systemd-analyze: show total boot time.
systemd-analyze blame: list the units that took longest to start.
journalctl -b: view logs from the current boot.
The Linux boot process may seem complex, but each stage is logically structured to ensure a flexible, powerful, and modular startup system. Whether you’re debugging a failed boot or optimizing your boot time, understanding this process equips you with the tools to handle your system like a pro.
If you’re using Linux in embedded projects, servers, or even on low-power SBCs like Raspberry Pi, this knowledge becomes even more critical.
The post Understanding the Linux Boot Process: From Power On to Login appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.
Qualys discovered two vulnerabilities in various Linux distributions which allow a local attacker with permission to create user namespaces to leak core dumps for processes of suid executables. These affect both apport, the Ubuntu default core dump handler (CVE-2025-5054), and systemd-coredump, the default core dump handler in Red Hat Enterprise Linux, Fedora and other Linux distributions (CVE-2025-4598). CVE-2025-4598 does not affect default Ubuntu installations, unless it was manually set as the core dump handler by installing the systemd-coredump package.
The vulnerabilities have a CVSS score of 4.7 (MEDIUM), requiring local access and high attack complexity. The impact is restricted to the confidentiality of the memory space of invoked suid executables. The exploit demonstrated by Qualys can leak hashed user passwords and, as such, has limited real-world impact.
Canonical’s security team has released updates for the apport package for all affected Ubuntu releases. This package is installed by default in all Ubuntu Server images and Ubuntu Desktop installations. The update remediates CVE-2025-5054. Information on the affected versions can be found in the CVE page linked above. If you have any of these installed, our recommendation is to update as soon as possible.
Qualys discovered that, when analyzing application crashes, apport attempts to detect if the crashing process was running inside a container before performing consistency checks on it. This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.
In order to successfully carry out the exploit, an attacker must have permissions to create user, mount and pid namespaces with full capabilities.
Release | Package Name | Fixed Version |
---|---|---|
Trusty (14.04) | apport | Not affected |
Xenial (16.04) | apport | 2.20.1-0ubuntu2.30+esm5 |
Bionic (18.04) | apport | 2.20.9-0ubuntu7.29+esm1 |
Focal (20.04) | apport | 2.20.11-0ubuntu27.28 |
Jammy (22.04) | apport | 2.20.11-0ubuntu82.7 |
Noble (24.04) | apport | 2.28.1-0ubuntu3.6 |
Oracular (24.10) | apport | 2.30.0-0ubuntu4.3 |
Plucky (25.04) | apport | 2.32.0-0ubuntu5.1 |
Trusty (14.04) | systemd-coredump | Not present |
Xenial (16.04) | systemd-coredump | Not affected |
Bionic (18.04) | systemd-coredump | Not affected |
Focal (20.04) | systemd-coredump | Not released |
Jammy (22.04) | systemd-coredump | Not released |
Noble (24.04) | systemd-coredump | Not released |
Oracular (24.10) | systemd-coredump | Not released |
Plucky (25.04) | systemd-coredump | Not released |
On your system, run the following command and compare the listed version to the table above.
apt list --installed | grep "^\(apport\|systemd-coredump\)"
We recommend you upgrade all packages:
sudo apt update && sudo apt upgrade
If this is not possible, the affected component can be targeted. If using apport (the default):
# on focal and newer:
sudo apt update && sudo apt install --only-upgrade apport python3-apport
# on xenial and bionic:
sudo apt update && sudo apt install --only-upgrade apport python3-apport python-apport
If using system-coredump:
sudo apt update && sudo apt install --only-upgrade systemd-coredump
The unattended-upgrades feature is enabled by default for Ubuntu 16.04 LTS onwards. This service automatically installs security updates such as these in the background.
The strongest protection is to apply the security updates. The following mitigations have also been explored. If security updates cannot be applied, you should only apply the following steps as a last resort and revert the configuration once updates are applied.
The ability to obtain core dumps of suid executable processes can be completely disabled by setting the fs.suid_dumpable sysctl to 0. While this completely neutralizes the vulnerability, it also restricts the ability to investigate crashes for all processes which change privileges upon execution.
echo "fs.suid_dumpable = 0" | sudo tee /etc/sysctl.d/10-cve-2025-5054.conf
sudo sysctl -p /etc/sysctl.d/10-cve-2025-5054.conf
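You can confirm the setting took effect by reading the value back:
sysctl fs.suid_dumpable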
We would like to thank Qualys for their excellent reporting and for inviting the Ubuntu Security Team to coordinate this issue.
https://www.qualys.com/apport-systemd-info-disclosure-vulnerabilities-discovered-by-qualys-tru
https://www.qualys.com/2025/05/29/apport-coredump/apport-coredump.txt
https://ubuntu.com/security/notices/USN-7545-1