June 16, 2025

Ubuntu developers

Ubuntu Blog: Join Canonical at HPE Discover Las Vegas 2025

Canonical, the publisher of Ubuntu and trusted open source solutions provider, is proud to sponsor HPE Discover Las Vegas 2025. Join us from June 23–26 to explore how our collaboration with Hewlett Packard Enterprise (HPE) is transforming the future of enterprise IT, from virtualization and cloud infrastructure to AI/ML workloads.

Register for HPE Discover Las Vegas 2025

    What to expect at booth #2235

    Stop by our booth to engage with our team and get a closer look at our latest innovations. Here’s what’s in store:

    • Live Demos: Experience our solutions in action.
    • Expert Sessions: Learn directly from the teams shaping next-gen open source infrastructure.
    • 1:1 Consultations: Discuss your unique challenges with our team and discover how our joint technologies can help optimize your IT strategy.

    Speaking sessions at the Canonical booth

    Visit our booth and attend sessions led by industry experts covering a range of open source solutions. Plus, all attendees will receive a special gift!

    Transform your data center into a cloud-native powerhouse with OpenStack

    Discover how to gain control over your infrastructure, optimize costs, and automate operations while building a flexible, secure foundation that scales seamlessly with your business growth – whether integrated into your GreenLake multi-cloud strategy or deployed as a standalone private cloud.

    Accelerate your AI strategy with Canonical’s portfolio

    From Kubeflow for MLOps to Charmed Kubernetes for orchestration, see how open source AI infrastructure drives innovation while reducing complexity and costs.

    Ubuntu: a trusted foundation for HPE VM Essentials

    Learn how Ubuntu powers HPE VM Essentials to deliver the simplicity, security, and scalability your business demands – making enterprise virtualization accessible to organizations of every size.

    Driving innovation together: Canonical and HPE

    As a strategic partner of HPE and a member of the HPE Technology Partner Program, Canonical brings decades of open source innovations to enterprise-grade solutions. Together, we deliver a full-stack experience — with integrated, secure, and cost-effective platforms that scale with your business.

    Through our joint collaboration, organizations gain:

    • A single point of contact for the entire scope of their project.
    • The combined expertise from HPE and Canonical throughout the project lifecycle.

    Learn more about our offerings and how Canonical and HPE can propel your business forward.

    Learn how UIDAI and HPE worked with Canonical to transition from a monolithic code base to a microservice architecture.

    Want to see more?
    Stop by booth #2235 to speak to our experts.

    Are you interested in setting up a meeting with our team?
    Reach out to our Alliance Business Director:
    Valerie Noto – valerie.noto@canonical.com

    16 June, 2025 10:51PM

    The Fridge: Ubuntu Weekly Newsletter Issue 896

    Welcome to the Ubuntu Weekly Newsletter, Issue 896 for the week of June 8 – 14, 2025. The full version of this issue is available here.

    In this issue we cover:

    • Welcome New Members and Developers
    • Ubuntu Stats
    • Hot in Support
    • LXD: Weekly news #398
    • Other Meeting Reports
    • Upcoming Meetings and Events
    • LoCo Events
    • Canonical News
    • In the Blogosphere
    • Featured Audio and Video
    • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • Din Mušić – LXD
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


    16 June, 2025 10:26PM

    Purism PureOS

    PureOS Crimson Development Report: May 2025

    Welcome back! If you've been following our PureOS Crimson milestones, you'll see that the few remaining tasks relate to providing ready-to-flash images for the Librem 5.

    The post PureOS Crimson Development Report: May 2025 appeared first on Purism.

    16 June, 2025 04:26PM by Purism

    Purism Liberty Phone free from tariffs, as reported by Yahoo Finance

    In a recent interview republished by Yahoo Finance, Purism CEO Todd Weaver explained why the Liberty Phone, Purism’s secure, made-in-the-USA smartphone, is exempt from U.S. tariffs targeting smartphones manufactured in China—such as Apple’s iPhone.

    The post Purism Liberty Phone free from tariffs, as reported by Yahoo Finance appeared first on Purism.

    16 June, 2025 04:03PM by Purism

    Ubuntu developers

    Paul Tagliamonte: The Promised LAN

    The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There’s a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.

    I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.

    I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because it is important to them, not for ad revenue. I miss community operated spaces and having more than four websites that are all full of nothing except screenshots of each other.

    Every other page I find myself on now has an AI generated click-bait title, shared for rage-clicks all brought-to-you-by-our-sponsors–completely covered wall-to-wall with popup modals, telling me how much they respect my privacy, with the real content hidden at the bottom bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.

    This is wrong, and those who have seen what was know it.

    I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.

    What comes before part b?

    Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in a matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.

    In December of 2021, three of us got together and connected our houses together in what we now call The Promised LAN. The idea is simple—fill the hole we feel is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.

    Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure has driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.

    Applications of trusting trust

    The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.

    We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If they’re let into our friends' network, there is a level of trust that works transitively for us—I trust them to be on mine. This LAN is not for “security”, rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.

    Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.

    DIY

    We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.

    This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.

    Bring back what we’re missing.

    16 June, 2025 03:58PM

    GreenboneOS

    Threat Report May 2025: Hack, Rinse, Repeat

    May 2025 was a volcanic month for cybersecurity news, including several large breaches and new critical severity vulnerabilities. The Greenbone blog has already covered some major events, such as new actively exploited vulnerabilities in SAP Netweaver, Commvault Command Center and Ivanti EPMM. In total 4,014 new vulnerabilities were added to MITRE’s CVE (Common Vulnerabilities and […]

    16 June, 2025 12:55PM by Joseph Lee

    June 12, 2025

    ARMBIAN

    Armbian Development Highlights: June 2–9, 2025

    This week’s Armbian updates focused on kernel improvements, bootloader modernization, and several core enhancements to the build infrastructure. Key work spanned platforms like Rockchip, Sunxi, and Odroid, emphasizing kernel stability and broader compatibility across boards.

    Kernel Upgrades

    Several boards received kernel updates:

    Patches also landed to adapt Wi-Fi drivers to 6.15-era changes, including fixes for xradio and uwe5622 on Sunxi, contributed by The-going:

    Bootloader and U-Boot Work

    Improvements were made to bootloader support:

    Build System Changes

    Several build system enhancements landed this cycle:

    • PR #8276 by Ayush1325 prevented unintentional conversion of built-in kernel drivers to modules.
    • A new helper function chroot_sdcard_custom_with_apt_logic was introduced in PR #8245 by rpardini, streamlining SD card build logic.
    • PR #8279 by igorpecovnik removed obsolete Trixie package references.

    Networking Improvements

    PR #8259 and commit cdf71df by djurny expanded DHCP configuration in netplan to automatically include interfaces matching lan* and wan*, simplifying initial setup across devices.
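
    As a rough illustration of what such a match-based netplan configuration can look like (the file name and mapping names below are illustrative and not taken from the actual PR):

        # Illustrative only: enable DHCP on any interface named lan* or wan*
        cat <<'EOF' | sudo tee /etc/netplan/10-dhcp-lan-wan.yaml
        network:
          version: 2
          ethernets:
            lan-ports:
              match:
                name: "lan*"
              dhcp4: true
            wan-ports:
              match:
                name: "wan*"
              dhcp4: true
        EOF
        sudo netplan apply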

    Miscellaneous

    Contributors This Week


    The post Armbian Development Highlights: June 2–9, 2025 first appeared on Armbian.

    12 June, 2025 10:16PM by Michael Robinson

    ZEVENET

    How to Choose the Right Application Delivery Controller

    Choosing an Application Delivery Controller (ADC) is not just about ticking boxes. It’s about making sure your infrastructure is prepared to deliver fast, secure, and resilient applications—without overengineering or overspending. From availability to security and automation, the ADC sits at the core of how your service behaves under pressure.

    In this guide, we’ll walk through the key criteria that define a capable ADC, explaining not just what to look for, but why each factor matters.

    1. Scalability and Performance

    An ADC should grow with your business—not become a bottleneck.

    When evaluating scalability, don’t just ask “how much traffic can it handle?” Consider how performance evolves as demand increases. Can the ADC handle thousands of concurrent sessions, millions of requests per second, or high-throughput SSL traffic without introducing latency?

    You’ll also want to know:

    • Does it scale horizontally (via clustering) or vertically (with more CPU/RAM)?
    • Does it support autoscaling in virtual or cloud environments?
    • How well does it perform under Layer 4 (TCP/UDP) vs Layer 7 (HTTP/S) loads?

    An Application Delivery Controller that scales poorly can turn traffic spikes into outages. One that scales well can become the foundation for future growth.

    Look for: high throughput (Gbps), connection-per-second capacity, clustering support, autoscaling capabilities.

    2. Core Features and Application Intelligence

    Not all ADCs offer the same traffic management logic. Some only offer basic Layer 4 balancing (based on IP or port), while others support Layer 7 intelligence (routing based on URLs, cookies, headers…).

    Layer 7 capabilities are especially useful for:

    • Routing requests to specific servers based on user or device attributes.
    • Enforcing session persistence (so users don’t lose context when switching servers).
    • Offloading SSL tasks to improve backend performance.

    Application awareness also includes dynamic health checks to avoid sending users to unhealthy servers.

    Look for: Layer 4 and 7 balancing, persistence options, content-based routing, health checks, SSL offloading.
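
    To make the idea of dynamic health checks concrete, here is a minimal sketch of the kind of probe an ADC runs against each backend before keeping it in the pool; the backend address, path, and timeout are illustrative, not tied to any particular product:

        #!/bin/sh
        # Probe a backend's health endpoint; an HTTP error or a timeout marks it down.
        BACKEND="http://10.0.0.11:8080/healthz"   # illustrative backend and path
        if curl --silent --fail --max-time 2 "$BACKEND" > /dev/null; then
            echo "backend up - keep sending it traffic"
        else
            echo "backend down - remove from pool until it recovers"
        fi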

    3. Security Capabilities

    Modern ADCs are not just traffic routers—they’re the front line of your application security.

    A solid ADC should include:

    • Web Application Firewall (WAF): to block OWASP Top 10 attacks like SQLi or XSS.
    • Bot protection and DDoS mitigation: to identify abuse patterns and rate-limit abusive behavior.
    • API security: to protect RESTful services from misuse or overexposure.

    The key here is native integration. In many platforms, these are added as extra modules—sometimes from third-party vendors—making management more complex and pricing less predictable.

    Look for: integrated WAF, rule customization, L7 DDoS protection, bot mitigation, API traffic control.

    4. Deployment Flexibility

    Your ADC should adapt to your infrastructure—not the other way around.
    Whether you’re running on-premises, in the cloud, or in a hybrid setup, the ADC must support a variety of deployment methods:

    • Can it be installed on a virtual machine, bare metal, or cloud instance?
    • Does it integrate with platforms like VMware, Proxmox, OpenStack, or public clouds like AWS or Azure?

    Some vendors tie features or form factors to licensing restrictions—make sure the platform you choose works where you need it to.

    Look for: multiple deployment formats, public cloud compatibility, support for major hypervisors.

    5. Management, Automation and Observability

    Even the best features become frustrating if they’re hard to manage.

    Many ADCs suffer from steep learning curves, non-intuitive UIs, or missing automation options. Worse, some make their visual consoles or monitoring tools part of separate, paid modules—meaning essential functions like traffic monitoring or cert management come at an extra cost.
    You want:

    • A centralized and intuitive UI that gives quick access to logs, alerts, and configs.
    • Support for automation tools (REST APIs, Ansible, Terraform…) to simplify scaling and deployments.
    • Integration with observability tools like Grafana, Nagios or Splunk for seamless alerting.

    Look for: REST+JSON API, full-featured web console, real-time traffic visibility, external integrations.
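
    As an illustration of what driving an ADC through a REST+JSON API looks like in practice, here is a minimal sketch; the host name, endpoint path, token variable, and JSON fields are hypothetical, not the actual API of SKUDONET or any other vendor:

        # Hypothetical example: register a new backend server via the ADC's REST API.
        curl --silent --request POST \
          --header "Content-Type: application/json" \
          --header "Authorization: Bearer $ADC_API_TOKEN" \
          --data '{"name": "web-backend-3", "ip": "10.0.0.13", "port": 8080}' \
          "https://adc.example.com/api/v1/farms/web/backends"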

    6. Compliance and Encryption Standards

    For industries like finance, healthcare, or eCommerce, security compliance is not optional.
    ADC encryption handling must meet modern standards, including:

    • Automated SSL/TLS management: to avoid expired certs and manual errors.
    • Strong cipher support: to comply with strict encryption policies.
    • Rating compliance: like A+ in SSL Labs or alignment with PCI DSS requirements.

    The ADC should simplify compliance, not add to the operational burden.

    Look for: automatic certificate renewal (e.g., via Let’s Encrypt), strong encryption policies, compliance certification.
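
    For example, assuming the ADC (or the host in front of it) uses the Let’s Encrypt certbot client, automated renewal is typically just a scheduled command like the one below; the deploy hook and schedule are illustrative:

        # Renew any certificates that are close to expiry (a no-op otherwise).
        certbot renew --quiet
        # Typical automation: run the same command from cron or a systemd timer, e.g.
        # 0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"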

    7. Licensing, Support and Total Cost of Ownership

    This is where many ADC vendors fall short. What looks like a complete product often turns out to be a basic package—missing essential features unless you purchase additional modules or licenses.

    • Need to enable WAF? That’s an extra module.
    • Want clustering or high availability? Another cost.
    • Looking for a proper dashboard or detailed logs? Possibly a premium feature, too.

    This modular approach makes it hard to estimate the actual cost of the solution over time. It also complicates procurement and makes pricing comparisons difficult, as vendor pricing varies based on throughput, users, features, and support tiers.

    Support is another critical pain point.
    Many ADC vendors outsource their support to general helpdesk services operating through ticketing systems. The first-line staff often lack deep technical knowledge of the product, and response times can be slow—even for urgent issues. This doesn’t just delay resolution; it puts service continuity and customer trust at risk.

    Look for: all-inclusive pricing models, included updates, fast SLA response, expert technical support.

    Why SKUDONET ADC stands out

    SKUDONET Enterprise Edition is built for companies that want full control over application performance and security—without hidden costs or overcomplicated licensing.

    • Complete traffic management: Advanced L4-L7 load balancing, GSLB, persistence, content routing.
    • Integrated security: WAF with OWASP rules, bot filtering, SSL offloading, API protection.
    • Flexible deployment: Runs on virtual, cloud, or bare metal with support for BYOL (AWS, DigitalOcean) or SkudoCloud (SaaS).
    • Automatable and observable: REST+JSON API, Ansible support, integration with Grafana and Nagios.
    • No surprises: Transparent pricing, no feature lock-ins, no expensive modules to unlock basic capabilities.
    • Expert support: Direct assistance from the same engineers who build and maintain the product—ensuring deep technical understanding and fast, effective resolution.

    Ready to see it in action?

    Try SKUDONET Enterprise Edition free for 30 days and explore how a real ADC should work.

    12 June, 2025 06:04AM by Nieves Álvarez

    Ubuntu developers

    Podcast Ubuntu Portugal: E351 Beringela Nuclear

    Still wrestling with e-books and smart rings, Miguel and Diogo offer fine lessons on how to Reduce, Reuse and Recycle involving little birds and fried eggs; they badmouth Windows 11 and explain how to say goodbye to Windows 10 in the best possible way - and still find time, between very LoCo meetings, to set off Canonical's latest bombshell - which is giving out prizes! - but also involves leaving X.org behind by the side of the road. We then review the news on assorted summits, new dates for your calendar, and what to expect from the upcoming releases of Ubuntu Touch and Questing Cueca (that is how you say it, isn't it…?).

    You know the drill: listen, subscribe and share!

    Attribution and licenses

    This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: dry-joke laughter: patrons laughing.mp3 by pbrproductions – https://freesound.org/s/418831/ – License: Attribution 3.0; Trombone: wah wah sad trombone.wav by kirbydx – https://freesound.org/s/175409/ – License: Creative Commons 0; Who won?: 01 WINNER.mp3 by jordanielmills – https://freesound.org/s/167535/ – License: Creative Commons 0; This is an Ubuntu Alert: Breaking news intro music by humanoide9000 – https://freesound.org/s/760770/ – License: Attribution 4.0. This episode and the artwork used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator and comic book author. You can get to know Shizamura better on Ciberlândia and on her website.

    12 June, 2025 12:00AM

    June 11, 2025

    Salih Emin: Apple Unveils “Containerization” for macOS: A New Era for Linux Containers on macOS

    Apple has introduced a new open-source Swift framework named Containerization, designed to fundamentally reshape how Linux containers are run on macOS. In a detailed presentation, Apple revealed a new architecture that prioritizes security, privacy, and performance, moving away from traditional methods to offer a more integrated and efficient experience for developers.

    The new framework aims to provide each container with the same level of robust isolation previously reserved for large, monolithic virtual machines, but with the speed and efficiency of a lightweight solution.

    Here is the video:

    The Old Way: A Single, Heavy Virtual Machine

    Until now, running Linux containers on macOS has typically meant booting one large, shared virtual machine and running every container inside it. That approach came with several drawbacks:

    • Resource Inefficiency: The large VM had resources like CPU and memory allocated to it upfront, regardless of how many containers were running.
    • Security & Privacy Concerns: Sharing files from the Mac with a container was a two-step process; files were first shared with the entire VM, and then to the specific container, potentially exposing data more broadly than intended.
    • Maintenance Overhead: The large VM contained a full Linux distribution with core utilities, dynamic libraries, and a libc implementation, increasing the attack surface and requiring constant updates.

    A New Vision: Security, Privacy, and Performance

    The Containerization framework was built with three core goals to address these challenges:

    1. Security: Provide every single container with its own isolated virtual machine. This dramatically reduces the attack surface by eliminating shared kernels and system utilities between containers.
    2. Privacy: Enable file and directory sharing on a strict, per-container basis. Only the container that requests access to a directory will receive it.
    3. Performance: Achieve sub-second start times for containers while respecting the user’s system resources. If no containers are running, no resources are allocated.

    Under the Hood: How Containerization Works

    Containerization is more than just an API; it’s a complete rethinking of the container runtime on macOS.

    Lightweight, Per-Container Virtual Machines

    The most significant architectural shift is that each container runs inside its own dedicated, lightweight virtual machine. This approach provides profound benefits:

    • Strong Isolation: Each container is sandboxed within its own VM, preventing processes in one container from viewing or interfering with the host or other containers.
    • Dedicated Networking: Every container gets its own dedicated IP address, which improves network performance and eliminates the cumbersome need for port mapping.
    • Efficient Filesystems: Containerization exposes the image’s filesystem to the Linux VM as a block device formatted with EXT4. Apple has even developed a Swift package to manage the creation and population of these EXT4 filesystems directly from macOS.

    vminitd: The Swift-Powered Heart of the Container

    Once a VM starts, a minimal initial process called vminitd takes over. This is not a standard Linux init system; it’s a custom-built solution with remarkable characteristics:

    • Built in Swift: vminitd is written entirely in Swift and runs as the first process inside the VM.
    • Extremely Minimal Environment: To maximize security, the filesystem vminitd runs in is barebones. It contains no core utilities (like ls, cp), no dynamic libraries, and no libc implementation.
    • Statically Compiled: To run in such a constrained environment, vminitd is cross-compiled from a Mac into a single, static Linux executable. This is achieved using Swift’s Static Linux SDK and musl, a libc implementation optimized for static linking.

    vminitd is responsible for setting up the entire container environment, including assigning IP addresses, mounting the container’s filesystem, and supervising all processes that run within the container.
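
    For readers curious what that build step looks like, this is a rough sketch of the generic Swift Static Linux SDK workflow on a Mac; the SDK bundle URL and target identifier depend on the installed Swift toolchain and are not taken from Apple’s Containerization project:

        # Install a Static Linux SDK bundle, then cross-compile a static Linux binary.
        swift sdk install <static-linux-sdk-bundle-url>
        swift build -c release --swift-sdk x86_64-swift-linux-musl
        # The resulting executable under .build/ has no dynamic library dependencies.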

    Getting Started: The container Command-Line Tool

    To showcase the power of the framework, Apple has also released an open-source command-line tool simply called container. This tool allows developers to immediately begin working with Linux containers in this new, secure environment.

    • Pulling an image:
    container image pull alpine:latest
    • Running an interactive shell:
    container run -ti alpine:latest sh

    Within milliseconds, the user is dropped into a shell running inside a fully isolated Linux environment. Running the ps aux command from within the container reveals only the shell process and the ps process itself, a clear testament to the powerful process isolation at work.


    An Open Invitation to the Community

    Both the Containerization framework and the container tool are available on GitHub. Apple is inviting developers to explore the source code, integrate the framework into their own projects, and contribute to its future by submitting issues and pull requests.

    This move signals a strong commitment from Apple to making macOS a first-class platform for modern, Linux container-based development, offering a solution that is uniquely secure, private, and performant.

    Source:

    The post Apple Unveils “Containerization” for macOS: A New Era for Linux Containers on macOS appeared first on Utappia.

    11 June, 2025 10:06PM

    Scarlett Gately Moore: KDE Application snaps 25.04.2 released!

    KDE Mascot

    Release notes: https://kde.org/announcements/gear/25.04.2/

    Now available in the snap store!

    Along with that, I have fixed some outstanding bugs:

    Ark: can now open/save files on removable media

    Kasts: Once again has sound

    WIP: Updating Qt6 to 6.9 and frameworks to 6.14

    Enjoy everyone!

    Unlike our software, life is not free. Please consider a donation, thanks!

    11 June, 2025 01:14PM

    Ubuntu Blog: Canonical delivers Kubernetes platform and open-source security with NVIDIA Enterprise AI Factory validated design

    Reference architectures speed up time to market for agentic AI projects

    To ease the path of enterprise AI adoption and accelerate the conversion of AI insights into business value, NVIDIA recently published the NVIDIA Enterprise AI Factory validated design, an ecosystem of solutions that integrates seamlessly with enterprise systems, data sources, and security infrastructure.  The NVIDIA templates for hardware and software design are tailored for modern AI projects, including Physical AI & HPC with a focus on agentic AI workloads.     

    Canonical is proud to be included in the NVIDIA Enterprise AI Factory validated design. Canonical Kubernetes orchestration supports the process of efficiently building, deploying, and managing a diverse and evolving suite of AI agents on high-performance infrastructure.  The Ubuntu operating system is at the heart of NVIDIA Certified Systems across OEM partnerships like Dell. Canonical also collaborates with NVIDIA to ensure the stability and security of open-source software across AI Factory by securing agentic AI dependencies within NVIDIA’s artifact repository. 

    Canonical’s focus on open source, model-driven operations and ease of use offers enterprises flexible options for building their AI Factory on NVIDIA-accelerated infrastructure.

    Canonical Kubernetes

    Canonical Kubernetes is a securely designed and supported foundational platform. It unifies the management of a complex stack – including NVIDIA AI Enterprise, storage, networking, and observability tools – onto a single platform. 

    Within the NVIDIA Enterprise AI Factory validated design, Kubernetes is used to independently develop, update, and scale microservice-based agents, coupled with automated CI/CD pipelines. Kubernetes also handles the significant and often burstable compute demands for training AI models and scales inference services for deployed agents based on real-time needs. 

    Based on upstream Kubernetes, Canonical Kubernetes is integrated with the NVIDIA GPU and Networking Operators to leverage NVIDIA hardware acceleration and supports the deployment of NVIDIA AI Enterprise, enabling AI workloads with NVIDIA NIM and accelerated libraries. 
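
    As a rough sketch of what GPU Operator integration involves on a Kubernetes cluster (this mirrors the generic upstream Helm workflow rather than Canonical-specific tooling, and the release name and namespace are illustrative):

        # Add NVIDIA's Helm repository and deploy the GPU Operator.
        helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
        helm repo update
        helm install gpu-operator nvidia/gpu-operator \
          --namespace gpu-operator --create-namespace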

    Canonical Kubernetes provides full-lifecycle automation and has long-term support with recently announced 12-year security maintenance.

    Security updates for the OS and popular AI toolchains and libraries

    Ubuntu is the most widely used operating system for AI workloads. Choosing Ubuntu as the base OS for NVIDIA AI Factory gives organizations a trusted repository for all their open source, not just the OS. With Ubuntu Pro, customers get up to 12 years of security maintenance for thousands of open source packages including the most widely used libraries and toolchains, like Python, R and others. Organizations can complement that with Canonical’s Container Build Service to get custom containers built to spec, and security maintenance for their entire open source dependency tree. 

    To learn more about what the NVIDIA Enterprise AI Factory validated design could do for you, get in touch with our team – we’d love to hear about your project.

    Visit us at our booth E03 at NVIDIA GTC Paris on June 11-12 for an in-person conversation about what NVIDIA Enterprise AI Factory validated designs can do for you.

    Further reading

    11 June, 2025 11:04AM

    Ubuntu Blog: Canonical Kubernetes meets NVIDIA DOCA Platform Framework (DPF): building the future of DPU-driven infrastructure

    The combined solutions simplify infrastructure operations and accelerate time-to-value for AI, telecom, and enterprise computing workloads.

    Accelerate deployment and operations of BlueField DPUs with Canonical Kubernetes

    At GTC Paris today, Canonical announced support for the NVIDIA DOCA Platform Framework (DPF) with Canonical Kubernetes LTS. This milestone strengthens the strategic collaboration between the two companies and brings the benefits of NVIDIA BlueField DPU accelerations to cloud-native environments with end-to-end automation, open-source flexibility, and long-term support.

    DPF is NVIDIA’s software framework for managing and orchestrating NVIDIA BlueField DPUs at scale. NVIDIA BlueField enables advanced offloading of infrastructure services (such as networking, storage, and security) directly onto the DPU, freeing up host CPU resources and enabling secure, high-performance, zero-trust architectures. With Canonical Kubernetes now officially supporting DPF 25.1, developers and infrastructure teams can easily integrate these capabilities into their AI, telecom and enterprise computing workloads using Canonical’s proven tooling and automation stack.

    A joint solution built for scale and simplicity

    This integration enables organizations to deploy DPU-accelerated infrastructure across telco, enterprise, and edge use cases, with key benefits including:

    • Simplified provisioning and lifecycle management of BlueField DPUs using Canonical infrastructure products like MAAS, Juju, and Snap-based deployment flows on Ubuntu 24.04 LTS and Kubernetes 1.32.2.
    • Declarative deployment of DPU firmware and services using Kubernetes-native interfaces such as Helm charts and custom resource definitions (CRDs) – see the sketch after this list.
    • Out-of-the-box integration with NVIDIA Network Operator, including support for SR-IOV, Multus, and accelerated CNIs like OVN-Kubernetes.
    • Production-grade support with Canonical’s long term support, offering up to 12 years of security updates and enterprise SLAs.
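
    A minimal sketch of that declarative flow is shown below; the chart reference, namespace and manifest names are hypothetical placeholders, not the actual artifact names from NVIDIA’s DPF documentation:

        # Hypothetical names throughout: install the DPF operator, then describe
        # DPU services declaratively and apply them like any other resource.
        helm install dpf-operator <nvidia-dpf-chart> \
          --namespace dpf-system --create-namespace
        kubectl apply -f my-dpu-service.yaml
        kubectl get crds | grep -i dpu   # see which DPU-related CRDs were installed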

    “This milestone marks a significant step forward in our collaboration with NVIDIA,” said Cedric Gegout, VP of Product at Canonical. “Together, we’re enabling a new class of infrastructure that combines the power of NVIDIA BlueField DPUs with the flexibility and automation of Canonical Kubernetes – laying the groundwork for secure, high-performance environments across AI, telecom, and enterprise use cases.”

    A platform for developers, operators, and innovators

    When combined, the NVIDIA DOCA software framework and Canonical Kubernetes empower cloud architects and platform engineers to design scalable, secure infrastructure while minimizing operational complexity. For DevOps and SRE teams, the integration of BlueField-accelerated  services into CI/CD pipelines becomes streamlined, enabling consistent, automated delivery of infrastructure components. Application developers gain access to offloaded services through the DOCA SDK and APIs, accelerating innovation without compromising performance. Meanwhile, IT decision makers benefit from enhanced efficiency, workload isolation, and built-in compliance as they modernize their infrastructure.

    Open source and cloud-native communities now have a powerful foundation to build on with Canonical Kubernetes and NVIDIA DPF. This integration enables contributors, researchers, and ecosystem partners to adopt and extend a truly open DPU architecture – one where offloaded networking, security, and observability services run independently of the host CPU. By leveraging DOCA’s modular approach and Canonical’s fully supported Kubernetes stack, developers can co-create a rich ecosystem of BlueField-accelerated functions that complement and enhance the performance, scalability, and resilience of applications running on the main CPU.

    Get started

    Canonical is enabling the next generation of composable, DPU-accelerated cloud infrastructure with NVIDIA. Canonical Kubernetes offers a robust, enterprise-grade platform to install, operate, and scale NVIDIA BlueField-accelerated services using NVIDIA DPF with full automation and long-term support. To get started, visit ubuntu.com/kubernetes for product details and support options, and explore the DPF documentation on GitHub for deployment guides and examples.

    If you have any questions about running DPF with Canonical Kubernetes, please stop by our booth #E03 at NVIDIA GTC Paris or contact us.

    11 June, 2025 11:04AM

    June 10, 2025

    Purism PureOS

    Hidden Operating Systems in Chips vs. Secure, Auditable OSes: A Cybersecurity Comparison

    The Threat Beneath: The Invisible Operating Systems Inside Your Devices

    Beneath the surface of nearly every modern computer lies a hidden threat—one most users never see or control. Today’s Intel and AMD processors ship with embedded subsystems like the Intel Management Engine (ME) and AMD’s Platform Security Processor (PSP)—proprietary, low-level firmware environments that operate outside and below your main operating system.

    The post Hidden Operating Systems in Chips vs. Secure, Auditable OSes: A Cybersecurity Comparison appeared first on Purism.

    10 June, 2025 03:18PM by Rex M. Lee

    Ubuntu developers

    Ubuntu Blog: Apache Spark security: start with a solid foundation

    Everyone agrees security matters – yet when it comes to big data analytics with Apache Spark, it’s not just another checkbox. Spark’s open source Java architecture introduces particular security concerns that, if neglected, can quietly expose sensitive information and disrupt vital functions. Unlike standard software, Spark’s design permits user-provided code to execute with extensive control over cluster resources, so strong security measures are needed to prevent unauthorized access and data leaks.

    Securing Spark is key to maintaining enterprise business continuity, safeguarding data in memory as well as at rest, and defending against emerging vulnerabilities unique to distributed, in-memory processing platforms. Unfortunately, securing Spark is far from a trivial task; in this blog we’ll take a closer look at what makes it so challenging, and the steps that enterprises can take to protect their big data platforms.

    Why enterprises struggle with Spark security

    Closing vulnerabilities in Java applications is very hard. Closing CVEs is fundamental for any software because it is one of the best ways to reduce the risk of being impacted by a cyber attack through known vulnerabilities. However, closing CVEs in Java applications like Spark is uniquely challenging for a number of reasons.

    The first issue is the complexity in managing dependencies: a typical Java app may include more than 100 third-party libraries, each with different versions and dependencies. When a vulnerability is found in one library, updating or downgrading it can break compatibility with other dependencies that rely on specific versions, making remediation complex and risky. This tangled nest of dependencies can make some vulnerabilities practically impossible to fix without extensive testing and refactoring.

    Beyond this, Java is verbose and heavily used in corporate applications, typically in large and complex monolithic architectures. As a result, a single vulnerability often affects millions of Java applications around the world, creating a huge attack surface. The ease of exploitation and reach of these vulnerabilities make them hard to eradicate entirely when affected versions are deeply embedded in many systems. Consequently, developers face a massive volume of CVE reports that is difficult to prioritize, which delays remediation.

    Research shows that delayed patching is a major cause of security breaches in enterprise environments: the IBM 2024 Cost of a Data Breach report shows that known unpatched vulnerabilities caused $4.33M in damage, and the Canonical and IDC 2025 state of software supply chains report indicates that 60% of organizations have only basic or no security controls to safeguard their AI/ML systems. These challenges create significant risks. Delays in applying security patches leave systems exposed to known vulnerabilities, compatibility issues can force organizations to choose between security and stability, and widespread vulnerabilities in widely used Java components can compromise millions of applications simultaneously, forcing disruptive emergency fixes.

    These Java-related challenges have a deep impact on Apache Spark. In the first place, Apache Spark has thousands of dependencies, so it becomes difficult to fix a CVE (whether by patching or by bumping the version) because the fix can easily break compatibility. This huge number of dependencies also affects the number and severity of vulnerabilities. In fact, Spark has experienced several critical and high-severity vulnerabilities over the years, which are traceable to its Java origins. In 2022, developers discovered a command injection vulnerability in the Spark UI (CVE-2022-33891) that had a 94.2% exploitation likelihood and sat in the top 1% of known exploitable vulnerabilities in recent times, and in 2024 alone two new critical vulnerabilities came out, clearly showing the threat posed by slow patching adoption in Java. These issues are not only a security concern for Spark clusters; they also force companies to make hard choices between applying the latest security updates and prioritizing the stability of their infrastructure.

    Our effort on ensuring Spark’s security posture

    At Canonical, we believe that robust security should be an integral part of your data analytics platform, not a secondary element – and with Charmed Spark, we aim to address the traditional complexity of securing enterprise Spark deployments. 

    We maintain a steady release pace of roughly one new version per month, while simultaneously supporting two major.minor version tracks, which as of today are 3.4.x and 3.5.x. This dual-track support ensures stability for existing users while allowing for ongoing feature development and security improvements. In addition, our proactive vulnerability management has led us, in the past year, to close 10 critical CVEs, resolve 84 high-severity vulnerabilities, and address 161 medium-severity vulnerabilities in Spark and its dependencies, extending this focus to related projects such as Hadoop for its dependencies.

    By investing in automated, self-service security testing, we accelerate the detection and fixing of vulnerabilities, minimizing downtime and manual intervention. Our comprehensive approach to security includes static code analysis, continuous vulnerability scans, rigorous management processes, and detailed cryptographic documentation, as well as hardening guides to help you deploy Spark with security in mind from day one.
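
    As a concrete illustration of the kind of baseline hardening such guides cover, the sketch below enables authentication and wire encryption for a Spark job using standard Spark configuration properties; the secret handling and job name are illustrative, this is general hygiene rather than a fix for any specific CVE, and it is not taken from Canonical’s own guides:

        # Standard Spark properties; values and the job script are illustrative.
        spark-submit \
          --conf spark.authenticate=true \
          --conf spark.authenticate.secret="$SPARK_SHARED_SECRET" \
          --conf spark.network.crypto.enabled=true \
          --conf spark.ssl.enabled=true \
          --conf spark.ui.enabled=false \
          my_job.py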

    Charmed Spark is a platform where security is a central element. It reduces exposure to breaches from known vulnerabilities through timely updates and fixes, and gives users access to tools and documentation for installing and operating Spark in a securely designed manner. In an environment where Java applications are a frequent target of attacks and dependency complexity can slow the deployment of patches, Canonical’s approach maintains a high level of protection, so users can analyze and use data without undue concern about security weaknesses. This ultimately enables enterprises to focus on their core business applications and deliver value to their customers without having to worry about external threats.

    Canonical for your big data security

    While the complexity of Java applications and their extensive dependency ecosystems present ongoing challenges, Charmed Apache Spark gives you a securely designed open source analytics engine without the level of vulnerability challenges that typically come with such a large Java-based project. Moving forward, these foundational security practices will continue to play a vital role in protecting the Spark ecosystem and supporting the broader open source community.
    To learn more about securing your Spark operations, watch our webinar.


    10 June, 2025 09:28AM

    BunsenLabs Linux

    Some new packages in the Carbon repository

    There are some new packages available in the BunsenLabs Carbon apt repository:

    labbe-icons-bark
    labbe-icons-sage
    These will be used by default in Carbon.

    Also available are:
    labbe-icons-grey
    labbe-icons-oomox
    labbe-icons-telinkrin

    And there's a new wallpaper setter:
    xwwall
    and a gtk GUI builder that xwwall uses:
    gtk3dialog
    Both of those will be used in Carbon.

    All of these packages come thanks to the work of @micko01.

    10 June, 2025 12:00AM

    June 09, 2025

    Ubuntu developers

    The Fridge: Ubuntu Weekly Newsletter Issue 895

    Welcome to the Ubuntu Weekly Newsletter, Issue 895 for the week of June 1 – 7, 2025. The full version of this issue is available here.

    In this issue we cover:

    • Call for nominations: DMB appointment process
    • Ubuntu Stats
    • Hot in Support
    • LXD: Weekly news #397
    • Other Meeting Reports
    • Upcoming Meetings and Events
    • LoCo Events
    • Ubuntu Server Gazette – Issue 4: Stable Release Updates – The Misunderstood process
    • Lubuntu Council Elections 2025
    • Kernel HWE update for the upcoming Noble 24.04.3 point release
    • Phasing out Bazaar code hosting
    • Canonical News
    • In the Press
    • In the Blogosphere
    • Featured Audio and Video
    • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • Din Mušić – LXD
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


    09 June, 2025 10:46PM

    Simon Quigley: Thanks, Mailbox!


    A gentleman by the name of Arif Ali reached out to me on LinkedIn. I won’t share the actual text of the message, but I’ll paraphrase:
    “I hope everything is going well with you. I’m applying to be an Ubuntu ‘Per Package Uploader’ for the SOS package, and I was wondering if you could endorse my application.”

    Arif, thank you! I have always appreciated our chats, and I truly believe you’re doing great work. I don’t want to interfere with anything by jumping on the wiki, but just know you have my full backing.

    “So, who actually lets Arif upload new versions of SOS to Ubuntu, and what is it?”
    Great question!

    Firstly, I realized that I needed some more info on what SOS is, so I can explain it to you all. On a quick search, this was the first result.

    Okay, so genuine question…

    Why does the first DuckDuckGo result for “sosreport” point to an article for a release of Red Hat Enterprise Linux that is two versions old? In other words, hey DuckDuckGo, your grass is starting to get long. Or maybe Red Hat? Can’t tell, I give you both the benefit of the doubt, in good faith.

    So, I clarified the search and found this. Canonical, you’ve done a great job. Red Hat, you could work on your SEO so I can actually find the RHEL 10 docs quicker, but hey… B+ for effort. ;)

    Anyway, let me tell you about Arif. Just from my own experiences.

    He’s incredible. He shows love to others, and whenever I would sponsor one of his packages during my time in Ubuntu, he was always incredibly receptive to feedback. I really appreciate the way he reached out to me, as well. That was really kind, and to be honest, I needed it.

    As for character, he has my +1. In terms of the members of the DMB (aside from one person who I will not mention by name, who has caused me immense trouble elsewhere), here’s what I’d tell you if you asked me privately…

    “It’s just PPU. Arif works on SOS as part of his job. Please, do still grill him. The test, and ensuring people know that they actually need to pass a test to get permissions, that’s pretty important.”

    That being said, I think he deserves it.

    Good luck, Arif. I wish you well in your meeting. I genuinely hope this helps. :)

    And to my friends in Ubuntu, I miss you. Please reach out. I’d be happy to write you a public letter, too. Only if you want. :)

    09 June, 2025 05:20PM

    Simon Quigley: People in the Arena

    Theodore Roosevelt is someone I have admired for a long time. I especially appreciate what has been coined the Man in the Arena speech.

    A specific excerpt comes to mind after reading world news over the last twelve hours:

    “It is well if a large proportion of the leaders in any republic, in any democracy, are, as a matter of course, drawn from the classes represented in this audience to-day; but only provided that those classes possess the gifts of sympathy with plain people and of devotion to great ideals. You and those like you have received special advantages; you have all of you had the opportunity for mental training; many of you have had leisure; most of you have had a chance for enjoyment of life far greater than comes to the majority of your fellows. To you and your kind much has been given, and from you much should be expected. Yet there are certain failings against which it is especially incumbent that both men of trained and cultivated intellect, and men of inherited wealth and position should especially guard themselves, because to these failings they are especially liable; and if yielded to, their- your- chances of useful service are at an end. Let the man of learning, the man of lettered leisure, beware of that queer and cheap temptation to pose to himself and to others as a cynic, as the man who has outgrown emotions and beliefs, the man to whom good and evil are as one. The poorest way to face life is to face it with a sneer. There are many men who feel a kind of twisted pride in cynicism; there are many who confine themselves to criticism of the way others do what they themselves dare not even attempt. There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement. A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life’s realities — all these are marks, not as the possessor would fain to think, of superiority but of weakness. They mark the men unfit to bear their part manfully in the stern strife of living, who seek, in the affectation of contempt for the achievements of others, to hide from others and from themselves their own weakness. The rôle is easy; there is none easier, save only the rôle of the man who sneers alike at both criticism and performance.”

    The riots in LA are seriously concerning to me. If something doesn’t happen soon, this is going to get out of control.

    If you are participating in these events, or know someone who is, tell them to calm down. Physical violence is never the answer, no matter your political party.

    De-escalate immediately.

    Be well. Show love to one another!

    09 June, 2025 05:58AM

    June 08, 2025

    Colin Watson: Free software activity in May 2025

    My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release.

    You can also support my work directly via Liberapay or GitHub Sponsors.

    OpenSSH

    After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.)

    I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling.

    I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie.

    I backported openssh 1:10.0p1-5 to bookworm-backports.

    I issued bookworm and bullseye updates for CVE-2025-32728.

    groff

    I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once.

    debmirror

    I added a simple autopkgtest.

    Python team

    I upgraded these packages to new upstream versions:

    • automat
    • celery
    • flufl.i18n
    • flufl.lock
    • frozenlist
    • python-charset-normalizer
    • python-evalidate (including pointing out an upstream release handling issue)
    • python-pythonjsonlogger
    • python-setproctitle
    • python-telethon
    • python-typing-inspection
    • python-webargs
    • pyzmq
    • trove-classifiers (including a small upstream cleanup)
    • uncertainties
    • zope.testrunner

    In bookworm-backports, I updated these packages:

    • python-django to 3:4.2.21-1 (issuing BSA-124)
    • python-django-pgtrigger to 4.14.0-1

    I fixed problems building these packages reproducibly:

    I backported fixes for some security vulnerabilities to unstable (since we’re in freeze now so it’s not always appropriate to upgrade to new upstream versions):

    I fixed various other build/test failures:

    I added non-superficial autopkgtests to these packages:

    I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger.

    I ported storm to Python 3.14.

    Science team

    I fixed a build failure in apertium-oci-fra.

    08 June, 2025 12:20AM

    June 07, 2025

    Grml developers

    Evgeni Golov: show your desk - 2025 edition

    Back in 2020 I posted about my desk setup at home.

    Recently someone in our #remotees channel at work asked about WFH setups and given quite a few things changed in mine, I thought it's time to post an update.

    But first, a picture: a standing desk with a monitor, laptop, etc. (Yes, it's cleaner than usual, how could you tell?!)

    desk

It's still the same Flexispot E5B, no change here. After 7 years (I bought mine in 2018) it still works fine. If I had to buy a new one, I'd probably get a four-legged one for more stability (they have become quite affordable now), but there is no immediate need for that.

    chair

    It's still the IKEA Volmar. Again, no complaints here.

    hardware

    Now here we finally have some updates!

    laptop

    A Lenovo ThinkPad X1 Carbon Gen 12, Intel Core Ultra 7 165U, 32GB RAM, running Fedora (42 at the moment).

    It's connected to a Lenovo ThinkPad Thunderbolt 4 Dock. It just works™.

    workstation

    It's still the P410, but mostly unused these days.

    monitor

    An AOC U2790PQU 27" 4K. I'm running it at 150% scaling, which works quite decently these days (no comparison to when I got it).

    speakers

    As the new monitor didn't want to take the old Dell soundbar, I have upgraded to a pair of Alesis M1Active 330 USB.

    They sound good and were not too expensive.

    I had to fix the volume control after some time though.

    webcam

    It's still the Logitech C920 Pro.

    microphone

The built-in mic of the C920 is really fine, but for conference-grade talks (and some podcasts 😅), I decided to get something better.

    I got a FIFINE K669B, with a nice arm.

    It's not a Shure, for sure, but does the job well and Christian was quite satisfied with the results when we recorded the Debian and Foreman specials of Focus on Linux.

    keyboard

    It's still the ThinkPad Compact USB Keyboard with TrackPoint.

    I had to print a few fixes and replacement parts for it, but otherwise it's doing great.

    Seems Lenovo stopped making those, so I really shouldn't break it any further.

    mouse

    Logitech MX Master 3S. The surface of the old MX Master 2 got very sticky at some point and it had to be replaced.

    other

    notepad

    I'm still terrible at remembering things, so I still write them down in an A5 notepad.

    whiteboard

    I've also added a (small) whiteboard on the wall right of the desk, mostly used for long term todo lists.

    coaster

    Turns out Xeon-based coasters are super stable, so it lives on!

    yubikey

    Yepp, still a thing. Still USB-A because... reasons.

    headphones

    Still the Bose QC25, by now on the third set of ear cushions, but otherwise working great and the odd 15€ cushion replacement does not justify buying anything newer (which would have the same problem after some time, I guess).

    I did add a cheap (~10€) Bluetooth-to-Headphonejack dongle, so I can use them with my phone too (shakes fist at modern phones).

    And I do use the headphones more in meetings, as the Alesis speakers fill the room more with sound and thus sometimes produce a bit of an echo.

    charger

    The Bose need AAA batteries, and so do some other gadgets in the house, so there is a technoline BC 700 charger for AA and AAA on my desk these days.

    light

    Yepp, I've added an IKEA Tertial and an ALDI "face" light. No, I don't use them much.

    KVM switch

    I've "built" a KVM switch out of an USB switch, but given I don't use the workstation that often these days, the switch is also mostly unused.

    07 June, 2025 03:17PM

    June 06, 2025

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Salih Emin: uCareSystem 25.05.06: Because Even Your WSL Deserves a Spa Day!

    Hey everyone,

    Get ready to dust off those virtual cobwebs and crack open a cold one (or a digital one, if you’re in a VM) because uCareSystem 25.05.06 has officially landed! And let me tell you, this release is so good, it’s practically a love letter to your Linux system – especially if that system happens to be chilling out in Windows Subsystem for Linux (WSL).

    That’s right, folks, the big news is out: WSL support for uCareSystem has finally landed! We know you’ve been asking, we’ve heard your pleas, and we’ve stopped pretending we didn’t see you waving those “Free WSL” signs.

    Now, your WSL instances can enjoy the same tender loving care that uCareSystem provides for your “bare metal” Ubuntu/Debian Linux setups. No more feeling left out, little WSLs! You can now join the cool kids at the digital spa.

    Here is a video of it:

    But wait, there’s more! (Isn’t there always?) We didn’t just stop at making friends with Windows. We also tackled some pesky gremlins that have been lurking in the shadows:

    • Apt-key dependency? Gone! We told it to pack its bags and hit the road. Less dependency drama, more system harmony.
    • Remember that time your internet check was slower than a sloth on a caffeine crash? We squashed that “Bug latency curl in internet check phase” bug. Your internet checks will now be snappier than a startled squirrel.
    • We fixed that “Wrong kernel cleanup” issue. Your kernels are now safe from accidental digital haircuts.
• And for those of you who hit snags with Snap in WSL, kernel cleanup (again, because we’re thorough!), and other bits, we’ve applied some much-needed digital duct tape and elbow grease to fix those and more.
    • We even gave our code a good scrub, fixing those annoying shellcheck warnings. Because nobody likes a messy codebase, especially not us!
    • Oh, and the -k option? Yeah, that’s gone too. We decided it was useless so we had to retire it to a nice, quiet digital farm upstate.
    • Finally, for all you newcomers and memory-challenged veterans, we’ve added install and uninstall instructions to the README. Because sometimes, even we forget how to put things together after we’ve taken them apart.

    So, what are you waiting for? Head over to utappia.org (or wherever you get your uCareSystem goodness) and give your system the pampering it deserves with uCareSystem 25.05.06. Your WSL instance will thank you, probably with a digital high-five.

    Download the latest release and give it a spin. As always, feedback is welcome.
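For newcomers, the usual installation route on Ubuntu and Debian derivatives is via the project's PPA. The PPA and package names below are assumptions based on earlier uCareSystem releases; the README and utappia.org remain the authoritative instructions.

    # Assumed install path (verify against the official README):
    sudo add-apt-repository ppa:utappia/stable
    sudo apt update
    sudo apt install ucaresystem-core
    # Run the full maintenance cycle (works inside WSL as of this release):
    sudo ucaresystem-core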

    Acknowledgements

    Thanks to the following users for their support:

    • P. Loughman – Thanks for your continued support
    • D. Emge – Thanks for your continued support
    • W. Schreinemachers – Thanks for your continued support
    • W. Schwartz
    • D. e Swarthout
    • D. Luchini
    • M. Stanley
    • N. Evangelista

    Your involvement helps keep this project alive, evolving, and aligned with real-world needs. Thank you.

    Happy maintaining!

Where can I download uCareSystem?

    As always, I want to express my gratitude for your support over the past 15 years. I have received countless messages from inside and outside Greece about how useful they found the application. I hope you find the new version useful as well.

    If you’ve found uCareSystem to be valuable and it has saved you time, consider showing your appreciation with a donation. You can contribute via PayPal or Debit/Credit Card by clicking on the banner.

• Pay what you want: click the donate button and enter the amount you want to donate. You will then be taken to the page with the latest version, where you can download the installer.
• Maybe next time: if you don’t want to donate this time, just click the download icon to be taken to the page with the latest version and download the installer.

    Once installed, the updates for new versions will be installed along with your regular system updates.

    The post uCareSystem 25.05.06: Because Even Your WSL Deserves a Spa Day! appeared first on Utappia.

    06 June, 2025 10:59PM

    Salih Emin: Hello world!

    Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

    06 June, 2025 08:22PM

    Ubuntu Blog: What is CMMC compliance?

    The Cybersecurity Maturity Model Certification, or CMMC for short, is a security framework for protecting Controlled Unclassified Information (CUI) in non-federal systems and organizations. The CMMC compliance requirements map to the set of controls laid out in the NIST SP 800-171 Rev 2 and NIST SP 800-172 families.

    CMMC version 2.0 came into effect on December 26, 2023, and is designed to ensure adherence to rigorous cybersecurity policies and practices within the public sector and amongst wider industry partners.

    Whilst many of the controls relate to how organizations conduct their IT operations, there are several specific technology requirements, and Ubuntu Pro includes features which meet these requirements head on.

    What are the CMMC maturity levels?

    CMMC has 3 levels, designed to meet increasing levels of security scrutiny:

    1. Safeguarding Federal Contract Information (FCI), with an annual self-assessment.
    2. Protection of Controlled Unclassified Information (CUI), with a triennial third-party assessment for critical national security information, and annual self-assessment for other cases.
    3. Enhanced Protection of Controlled Unclassified Information, with triennial government-led assessments.

    Most independent contractors and industry partners will use level 2, and perform an annual self-assessment of their security posture against the program requirements.

    When will CMMC compliance become a hard requirement?

    While the 2.0 standard has been live since December 2023, CMMC will become a contractual requirement after 3 years, which falls in 2026. However, it takes time to work through the controls and achieve the security requirements, and organizations may take anywhere from months to years to gain this level of maturity, depending on their size and agility. Undoubtedly, the best course of action is to start planning now in order to remain eligible for contracts and to keep winning business.

    How does CMMC compare to NIST SP 800-171? 

    CMMC is based on the NIST SP 800-171 security controls framework for handling Controlled Unclassified Information – similar to FedRAMP – and so anyone familiar with these publications will feel comfortable with the CMMC requirements. Whilst NIST SP 800-171 provides a wide range of security controls, the exact implementation can be left to the user’s discretion; CMMC gives exact requirements and provides a framework for self-assessment and auditing.

    5 steps to CMMC compliance

    In order to become CMMC compliant, you should be systematic in your approach. Here’s how to proceed:

1. Determine your CMMC maturity level requirements: this depends on the type of information that you’ll be handling. Level 1 covers Federal Contract Information (FCI), whilst Level 2 – the most widely used level – covers CUI. Level 3 is for the most critical national security programs.
2. Determine your scope and assets: it’s very important to work out the boundary of the systems that will handle the sensitive data, and to keep track of which IT assets are included in this.
    3. Perform a gap analysis: go through the CMMC control framework and check which controls you currently meet, and which ones need attention; either for existing systems or for future designs.
    4. Choose the right technology platforms: such as Ubuntu Pro, a subscription for open source security that patches critical vulnerabilities within 24 hours.
    5. Complete a self-assessment.

    How Canonical can support your journey towards CMMC

    Patching security vulnerabilities

    Ubuntu Pro supports the CMMC requirement to remediate software vulnerabilities in a timely manner. Since starting out 20 years ago, Canonical has typically released patches for critical vulnerabilities within 24 hours. We provide 12 years of security patching for all the software applications and infrastructure components within the Ubuntu ecosystem. 
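As a rough illustration of what this looks like in practice, attaching a machine to Ubuntu Pro and checking its patch coverage takes a couple of commands with the pro client (the token placeholder below comes from your Ubuntu Pro account):

    sudo pro attach <your-token>      # attach the machine to your Ubuntu Pro subscription
    pro status                        # list which services (esm-infra, esm-apps, ...) are enabled
    pro security-status               # summarize package coverage and available security updates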

    FIPS-certified crypto modules

    Ubuntu Pro provides FIPS 140-2 and FIPS 140-3 certified cryptographic modules that you can deploy with a single command. These certified modules replace the standard cryptographic libraries which ship with Ubuntu by default, making the system FIPS 140 compliant, and allowing existing applications to make use of FIPS-approved cryptographic algorithms and ciphers without further certification or modification.
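As a sketch of that single-command deployment (on a machine already attached to Ubuntu Pro; service names as currently documented by Canonical):

    sudo pro enable fips-updates   # FIPS-validated modules plus subsequent security fixes
    # or, for the strictly certified module set without later patches:
    sudo pro enable fips
    sudo reboot                    # the FIPS kernel and libraries take effect after a reboot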

    System hardening

    DISA-STIG is a system hardening guide that describes how to configure an Ubuntu system to be maximally secure, by locking it down and restricting unnecessary privileges. The STIG for Ubuntu lists several hundred individual configuration steps to turn a generic Ubuntu installation into a fully secure environment. System hardening is an important CMMC requirement.

    You can simplify STIG hardening with the Ubuntu Security Guide (USG): the USG tool enables automated auditing and remediation of the individual configuration steps in order to comply with the STIG benchmark, and allows you to customize the hardening profile to meet individual deployment needs.
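A minimal auditing-and-remediation run might look like the following (usg is delivered as an Ubuntu Pro service; profile names can vary between Ubuntu releases):

    sudo pro enable usg            # make the Ubuntu Security Guide packages available
    sudo apt install usg
    sudo usg audit disa_stig       # audit only: generate a compliance report
    sudo usg fix disa_stig         # apply the STIG remediations (review the audit first)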

    Overview 

    Canonical is a software distributor rather than a service provider, and as such we are not CMMC certified ourselves, but through Ubuntu Pro we provide the tools that enable our customers to meet these specific technology requirements within the baseline controls. 

    As such, Ubuntu Pro provides an easy pathway to CMMC compliance. It delivers CVE patching for Ubuntu OS and Applications covering 36,000 packages, along with automated, unattended, and restartless updates, and the best tools to secure and manage your Ubuntu infrastructure, developed by the publisher of Ubuntu. Learn more about Ubuntu Pro on our explanatory web page.

    06 June, 2025 01:27PM

    hackergotchi for Deepin

    Deepin

    deepin Community Monthly Report for May 2025

I. May Community Data Overview

II. Community Products

In May, our product iteration accelerated further, delivering a more stable, user-friendly system experience and enhanced application compatibility.

deepin 25 Beta Release: Building on Stability, Opening Beta for Co-creation

After months of dedicated development, deepin 25 Beta was released on May 23rd. Focusing on system stability, this version comprehensively resolves key issues from the Alpha phase, including abnormal upgrade booting, failure of the "Rock Solid System Protection" feature, and multi-screen scaling adaptation for the Desktop Environment (DDE). Hardware compatibility has also been optimized, significantly improving support for peripherals and SMB services. Functionally, ...Read more

    06 June, 2025 10:04AM by xiaofei

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Launchpad News: Phasing out Bazaar code hosting

    What is Bazaar code hosting?

Bazaar is a distributed revision control system originally developed by Canonical. It provides functionality similar to that of the now-dominant Git.

    Bazaar code hosting is an offering from Launchpad to both provide a Bazaar backend for hosting code, but also a web frontend for browsing the code. The frontend is provided by the Loggerhead application on Launchpad.

    Sunsetting Bazaar

Bazaar passed its peak a decade ago. Breezy is a fork of Bazaar that has kept a form of Bazaar alive, but the last release of Bazaar itself was in 2016. Since then its impact has declined, and there are modern replacements like Git.

    Just keeping Bazaar running requires a non-trivial amount of development, operations time, and infrastructure resources – all of which could be better used elsewhere.

    Launchpad will now begin the process of discontinuing support for Bazaar.

    Timelines

We are aware that migrating repositories and updating workflows will take some time, which is why we have planned the sunsetting in two phases.

    Phase 1

Loggerhead, the web frontend used to browse code in a web browser, will be shut down imminently. Analysis of the access logs showed that hardly any requests still come from legitimate users; almost all of the traffic comes from scrapers and other abusers. Sunsetting Loggerhead will not affect the ability to pull, push, and merge changes.

    Phase 2

From September 1st, 2025, we intend to discontinue Bazaar, the code hosting backend. Users need to migrate all of their repositories from Bazaar to Git before this deadline.

    Migration paths

    The following blog post describes all the necessary steps on how to convert a Bazaar repository hosted on Launchpad to Git.

    Migrate a Repository From Bazaar to Git
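For orientation, one common conversion path (not necessarily the exact steps in the linked post) uses Breezy's fast-export piped into git fast-import; owner, project, and branch names below are placeholders:

    # Requires Breezy (brz) with its fastimport plugin installed.
    brz branch lp:~owner/project/branch bzr-src
    git init git-dst
    brz fast-export --plain bzr-src | git -C git-dst fast-import
    cd git-dst && git checkout -f master
    # Push to the project's Git repository on Launchpad:
    git remote add origin git+ssh://git.launchpad.net/~owner/project
    git push origin master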

    Call for action

    Our users are extremely important to us. Ubuntu, for instance, has a long history of Bazaar usage, and we will need to work with the Ubuntu Engineering team to find ways to move forward to remove the reliance on the integration with Bazaar for the development of Ubuntu. If you are also using Bazaar and you have a special use case, or you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.

    You can reach us in #launchpad:ubuntu.com on Matrix, or submit a question or send us an e-mail via feedback@launchpad.net.

    It is also recommended to join the ongoing discussion at https://discourse.ubuntu.com/t/phasing-out-bazaar-code-hosting/62189.

    06 June, 2025 09:26AM

    hackergotchi for Deepin

    Deepin

    June 05, 2025

    hackergotchi for Purism PureOS

    Purism PureOS

    Fortune.com Features Purism and the Made in America Liberty Phone

    The only ‘Made in America’ smartphone maker has a message for Apple about manufacturing in the Trump tariff era.

    The post Fortune.com Features Purism and the Made in America Liberty Phone appeared first on Purism.

    05 June, 2025 06:16PM by Purism

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Blog: What if your container images were security-maintained at the source?

    Software supply chain security has become a top concern for developers, DevOps engineers, and IT leaders. High-profile breaches and dependency compromises have shown that open source components can introduce risk if not properly vetted and maintained. Although containerization has become commonplace in contemporary development and deployment, it can have drawbacks in terms of reproducibility and security. 

    There is a dire need for container builds that are not only simple to deploy, but also safe, repeatable, and maintained long-term against new threats – and that’s why Canonical is introducing the Container Build Service.

    The open source security challenge

The use of open source software (OSS) is becoming more and more prevalent in enterprise environments. With analyses showing that it makes up around 70% of all software in use, OSS is no longer considered a supplementary element but rather the foundation of modern applications. What’s even more interesting is that 97% of commercial codebases are reported to have integrated some OSS components, highlighting how fundamental it has truly become. However, we’re also seeing that with this growing use of OSS, open source vulnerabilities are frequently discovered. Research indicates that 84% of codebases contain at least one known open source vulnerability, with almost half of those vulnerabilities being categorized as high-severity. Black Duck’s 2025 Open Source Security and Risk Analysis (OSSRA) report showed that this risk has been amplified by the sheer number of open source files used by applications, which has nearly tripled in just four years, from an average of 5,300 in 2020 to over 16,000 in 2024. The increase in attack surface is directly correlated with this rise.

According to a report from Canonical and IDC, organizations are adopting OSS primarily to reduce costs (44%), accelerate development (36%), and increase reliability (31%). Despite nine out of ten organizations expressing a preference to source packages from trusted repositories like those in their OS, most still pull directly from upstream registries. This means that the responsibility for patching falls heavily on IT teams. The report found that seven in ten teams dedicate over six hours per week (almost a full working day) to sourcing and applying security updates. The same proportion mandates that high and critical-severity vulnerabilities be patched within 24 hours, yet only 41% feel confident they can meet that SLA. What’s also interesting is that more than half of organizations do not automatically upgrade their in-production systems or applications to the newest versions, leaving them exposed to known vulnerabilities.

Supply chain attacks are also becoming more frequent. A study conducted by Sonatype showed that the number of software supply chain attacks doubled in 2024 alone, and according to a study by BlackBerry, over 75% of organizations experienced a supply chain-related attack in the previous year. The Sonatype study also highlighted how prevalent malicious packages became in the last 12 months, with more than 500,000 malicious packages being found in public repositories – a 156% increase from the previous year. This highlights how attackers target upstream open source in order to compromise downstream users.

    In light of these trends, development teams are seeking ways to ensure the integrity of their container images. Practices like reproducible builds and signed images are gaining popularity as defenses against tampering, while minimal images promise fewer vulnerabilities. However, implementing these measures requires significant effort and expertise. This is where Canonical’s latest offering comes in.

    Canonical’s Container Build Service:  reproducible, hardened and security-maintained images

    Canonical has launched a new Container Build Service designed to meet the above challenges head-on. In essence, through this service, Canonical’s engineers will custom-build container images for any open source project or stack, with security and longevity as primary features. Whether it’s an open source application or a custom base image containing all the dependencies for your app, Canonical will containerize it according to your specifications and harden the image for production. The resulting container image is delivered in the Open Container Initiative (OCI) format and comes with up to 12 years of security maintenance.

    The entire container supported for up to 12 years

Every package and library in the container – even those not originally in Ubuntu’s repositories – is covered under Canonical’s security maintenance commitment. We have a track record of patching critical vulnerabilities within 24 hours on average, ensuring quick remediation of emerging threats. Unlike standard base images that cover only OS components, Canonical’s service will include all required upstream open source components in the container build. In other words, your entire open source dependency tree is kept safe – even if some parts of it were not packaged in Ubuntu before. This means teams can confidently use the latest frameworks, AI/ML libraries, or niche utilities, knowing Canonical will extend Ubuntu’s famous long-term support to those pieces as well.

    Each container image build comes with a guaranteed security updates period of up to 12 years. This far outlasts the typical support window for community container images. It ensures that organizations in regulated or long-lived environments can run containers in production for a decade or more with ongoing patching. 

    Truly portable

    The hardened images are designed to run on any popular Linux host or Kubernetes platform. Whether your infrastructure is Ubuntu, RHEL, VMware, or a public cloud Kubernetes service, Canonical will support these images on that platform. This broad compatibility means you don’t have to be running Ubuntu on the host to benefit: the container images are truly portable and backed by Canonical across environments.

    Long-term reproducibility and automation

    Canonical’s build pipeline emphasizes reproducibility and automation. Once your container image is designed and built, an automated pipeline takes over to continuously rebuild and update the image with the latest security patches. This ensures the image remains up to date over time without manual intervention, and it provides a reproducible build process (verifiable by Canonical) to guarantee that the image you run in production exactly matches the source and binaries that were vetted.

    In short, the new Container Build Service delivers secure, reproducible, and highly dependable container images, tailor-made for your applications by the experts behind Ubuntu. It effectively offloads the heavy lifting of container security maintenance to Canonical, so your teams can focus on writing code and deploying features and not constantly chasing the next vulnerability in your container image.

    Minimal footprint, optimal performance

    A standout aspect of Canonical’s approach is the use of chiseled Ubuntu container images. Chiseled images are Canonical’s take on the “distroless” container concept – ultra-minimal images that include only the essential runtime components needed by your application and nothing more. By stripping away unnecessary packages, utilities, and metadata, chiseled images dramatically reduce image size and attack surface.

    What exactly are chiseled images? They are built using an open source tool called Chisel which effectively sculpts down an application to its bare essentials. A chiseled Ubuntu container image  still originates from the Ubuntu base you know, but with all surplus components carved away.
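As a rough illustration of the idea (slice names below are examples; the actual slice definitions live in Canonical's chisel-releases repository):

    mkdir /tmp/rootfs
    chisel cut --release ubuntu-24.04 --root /tmp/rootfs \
        base-files_base ca-certificates_data libc6_libs
    # /tmp/rootfs now contains only the files those slices define and can be
    # copied into a scratch-based container image.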

Chiseled images include only the files and libraries strictly required to run your application, excluding surplus distro metadata, shells, package managers, and other tools not needed in production. Because of this minimalist approach, chiseled images are significantly smaller than typical Ubuntu images. This not only means less storage and faster transfer, but also inherently fewer places for vulnerabilities to hide. In a .NET container optimization exercise done by the ACA team at Microsoft, chiseling reduced the Backend API image size from 226 MB to 119 MB, a 56.6% reduction, and slashed CVEs from 25 to just 2, a 92% decrease. Packages also dropped from 451 to 328, offering far fewer potential vulnerabilities to manage.

    With less bloat, chiseled containers start up faster and use less memory. They have only the essentials, so pulling images and launching containers is quicker. For example, chiseling the .NET runtime images trimmed about 100 MB from the official image and produced a runtime base as small as 6 MB (compressed) for self-contained apps. Such a tiny footprint translates to faster network transfers and lower memory overhead at scale.

    By using chiseled Ubuntu images in its container builds, Canonical ensures that each container is as small and locked down as possible, while still being based on the world’s most popular Linux distribution for developers. It’s a combination that delivers strong security out of the box. And because these images are built on Ubuntu, they inherit Ubuntu’s long-term support policies. Our container images align with Ubuntu LTS release cycles and receive the same five years of free security updates, extended to ten years with Ubuntu Pro, for the core components. In the new build service, that support can stretch to 12 years for enterprise customers, keeping even the minimal runtime components patched against CVEs over the long term.

    Built on Ubuntu Pro 

    Canonical coined the term “Long Term Support (LTS)” back in 2006 with Ubuntu 6.06 LTS, pioneering the idea of stable OS releases with 5 years of guaranteed updates. Since then, Ubuntu LTS has become a byword for reliability in enterprises. In 2019, Canonical introduced Ubuntu Pro, which expanded on this foundation by providing comprehensive security maintenance not just for Ubuntu’s core system, but for thousands of community (universe) packages as well, along with enterprise features like FIPS 140 certified cryptography. Today, Ubuntu Pro is a very comprehensive open source security offering, covering over 36,000 packages with 10-year maintenance. 

    This background matters because the new Container Build Service is essentially Ubuntu Pro for your container images. Canonical is extending its expertise in automated patching, vulnerability remediation, and long-term maintenance to the full stack inside your containers. By having Canonical design and maintain your container image, you’re effectively gaining a dedicated team to watch over your software supply chain. Every upstream project included in your container is continually monitored for security issues. If a new vulnerability emerges in any layer of your stack – whether it’s in the OS, a shared library, or an obscure Python package –  Canonical will proactively apply the patch and issue an updated image through the automated pipeline. All of this happens largely behind the scenes, and you receive notifications or can track updates as needed for compliance. It’s a level of diligence that would be costly and difficult to replicate in-house.

    Furthermore, Canonical’s involvement provides a chain of custody and trust that is hard to achieve with self-built images. The containers are built and signed by Canonical using the same infrastructure that builds official Ubuntu releases, ensuring integrity. Canonical and its partners have even established a zero-distance supply chain for critical assets – meaning there’s tight integration and verification from source code to the final container artefact. This approach greatly reduces the risk of tampering or hidden malware in the supply chain.

    Because Ubuntu is so widely trusted, Canonical’s container images come pre-approved for use in highly regulated environments. Notably, hardened Ubuntu container images are already certified and available in the U.S. Department of Defense’s “Iron Bank” repository, which is a collection of hardened containers for government use. By leveraging Canonical’s service, organizations inherit this level of credibility and compliance. It’s easier to meet standards like FedRAMP, DISA-STIG, or the upcoming EU Cyber Resilience Act when your base images and components are backed by Ubuntu Pro’s security regime and provide auditable evidence of maintenance.

    In summary, the Container Build Service stands on the shoulders of Ubuntu Pro and Canonical’s long experience in  open source security. Your custom container isn’t just another bespoke image, it becomes an enterprise-grade artifact, with clear maintenance commitments and security SLAs that auditors and IT governance teams will appreciate.

    Canonical’s container build service aims to have every layer of the container stack – from OS to app dependencies – maintained. With optimized chiseled sizes, a decade of updates, and Canonical’s support, these images are crafted for production. 

Learn more about Canonical’s Container Build Service >

    Get in touch to discuss securing your container stack today > 

    05 June, 2025 01:00PM

    Stéphane Graber: Announcing Incus 6.13

    The Incus team is pleased to announce the release of Incus 6.13!

    This is a VERY busy release with a lot of new features of all sizes
    and for all kinds of different users, so there should be something for
    everyone!

    The highlights for this release are:

    • Windows agent support
    • Improvements to incus-migrate
    • SFTP on custom volumes
    • Configurable instance external IP address on OVN networks
    • Ability to pin gateway MAC address on OVN networks
    • Clock handling in virtual machines
    • New get-client-certificate and get-client-token commands
    • DHCPv6 support for OCI
    • Network host tables configuration for routed NICs
    • Support for split image publishing
    • Preseed of certificates
    • Configuration of list format in the CLI
    • Add CLI aliases for create/add and delete/remove/rm
    • OS metrics are now included in Incus metrics when running on Incus OS
    • Converted more database logic to generated code
    • Converted more CLI list functions to using server side filtering
    • Converted more documentation to be generated from the code

    The full announcement and changelog can be found here.
    And for those who prefer videos, here’s the release overview video:

    You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

    And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

    Donations towards my work on this and other open source projects is also always appreciated, you can find me on Github Sponsors, Patreon and Ko-fi.

    Enjoy!

    05 June, 2025 05:07AM

    hackergotchi for Maemo developers

    Maemo developers

    Mobile blogging, the past and the future

    This blog has been running more or less continuously since mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what’s common is that at almost all points there was a mechanism to publish while on the move.

    Psion, documents over FTP

    In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

    Psion S5, also known as the Ancestor

    The Psion had a reasonably sized keyboard and a good native word processing app. And battery life good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

    Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

    In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

    If we wanted to include photos to the stories, we’d have to find an Internet cafe.

    SMS and MMS

    For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message’s text as the title.

    As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.

    As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

    Photos over email

    A much easier setup than MMS was to slightly come back to the old Psion setup, but instead of word documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.

    And so my blog included a new “moblog” section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

    My blog from that era

    Pause

    Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

    In the meanwhile the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.

    And I traveled off-the-grid rarely enough that it didn’t make sense to develop a system.

    But now that we’re sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

    Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

    However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time preventing our way of using it, and also the dishy itself, or the way we power it may fail.

    But despite what you’d think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

    Inreach, texting with the cloud

    Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

    When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

    I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
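This is not the author's actual integration, but a rough sketch of the fetch side, assuming the public MapShare KML feed (the URL pattern and MapShare name are assumptions):

    FEED="https://share.garmin.com/Feed/Share/YOUR_MAPSHARE_NAME"   # placeholder
    curl -s "$FEED" \
      | grep -o '<description>[^<]*</description>' \
      | sed 's/<[^>]*>//g' \
      > inreach-updates.txt
    # inreach-updates.txt can then be turned into Jekyll posts and pushed with git.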

    One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it’d even enable rudimentary communications if we found ourselves in a liferaft.

    Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

    Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

    Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

    With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We’d then need a mailbox that can receive these mails, and some automation to fetch and publish.

• Sailmail and No Foreign Land support structured data via email to update position. Their formats could be useful inspiration.

    05 June, 2025 12:00AM by Henri Bergius (henri.bergius@iki.fi)

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Podcast Ubuntu Portugal: E350 a Senhora Dos Anéis

Back in triumph from Oppidum Sena, where they caught a heat wave and a bellyful of roast kid, cheese, and wine, our heroes bring news from Wikicon Portugal 2025 and tell us about their technological adventures, which include stripping down a Cervantes and catching felines named Felicity in strange corners of the Internet. On hand to welcome them was Princess Leia, a.k.a. Joana Simões, a.k.a. the Lady of the Rings, back from a glorious mission in Tokyo and about to leave for Mexico. The conversation will shake the ground beneath your feet and the satellites above your heads!

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is "Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura, an artist, illustrator, and comic book author. You can get to know Shizamura better on Ciberlândia and on her website.

    05 June, 2025 12:00AM

    June 04, 2025

    Faizul "Piju" 9M2PJU: Building a Plex Media Server with Raspberry Pi

    If you’re looking for a low-power, always-on solution for streaming your personal media library, the Raspberry Pi makes a great Plex server. It’s compact, quiet, affordable, and perfect for handling basic media streaming—especially for home use.

    In this post, I’ll guide you through setting up Plex Media Server on a Raspberry Pi, using Raspberry Pi OS (Lite or Full) or Debian-based distros like Ubuntu Server.


    🧰 What You’ll Need

    • Raspberry Pi 4 or 5 (at least 2GB RAM, 4GB+ recommended)
    • microSD card (32GB+), or SSD via USB 3.0
    • External storage for media (USB HDD/SSD or NAS)
    • Ethernet or Wi-Fi connection
    • Raspberry Pi OS (Lite or Desktop)
    • A Plex account (free is enough)

    ⚙ Step 1: Prepare the Raspberry Pi

    1. Flash Raspberry Pi OS using Raspberry Pi Imager
    2. Enable SSH and set hostname (optional)
    3. Boot the Pi, log in, and update:
    sudo apt update && sudo apt upgrade -y
    

    📦 Step 2: Install Plex Media Server

    Plex is available for ARM-based devices via their official repository.

    1. Add Plex repo and key:
    curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
    echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
    sudo apt update
    
2. Install Plex:
    sudo apt install plexmediaserver -y
    

    🔁 Step 3: Enable and Start the Service

    Enable Plex on boot and start the service:

    sudo systemctl enable plexmediaserver
    sudo systemctl start plexmediaserver
    

    Make sure it’s running:

    sudo systemctl status plexmediaserver
    

    🌐 Step 4: Access Plex Web Interface

    Open your browser and go to:

    http://<your-pi-ip>:32400/web
    

    Log in with your Plex account and begin the setup wizard.


    📂 Step 5: Add Your Media Library

    Plug in your external HDD or mount a network share, then:

    sudo mkdir -p /mnt/media
    sudo mount /dev/sda1 /mnt/media
    

    Make sure Plex can access it:

    sudo chown -R plex:plex /mnt/media
    

    Add the media folder during the Plex setup under Library > Add Library.
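The mount above only lasts until the next reboot. To make it persistent, the usual approach is an /etc/fstab entry (this assumes an ext4 partition on /dev/sda1; use blkid to find the real UUID and filesystem type):

    sudo blkid /dev/sda1                                   # note the UUID and TYPE
    echo 'UUID=<your-uuid> /mnt/media ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
    sudo mount -a                                          # test the entry without rebooting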


    💡 Optional Tips

    • Transcoding: The Pi can handle direct play (no transcoding) well, but struggles with transcoding large files. Use compatible formats like H.264 (MP4).
    • USB Boot: For better performance, boot the Pi from an SSD instead of a microSD card.
    • Power Supply: Use a proper 5V/3A PSU to avoid crashes under heavy disk load.
    • Thermal: Add a heatsink or fan for the Pi if using Plex for long sessions.

    🔐 Secure Your Server

    • Use your router to forward port 32400 only if you want remote access.
    • Set a strong Plex password.
    • Enable Tailscale or WireGuard for secure remote access without exposing ports.
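For example, setting up Tailscale on the Pi takes two commands (using Tailscale's official install script), after which Plex is reachable over your tailnet without any port forwarding:

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up        # authenticate in the browser when prompted
    # Then browse to http://<tailscale-ip>:32400/web from any device on your tailnet.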

    ✅ Conclusion

    A Raspberry Pi might not replace a full-blown NAS or dedicated server, but for personal use or as a secondary Plex node, it’s surprisingly capable. With low energy usage and silent operation, it’s the perfect DIY home media solution.

    If you’re running other services like Pi-hole or Home Assistant, the Pi can multitask well — just avoid overloading it with too much transcoding.

    The post Building a Plex Media Server with Raspberry Pi appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    04 June, 2025 09:30PM

    hackergotchi for Purism PureOS

    Purism PureOS

    Google Restricts Android Sideloading—What It Means for User Autonomy and the Future of Mobile Freedom

    Google has recently implemented new restrictions on sideloading Android apps, citing growing security concerns. In a pilot program launched in Singapore, the tech giant now blocks the installation of certain sideloaded apps—particularly those requesting sensitive permissions such as SMS access or accessibility services—if they are downloaded via web browsers, messaging apps, or file managers. The move, developed in partnership with Singapore's Cyber Security Agency, is designed to prevent fraud and malware-enabled scams.

    The post Google Restricts Android Sideloading—What It Means for User Autonomy and the Future of Mobile Freedom appeared first on Purism.

    04 June, 2025 06:40PM by Purism

    hackergotchi for Deepin

    Deepin

    hackergotchi for ARMBIAN

    ARMBIAN

    Armbian Development Highlights: End of May 2025

    As the Armbian project transitions from spring into summer, the final week of May 2025 brought a dense flurry of development activity, delivering improvements across kernel support, bootloader updates, system performance, and user experience enhancements. With over 35 pull requests merged, this week showcased the Armbian community’s continued dedication to modernizing and stabilizing its build framework and board support packages.

    Performance & Build System Optimizations

    A notable performance enhancement arrived via #8248, where build engineer @rpardini delivered a major speed-up in Docker extension handling, cutting processing time by over 50%. Complementing this, PR #8249 addressed inefficiencies in rootfs-to-image by avoiding --sparse, significantly improving I/O speeds on various filesystems.

Kernel version parsing and custom kernel description functionality also landed with #8152, thanks to @Grippy98, enabling the display of kernel versioning within build branches.

    Board Support Enhancements & Bootloader Upgrades

A slew of boards received attention this week. The NanoPC-T6 series saw a key modernization in #8219 and #8239, switching to mainline Arm Trusted Firmware and bumping U-Boot to v2025.04 final. The Quartz64A board followed suit in #8250, while the Odroid HC4, Khadas VIM3, and Mixtile Blade3 all received U-Boot updates or reverts to improve stability.

    Legacy and edge kernel support was also improved. Notably, Rockchip64 edge kernel configuration gained CONFIG_NETKIT=y (#8237), and fixes for display mode handling on RK3588 boards were added (#8253).

    Meanwhile, the Orangepi 5 Ultra switched to a mainline kernel source (#8252), reinforcing Armbian’s ongoing effort to shed legacy components and embrace upstream compatibility.

    Infrastructure & Usability Improvements

    Behind the scenes, @igorpecovnik contributed multiple usability tweaks, including a fix for HiDPI detection (#8236) and @rpardini added improved serial console fallback behavior in GRUB (#8247). The GPG key placement was standardized across distros (#8128), simplifying build reproducibility.

    Device Tree and Service Fixes

    The smart am40 received a long-needed RTC node and U-Boot bump (#8214), while the Helios4‘s wake-on-LAN service was fixed (#8235), reinforcing Armbian’s commitment to community-requested board maintenance.

    Wrapping Up

    This week’s burst of activity highlights the Armbian community’s tireless commitment to refinement and modernization. Whether through performance enhancements, kernel bumps, or quality-of-life fixes, the project continues to evolve rapidly. Users can expect a more responsive, stable, and future-proof experience across a growing roster of supported hardware.

    Stay tuned for further updates as June unfolds.

    The post Armbian Development Highlights: End of May 2025 first appeared on Armbian.

    04 June, 2025 05:05AM by Didier Joomun

    hackergotchi for Qubes

    Qubes

    Qubes Canary 043

    We have published Qubes Canary 043. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.

    Qubes Canary 043

    
                        ---===[ Qubes Canary 043 ]===---
    
    
    Statements
    -----------
    
    The Qubes security team members who have digitally signed this file [1]
    state the following:
    
    1. The date of issue of this canary is June 03, 2025.
    
    2. There have been 107 Qubes security bulletins published so far.
    
    3. The Qubes Master Signing Key fingerprint is:
    
           427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
    4. No warrants have ever been served to us with regard to the Qubes OS
       Project (e.g. to hand out the private signing keys or to introduce
       backdoors).
    
    5. We plan to publish the next of these canary statements in the first
       fourteen days of September 2025. Special note should be taken if no new
       canary is published by that time or if the list of statements changes
       without plausible explanation.
    
    
    Special announcements
    ----------------------
    
    None.
    
    
    Disclaimers and notes
    ----------------------
    
    We would like to remind you that Qubes OS has been designed under the
    assumption that all relevant infrastructure is permanently compromised.
    This means that we assume NO trust in any of the servers or services
    which host or provide any Qubes-related data, in particular, software
    updates, source code repositories, and Qubes ISO downloads.
    
    This canary scheme is not infallible. Although signing the declaration
    makes it very difficult for a third party to produce arbitrary
    declarations, it does not prevent them from using force or other means,
    like blackmail or compromising the signers' laptops, to coerce us to
    produce false declarations.
    
    The proof of freshness provided below serves to demonstrate that this
    canary could not have been created prior to the date stated. It shows
    that a series of canaries was not created in advance.
    
    This declaration is merely a best effort and is provided without any
    guarantee or warranty. It is not legally binding in any way to anybody.
    None of the signers should be ever held legally responsible for any of
    the statements made here.
    
    
    Proof of freshness
    -------------------
    
    Tue, 03 Jun 2025 06:49:12 +0000
    
    Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
    Interview with Danish Prime Minister Frederiksen: "If Trump Wants a Trade War, We Will Respond"
    A Difficult Relationship: How the Gaza War Is Changing Germany's View of Israel
    The German Chancellor's Diplomacy Offensive: Can Friedrich Merz Help Keep Trump from Abandoning Europe?
    Death Zone Drama on K2: "Brother, It's Just Me and You"
    Ruqqia Fights for Survival: Children in Gaza Facing Malnourishment as Humanitarian Situation Worsens
    
    Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
    Gaza Cease-Fire Negotiations Hit A New Impasse Over An Old Dispute
    South Korea’s New President Will Lead A Country More Divided Than Ever
    Ukraine and Russia Met for 2nd Round of Talks as Attacks Escalate
    In Drone Attacks on Russia, Ukraine Aims for Strategic and Symbolic Blow
    Poland Election: Karol Nawrocki Wins Presidential Vote
    
    Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
    Suspect in Colorado fire attack planned for a year, FBI says
    Russia and Ukraine fail again to agree ceasefire but commit to prisoner swap
    How Ukraine carried out daring 'Spider Web' attack on Russian bombers
    Disney makes hundreds more layoffs as it cuts costs
    Mount Etna erupts as large plumes rise from volcano
    
    Source: Blockchain.info
    00000000000000000000ad08ae4d191a62914466df62c55d1785bea4b4fd2b01
    
    
    Footnotes
    ----------
    
    [1] This file should be signed in two ways: (1) via detached PGP
    signatures by each of the signers, distributed together with this canary
    in the qubes-secpack.git repo, and (2) via digital signatures on the
    corresponding qubes-secpack.git repo tags. [2]
    
    [2] Don't just trust the contents of this file blindly! Verify the
    digital signatures! Instructions for doing so are documented here:
    https://www.qubes-os.org/security/pack/
    
    --
    The Qubes Security Team
    https://www.qubes-os.org/security/
    
    

    Source: canary-043-2025.txt

    Marek Marczykowski-Górecki’s PGP signature

    -----BEGIN PGP SIGNATURE-----
    
    iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmg+28MACgkQ1lWk8hgw
    4GoYZw/7BDoro5qo2ZXxLO3IxeA6jomDhyVWgL/neAe+Y6QH2AZ8sLPewtZ9MJ7f
    bll8DXLosz+WdWYbFJTnYvsSzZETPq/UQIWc88FXEVcaGAAEUtULDD0PvW0XK2J5
    kd1kQiFUij+zWCGHcLU89ZeaCHIWgDMpdDputc8KmLsT2twLFn7sqCPT3K9l9iNY
    7Az4wGrAtffVVQt//lU98AS6bsvuPf+J8LdoUm4P/PUGDdr7P8/B7OvXX59bYXaj
    3G74LzcXVpesfcGapZbSztnxst7JfYK3N+CVaKaWNiF/hWRdrIYnLkTHZzJspB3m
    nxaiTOgTIq7uxwmtenkurMCcyzNEEx6qsaAhkmc+lrqTqnNP2PpRR6EIIr6ROmPL
    ev/1pBW/epsZFK6qOiaHxpyId8BTMSmYGzw5jJ1Bh70g8qg8XTNynv3w4tPAXS76
    eU2bW5yPYvgmpI5vK4rmro+3G6GeqMJhZCBBGywUABwzbBG+cp9Vpc7zaSCqE1q3
    SPxGopOEOPN65sUHVkqKoOpofYFB20IEF9bXV9Q343FFqn0zRpuOr/FxlYh1JFoL
    1JUsJ3FBR3b/x93pddMvzBr0/nZBbrGns1SYTL7lKYGBG5h+cnq5FU+cIKlxdfks
    j1QEO5yCwyFviqcVT/GyOresRiVORuRRc9XscN8d7ktMXQqKd7g=
    =xzVH
    -----END PGP SIGNATURE-----
    

    Source: canary-043-2025.txt.sig.marmarek

    Simon Gaiser (aka HW42)’s PGP signature

    -----BEGIN PGP SIGNATURE-----
    
    iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmg+8loACgkQSsGN4REu
    FJCYxA/+NveIrCGg+GKe2nA0SrfpsPSjV3p2HCJeYjUpNVLihKWK4HKPS6seQAeq
    2Q5a/E+8YqR7mnnTXOQotyhpKS5+MJ5rmMt3u6wshiYrFzXZ/GB0PmTH3qUPnyFW
    +hdxmiFAgYHKdriYWFddMOKwNcvlR+gcK0x9QsIRctv4P/Cak/ggumP5435qKUIl
    n+/qK5mzWP8etlJadOVzyybLAhwbzkRhVRnGDZdUQJMukCeerlAOGEHKl0jq7Wxx
    MCbNExtodjF6Dh7Tn4Kw2Oy2WdaTLqvWDFLtmnJUSReeJopZ8qHSknLHdwj4kKzm
    EnqwaazzBcKBSq0GwnjJ4hKnUanvAApaHs/lv83oGkb8Kynhvf+bHATvKe7bxy++
    dQjI+zkD3ZuFcTQ0pPrnMlXOPgj+/UZIINlmKmCNnUQdGLUP+IQnxGQD76xkekBu
    6dMAUDWPhzJWlQiPPs1Rwb4huXIS78+fUc8fpjVS6uT1AzSqMLVFFSuRmyLm+I1X
    u+SWgTGxo+7xDhbXaVJoEVB9bEkUMnBxCLxVtRsRSnI6TXFLdpxnuLg4bn6Ojxw5
    EmnxHN/LUZbY7ZL3cyQNu+gYm/BPp4phs3peP+C2o/HLVgQf7rUWGEIgUOoyMxyH
    s0gM/I7fDV37IT1wBGY/pCIGd4b0AkdFw77IL4H+nj44IBE1G8Q=
    =2iCq
    -----END PGP SIGNATURE-----
    

    Source: canary-043-2025.txt.sig.simon

    What is the purpose of this announcement?

    The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.

    What is a Qubes canary?

    A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong. A list of all canaries is available here.

    The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)

    Why should I care about canaries?

    Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.

    What are some signs of an unhealthy canary?

    Here is a non-exhaustive list of examples:

    • Dead canary. In each canary, we state a window of time during which you should expect the next canary to be published. If no canary is published within that window of time and no good explanation is provided for missing the deadline, then the canary has died.
    • Missing statement(s). Canaries include a set of numbered statements at the top. These statements are generally the same across canaries, except for specific numbers and dates that have changed since the previous canary. If an important statement was present in older canaries but suddenly goes missing from new canaries with no correction or explanation, then this may be an indication that the signers can no longer truthfully make that statement.
    • Missing signature(s). Qubes canaries are signed by the members of the Qubes security team (see below). If one of them has been signing all canaries but suddenly and permanently stops signing new canaries without any explanation, then this may indicate that this person is under duress or can no longer truthfully sign the statements contained in the canary.

    Does every unexpected or unusual canary-related event indicate a problem?

    No, there are many canary-related possibilities that should not worry you. Here is a non-exhaustive list of examples:

    • Unusual reposts. The only canaries that matter are the ones that are validly signed in the Qubes security pack (qubes-secpack). Reposts of canaries (like the one in this announcement) do not have any authority (except insofar as they reproduce validly-signed text from the qubes-secpack). If the actual canary in the qubes-secpack is healthy, but reposts are late, absent, or modified on the website, mailing lists, forum, or social media platforms, you should not be concerned about the canary.
    • Last-minute signature(s). If the canary is signed at the last minute but before the deadline, that’s okay. (People get busy and procrastinate sometimes.)
    • Signatures at different times. If one signature is earlier or later than the other, but both are present within a reasonable period of time, that’s okay. (For example, sometimes one signer is out of town, but we try to plan the deadlines around this.)
    • Permitted changes. If something about a canary changes without violating any of the statements in prior canaries, that’s okay. (For example, canaries are usually scheduled for the first fourteen days of a given month, but there’s no rule that says they have to be.)
    • Unusual but planned changes. If something unusual happens, but it was announced in advance, and the appropriate statements are signed, that’s okay (e.g., when Joanna left the security team and Simon joined it).

    In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something unusual happens with a canary, and we say it was a mistake and correct it (with valid signatures), you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.

    What are the PGP signatures that accompany canaries?

    A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.

    Why should I care whether a canary is authentic?

    If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced.

    Alternatively, an adversary could fabricate a canary in an attempt to deceive the public. Such a canary would not be validly signed, but users who neglect to check the signatures on the fake canary would not be aware of this, so they may mistakenly believe it to be genuine, especially if it closely mimics the language of authentic canaries. Such falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

    How do I verify the PGP signatures on a canary?

    The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

    1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

      $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
      gpg: directory '/home/user/.gnupg' created
      gpg: keybox '/home/user/.gnupg/pubring.kbx' created
      gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
      gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
      gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
      gpg: Total number processed: 1
      gpg:               imported: 1
      

      (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

    2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

      $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
      gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
         
         
      pub  rsa4096/DDFA1A3E36879494
           created: 2010-04-01  expires: never       usage: SC
           trust: unknown       validity: unknown
      [ unknown] (1). Qubes Master Signing Key
         
      gpg> fpr
      pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
       Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
      
    3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

      Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

    4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

      gpg> trust
      pub  rsa4096/DDFA1A3E36879494
           created: 2010-04-01  expires: never       usage: SC
           trust: unknown       validity: unknown
      [ unknown] (1). Qubes Master Signing Key
         
      Please decide how far you trust this user to correctly verify other users' keys
      (by looking at passports, checking fingerprints from different sources, etc.)
         
        1 = I don't know or won't say
        2 = I do NOT trust
        3 = I trust marginally
        4 = I trust fully
        5 = I trust ultimately
        m = back to the main menu
         
      Your decision? 5
      Do you really want to set this key to ultimate trust? (y/N) y
         
      pub  rsa4096/DDFA1A3E36879494
           created: 2010-04-01  expires: never       usage: SC
           trust: ultimate      validity: unknown
      [ unknown] (1). Qubes Master Signing Key
      Please note that the shown key validity is not necessarily correct
      unless you restart the program.
         
      gpg> q
      
    5. Use Git to clone the qubes-secpack repo.

      $ git clone https://github.com/QubesOS/qubes-secpack.git
      Cloning into 'qubes-secpack'...
      remote: Enumerating objects: 4065, done.
      remote: Counting objects: 100% (1474/1474), done.
      remote: Compressing objects: 100% (742/742), done.
      remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
      Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
      Resolving deltas: 100% (1910/1910), done.
      
    6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

      $ gpg --import qubes-secpack/keys/*/*
      gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
      gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
      gpg: no valid OpenPGP data found.
      gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
      gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
      gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
      gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
      gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
      gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
      gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
      gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
      gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
      gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
      gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
      gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
      gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
      gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
      gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
      gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
      gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
      gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
      gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
      gpg: no valid OpenPGP data found.
      gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
      gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
      gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
      gpg: no valid OpenPGP data found.
      gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
      gpg: Total number processed: 17
      gpg:               imported: 16
      gpg:              unchanged: 1
      gpg: marginals needed: 3  completes needed: 1  trust model: pgp
      gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
      gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
      
    7. Verify signed Git tags.

      $ cd qubes-secpack/
      $ git tag -v `git describe`
      object 266e14a6fae57c9a91362c9ac784d3a891f4d351
      type commit
      tag marmarek_sec_266e14a6
      tagger Marek Marczykowski-Górecki 1677757924 +0100
         
      Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
      gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
      gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
      gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
      

      The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.

    8. Verify PGP signatures, e.g.:

      $ cd QSBs/
      $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
      gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
      gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
      gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
      $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
      gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
      gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
      gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
      $ cd ../canaries/
      $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
      gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
      gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
      gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
      $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
      gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
      gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
      gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
      

      Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

    For this announcement (Qubes Canary 043), the commands are:

    $ gpg --verify canary-043-2025.txt.sig.marmarek canary-043-2025.txt
    $ gpg --verify canary-043-2025.txt.sig.simon canary-043-2025.txt
    

    You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 043 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
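
    For example, a minimal shell session for doing this could look like the following. It assumes you have already imported and authenticated the signing keys as described above, and that you paste the canary text and the two signature blocks from this announcement into the files named below (any text editor will do; nano is just an example):

    $ nano canary-043-2025.txt                  # paste the canary text, then save
    $ nano canary-043-2025.txt.sig.marmarek     # paste Marek's signature block, then save
    $ nano canary-043-2025.txt.sig.simon        # paste Simon's signature block, then save
    $ gpg --verify canary-043-2025.txt.sig.marmarek canary-043-2025.txt
    $ gpg --verify canary-043-2025.txt.sig.simon canary-043-2025.txt

    As before, each gpg --verify command should report a good signature from the appropriate key.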

    04 June, 2025 12:00AM

    June 03, 2025

    hackergotchi for Pardus

    Pardus

    New Updates Released for Pardus 23

    Keeping your Pardus 23 system up to date is all you need to do in order to receive the changes that have been made.

    03 June, 2025 05:01PM by Hace İbrahim Özbal

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Digital Education in Oldenburg: Ensuring Future-Proof School IT with UCS@school

    In this guest article, we would like to report on how we have future-proofed the IT infrastructure of around 50 schools in Oldenburg, how we have overcome challenges and which solutions have convinced us, and which exciting projects still lie ahead of us.

    As the school IT department for the city of Oldenburg, we look after the IT infrastructure of 48 general and vocational schools with around 28,000 pupils and 2,300 teachers. With around 10,000 mobile and 2,500 stationary devices, our administrative workload is constantly growing. In order to support the increasing use of digital media in teaching, we had to comprehensively modernize the IT infrastructure in Oldenburg to make it fit for the future, streamline its administration and simplify the use of IT in schools.

    Initial Situation: Rethinking School IT

    Until a few years ago, most schools administered their IT infrastructure themselves and were also supported by external service providers, which led to major differences in technical equipment. Only the vocational schools had their own IT administration and therefore an expert contact person on site. As digitalization progressed, it quickly became clear that the previous structures were not sufficient. A modern, future-proof solution was urgently needed to meet the increasing requirements and provide the schools with optimum support.

    Requirements: What Our School IT Needs to Achieve

    Moin Zukunft in Oldenburg

    In 2018, we kicked off the realignment of our school IT with a standardized media development concept. Our aim was to standardize the heterogeneous IT structures and provide central support for all educational establishments. It was particularly important to us to find a reliable, modular and user-friendly solution that would meet the technological and educational requirements of all school locations.

    We were looking for a data protection-compliant and secure on-premises solution that we could operate centrally in the city’s data center, which would provide central user IDs and offer the option of connecting other IT services via secure interfaces. It should also support the use of modern forms of learning with mobile devices and offer personalized access to a school Wi-Fi network for all users.

    Central Concept for More Flexibility and Security

    We decided on the UCS@school open source platform because its centralized identity and access management significantly simplifies user administration. We also found its pedagogical functions such as exam mode and room management as well as administrative features such as automated user import, self-service for account adjustments and password resets very useful in order to relieve teachers of IT administration tasks as far as possible. With the new solution, we can administer and provide the entire IT environment centrally in the municipal data center. UCS@school also enables other services such as software distribution, mobile device management, mail and learning platforms to be connected to the central Identity & Access Management system. Thanks to the platform’s adaptability, we can tailor and expand the system to meet current requirements. A central IT service desk and service level agreements guarantee a stable and reliable IT infrastructure.

    The UCS@school school portal of the City of Oldenburg

    Efficient Administration and Seamless Integration

    Our UCS@school environment in Oldenburg comprises 12 servers with different roles, which provide a stable, scalable basic IT structure. User data is maintained in the DaNiS school administration system, synchronized via the moin.schule state directory service and finally transferred to the central identity management system via the Schulconnex interface, where it is automatically imported. This simplifies administration, reduces errors and enables an efficient rollout of user accounts without having to manually import CSV files.

    The UCS@school portal serves as a central platform for accessing web services, instructions and admin tools. Keycloak, installed as an app on two UCS backup servers, acts as an identity provider and enables Single Sign-On (SSO) for all connected services and external applications. Our Nextcloud instance, for collaborative work on documents, runs on several virtual servers for load balancing and reliability and is integrated into Identity & Access Management via SAML and LDAP so that teachers and students can access it via SSO in the future. Preconfigured groups automate the creation of class folders and storage space allocations.

    For digital learning, Moodle and BigBlueButton are connected, both integrated via OpenID Connect (OIDC), which enables automatic course assignment and virtual classrooms in Moodle. Other services such as the timetabling software WebUntis, BILDUNGSLOGIN for digital educational media and Microsoft 365 are also connected via OIDC or SAML so that users can log in centrally with a single password.
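
    As a rough illustration of what registering such an OIDC client looks like on the identity-provider side, here is a minimal sketch using Keycloak's kcadm.sh admin CLI. The server URL, realm, client ID, and redirect URI are hypothetical placeholders and are not taken from the Oldenburg deployment:

    # Log in to the Keycloak admin CLI (placeholder server and credentials)
    $ kcadm.sh config credentials --server https://idp.example.org --realm master --user admin
    # Register an OIDC client for a hypothetical Moodle instance
    $ kcadm.sh create clients -r schools \
        -s clientId=moodle \
        -s protocol=openid-connect \
        -s enabled=true \
        -s 'redirectUris=["https://moodle.example.org/*"]'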

    Integration of services with UCS@school in Oldenburg

    Challenges During Implementation

    The first project phase for the introduction of the IT infrastructure at five pilot schools went smoothly at the beginning, but we faced challenges: Our team was heavily involved in supporting the vocational schools, which meant that capacity for onboarding was limited.

    The Covid-19 pandemic, supply bottlenecks and the “DigitalPakt Push” required a reprioritization, which is why not all pilot schools could be connected by 2021. The network renewal at all schools, the increased use of mobile devices and the shortage of specialists also presented us with additional challenges.

    Milestones Achieved - What's Next

    The network expansion is well advanced and numerous services have been rolled out centrally. A mobile device management system with around 10,000 devices has been set up and the school IT team has been expanded to 24 employees. A central service desk and a hotline offer comprehensive support, and SLAs have been agreed for 41 of 48 schools that transparently regulate rollout and support. Media concepts and LAN and Wi-Fi connections are to be completed by the end of 2024.


    Around 6,000 users at vocational schools and 7,200 at general education schools are already using the central identity management and new functions. Despite the challenges, we have created a solid foundation for the digital transformation.

    Another milestone has been reached with a Microsoft FWU framework agreement for vocational schools. The focus is now on expanding central services and providing additional user accounts. For 2025, we are planning an email and messenger solution as well as a standardized solution in the administration network, which will be tested in a pilot phase.

    You can find out more details about our project in our presentation Oldenburg: Zentrales IAM mit Keycloak & moin.schule, which we gave at the Univention Summit in January 2025.

    The post Digital Education in Oldenburg: Ensuring Future-Proof School IT with UCS@school appeared first on Univention.

    03 June, 2025 11:15AM by Alice Horstmann

    June 02, 2025

    hackergotchi for Ubuntu

    Ubuntu

    Ubuntu Weekly Newsletter Issue 894

    Welcome to the Ubuntu Weekly Newsletter, Issue 894 for the week of May 25 – 31, 2025. The full version of this issue is available here.

    In this issue we cover:

    • Extended Security Maintenance for Ubuntu 20.04 (Focal Fossa) began on May 29, 2025
    • Ubuntu Studio 22.04 LTS has reached End-Of-Life (EOL)
    • Bringing together 20 years of documentation
    • Ubuntu Stats
    • Hot in Support
    • LXD: Weekly news #396
    • Rocks Public Journal; 2025-05-30
    • Other Meeting Reports
    • Upcoming Meetings and Events
    • Successful Training Session by Ubuntu Nepal in Collaboration with GNOME Nepal
    • Web Frontend deployment with Ubuntu Workshop @ Pohang
    • LoCo Events
    • LXD 6.4 has been released
    • Dependency merging in APT 3.1.0ubuntu2
    • Supercharging Ubuntu Releases: Monthly Snapshots & Automation
    • Ubuntu Summit 25.10 is coming to your circle of friends, from London
    • The 2025 Frankfurt Engineering Sprint: What did you miss?
    • Apport local information disclosure vulnerability fixes available
    • Other Community News
    • Ubuntu Cloud News
    • In the Blogosphere
    • Other Articles of Interest
    • Featured Audio and Video
    • Updates and Security for Ubuntu 20.04, 22.04, 24.04, 24.10, and 25.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • Din Mušić – LXD
    • Cristovao Cordeiro (cjdc) – Rocks
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


    02 June, 2025 10:07PM by wildmanne39

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Simon Quigley: Willpower is a muscle

    “The most frustrating thing in the world for me, is when normal people judge a man like myself on what it really takes to extract greatness from nothing. It takes every bit of who you are.”
     — David Goggins

    Willpower is a muscle, just like lifting weights. Just like a muscle, in order to build it up, you must actively do things you do not want to do.

    If you’re an athlete, you already understand this to an extent. When you’re training to run a marathon, you’re not training your physical muscles, as much as you are training your mind to overcome that barrier. When you progressively run more distance, and perhaps some days take it a bit easy, that’s a muscle.

    Still don’t believe me? Don’t worry, the neuroscience is still new. It’s entirely possible I’m wrong. :)

    I’ll start by sharing the podcast which made me realize this is an actual muscle:

    https://medium.com/media/8ee564a66edae48d09412033fdadecf2/href

    I’d encourage you to take the time and watch that podcast in its entirety. It will answer any remaining questions you have about David Goggins.

    In case Goggins is a bit too crude for you, I’ll share the following quote from Dr. Huberman in that exact episode:

    “In humans, not some mouse, when people do something they don’t want to do, it’s not about adding more work. It’s about adding more work that you don’t want to do.”

    He also introduces us to the Anterior Mid-Cingulate Cortex.

    “But Simon, you don’t have a doctorate. How do you know about this?”
    I just showed you. The video is linked above. :)

    The Anterior Mid-Cingulate Cortex is the part of your brain dealing with willpower. We now know we can measure it, how it’s related to the pleasure center of the brain, and even that it’s related to neuroticism. Decision-making skills? Yep.

    As you can probably see, the science on this is incredibly new. If you’re in a position to fund academic research, please, graciously go fund research on this part of the brain. I genuinely want to see if someone can give me an agreeable dissent.

    In the meantime, I’ll lean on psychology here:

    https://medium.com/media/fcadeff588931b9ed5982bce2e058a3b/href

    I’m human, just like you. I can’t tell you to do things you don’t want to do. I’m also not right 100% of the time. In fact, if you’re okay reading the Bible, I’d recommend you read Matthew 18:3–5. It will explain a lot.

    In short, your motivation and willpower has to come from you.
    Not me, not Goggins, not Jocko, not Peterson. It has to come from within. I’m not talking spiritually here (again, does not replace the underlying spiritual motivators), but rather, mentally.

    I walked 15 miles last week.

    Did I want to walk 15 miles? Nope. It hurt.
    Did I do it anyway, because it was sunny outside and I’ve been sincerely focused on improving my habit structure? Yes.
    Did I inspire tons of people to start walking in a healthy way, just by leading by example? Yes.

    My advice to you all is, remember to take your day off. That doesn’t mean a lazy day:

    https://medium.com/media/e4eff5dc25bd5d9dfdb27acfd29b8be4/href

    Meaning, be responsible. Just know your own limits. Know your own personal local minimums and maximums. Know your absolutes, too. Show love to others; don’t hit them with a hammer from my toolbox, please.

    That’s all for now. Be well. This week doesn’t have a theme; now I can finally write about more of the technical topics! :D

    New month, new mixtape, by the way.

    02 June, 2025 10:04PM

    hackergotchi for GreenboneOS

    GreenboneOS

    Greenbone Audits for Compliance with the CIS Windows 11 Enterprise Benchmark

    Microsoft Windows remains the most widely used desktop operating system in enterprise environments – and also one of the most targeted by threat actors. Insecure configurations are a leading source of security breaches [1][2][3], often exploited to gain initial access [TA0001], escalate privileges [TA0004], steal credentials [TA0006], establish persistent access [TA0003], and move laterally within […]

    02 June, 2025 11:21AM by Greenbone AG

    hackergotchi for ZEVENET

    ZEVENET

    Application Delivery for Cloud Providers

    Launching a cloud service used to be about uptime and bandwidth. Today, the landscape has evolved. Clients expect resilience under pressure, protection against attacks, and seamless performance—even during traffic surges or degraded conditions.

    For cloud and hosting providers, delivering raw infrastructure is no longer enough. Security and availability have become essential components of a modern service offering. Providers that include these capabilities as part of their stack gain a significant edge in a competitive market.

    Challenges cloud providers face today

    Whether it’s a regional hosting service or a public cloud platform, most providers face similar obstacles:

    • High availability is no longer optional
      Delivering 99.9% uptime is not enough. Clients expect automated traffic distribution and seamless failover. In the absence of load balancing, unpredictable workloads can quickly lead to performance bottlenecks and outages.
    • Security threats are constant
      Providers must defend against everything from L7 application attacks to DDoS floods and automated bot traffic. Without built-in Web Application Firewalls (WAFs), rate limiting, or bot filtering, vulnerabilities in one tenant can affect others—jeopardizing SLAs and brand reputation.
    • Cloud services are increasingly commoditized
      Competing on price alone is not sustainable. Customers are placing more value on performance, manageability, and security. Features like SSL offloading, L7 routing, or integrated WAFs differentiate a provider’s offer from “just another cloud.”
    • Lack of native tooling
      Many infrastructure providers rely on basic tools for WAF, traffic distribution, or API protection—but these are often limited in scope, difficult to customize, or not designed for complex multi-tenant environments. While cloud hyperscalers offer integrated services, unlocking advanced control or segmentation typically requires separate configurations, pricing tiers, or third-party tools. This creates fragmentation and overhead, especially for providers aiming to deliver premium services.

    Why security and traffic management should be part of the core offering

    Including application delivery and security capabilities directly in the infrastructure stack is a strategic advantage. It helps providers deliver:

    • Stronger SLAs
      Load balancing and traffic intelligence reduce downtime and improve service reliability. This makes it possible to offer higher SLAs (Service Level Agreements), which builds trust with enterprise clients.
    • Lower operational costs
      Blocking malicious traffic before it reaches the application layer reduces emergency support tickets and unplanned intervention. ADC-level filtering helps keep the platform stable and clean.
    • Higher value for clients
      Many customers lack the expertise or resources to configure WAFs or failover mechanisms. Offering these as built-in services improves retention by delivering out-of-the-box protection and performance.
    • New business models
      Providers can offer tiered plans that include advanced traffic control or managed security services. These features become new sources of revenue and help move beyond simple resource resale.

    Integrating advanced delivery and security features is no longer a luxury reserved for large clouds—it’s an opportunity for providers of all sizes to offer high-value services without relying solely on generic cloud-native tools. With solutions like SKUDONET, these capabilities can be tailored, isolated per tenant, and scaled with the provider’s infrastructure—whether on-premise, hybrid, or public cloud.

    How SKUDONET integrates into provider infrastructures

    SKUDONET Enterprise Edition is built to meet the operational and business needs of infrastructure providers—from hosting companies to public cloud platforms.

    Key integration benefits:

    • Flexible deployment
      SKUDONET supports virtual machines, bare metal, public and private clouds, and hybrid environments. It integrates easily into existing infrastructures, without vendor lock-in.
    • Multi-tenancy and delegated control
      Providers can create isolated environments for different clients or departments. Each tenant can have dedicated WAF policies, SSL certificates, and monitoring—ideal for MSPs or ISPs serving multiple clients from a shared platform.
    • Centralized control and advanced policies
      The platform offers a unified console to manage L3–L7 security rules, WAF configurations, alerts, and SSL/TLS certificates. Complex tasks—like traffic throttling, cert rotation, or custom rule creation—are simplified through the UI or API.
    • Automation and scalability
      SKUDONET supports autoscaling, clustering, and full automation via a REST+JSON API. It integrates with orchestration tools like Ansible, allowing dynamic provisioning of customers, rules, or traffic routes (see the sketch after this list).
    • Enterprise-grade support
      With SLA response times as fast as 4 hours, SKUDONET offers expert assistance during deployment, scaling, or incident response—helping providers maintain service continuity and customer satisfaction.
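
    As a sketch of what such automation could look like, the following curl call posts a JSON payload to a hypothetical ADC management endpoint. The URL, path, token, and field names are illustrative placeholders only, not the actual SKUDONET API; consult the vendor documentation for the real endpoints:

    # Provision a virtual service for a tenant via a hypothetical REST+JSON API
    $ curl -X POST "https://adc.example.net/api/v1/farms" \
        -H "Authorization: Bearer $ADC_API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"name": "tenant-a-web", "virtual_ip": "203.0.113.10", "virtual_port": 443}'

    The same request can be wrapped in an orchestration tool (for example, Ansible's uri module) to provision tenants as part of a larger playbook.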

    SKUDONET is compatible with a wide variety of infrastructure strategies. It can run as a standalone ADC on virtual machines, bare metal, or cloud instances using BYOL (Bring Your Own License) models—currently available on AWS and DigitalOcean. It’s also available as a cloud-native option via Cloud, for those who prefer managed, on-demand delivery. This flexibility makes it suitable for hosting providers, enterprise platforms, or service integrators looking for advanced capabilities without being tied to vendor-specific limitations.

    Cloud and hosting providers are under pressure to deliver more than just uptime. Clients today expect performance, resilience, and built-in security from their infrastructure vendors.

    By integrating a platform like SKUDONET, providers can evolve their offering from basic infrastructure to value-added service—strengthening their brand, improving customer retention, and unlocking new growth opportunities.

    02 June, 2025 08:56AM by Nieves Álvarez

    hackergotchi for GreenboneOS

    GreenboneOS

    Dwell time: Attackers Are Striking Faster and Disguising Themselves Better

    Security experts are observing a worrying trend: the time to exploit (TTE), i.e. the time between a security vulnerability becoming known and being exploited by malicious actors, has been falling dramatically in recent times. At the same time, attackers are becoming increasingly skilled at concealing their presence in a successfully hacked network. Experts refer to […]

    02 June, 2025 08:34AM by Markus Feilner

    June 01, 2025

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky news 2025/05

    The 5th monthly Sparky project and donate report of the 2025: – Linux kernel updated up to 6.15.0, 6.12.31-LTS, 6.6.92-LTS – added to repos: Meru (it replaced gmail-desktop), Sparky Package Tool – fixed sddm-theme2-sparky which works with KF6 on testing trixie now (it is based on Breeze) Many thanks to all of you for supporting our open-source projects. Your donations help keeping them and…

    Source

    01 June, 2025 05:29PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Faizul "Piju" 9M2PJU: 0 A.D. on Linux: A Stunning, Free RTS Experience That Rivals the Best

    If you’re a Linux user craving a real-time strategy (RTS) game with the polish of Age of Empires and the historical depth of a university textbook—yet entirely free and open source—then you need to try 0 A.D. This epic project by Wildfire Games is not just an open-source alternative to mainstream RTS games—it’s a serious contender in its own right, crafted with passion, precision, and community spirit.

    🎮 What is 0 A.D.?

    0 A.D. (Zero Anno Domini) is a free, open-source, cross-platform RTS game that takes players deep into ancient history, allowing them to build and battle with civilizations from 500 B.C. to 500 A.D. The game is built using the custom Pyrogenesis engine, a modern 3D engine developed from scratch for this purpose, and available under the GPL license—yes, you can even tinker with the code yourself.

    It’s not just a clone. 0 A.D. sets itself apart with:

    • 🛡 Historically accurate civilizations
    • 🗺 Dynamic and random map generation
    • ⚔ Tactical land and naval combat
    • 🏗 City-building with tech progression
    • 🧠 AI opponents and multiplayer support
    • 💬 Modding tools and community-created content

    🐧 Why It’s Perfect for Linux Users

    Linux gamers often get the short end of the stick when it comes to big-name games—but 0 A.D. feels like it was made for us. Here’s why Linux users should care:

    ✔ Native Linux Support

    0 A.D. runs natively on Linux without the need for Wine, Proton, or compatibility layers. You can install it directly from your distro’s package manager or build it from source if you like full control.

    For example:

    # On Debian/Ubuntu
    sudo apt install 0ad
    
    # On Arch Linux
    sudo pacman -S 0ad
    
    # On Fedora
    sudo dnf install 0ad
    

    No weird dependencies. No workarounds. Just pure, native performance.

    🎨 Vulkan Renderer and FSR Support

    With Alpha 27 “Agni”, 0 A.D. now supports Vulkan, giving Linux users much better graphics performance, lower CPU overhead, and compatibility with modern GPU features. Plus, it includes AMD FidelityFX Super Resolution (FSR)—which boosts frame rates and visual quality even on low-end hardware.

    This makes 0 A.D. one of the few FOSS games optimized for modern Linux graphics stacks like Mesa, Wayland, and PipeWire.
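
    The renderer backend can be switched in the in-game graphics options. The snippet below is only a sketch under the assumption that 0 A.D. stores a per-user configuration at ~/.config/0ad/config/user.cfg and accepts a rendererbackend option there; verify both against the official documentation for your version before relying on it:

    # Assumption: per-user config path and option name, not verified here
    $ echo 'rendererbackend = "vulkan"' >> ~/.config/0ad/config/user.cfg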

    🔄 Rolling Updates and Dev Engagement

    The development team and community are highly active, with new features, bug fixes, and optimizations arriving steadily. You don’t need to wait years for meaningful updates—0 A.D. grows with each alpha release, and Linux users are treated as first-class citizens.

    Want to contribute a patch or translate the UI into Malay? You can. Everything is transparent and accessible.


    🏛 What Makes the Gameplay So Good?

    Let’s dive deeper into why the gameplay itself shines.

    🏗 Realistic Economy and Base Building

    Unlike many fast-paced arcade RTS games, 0 A.D. rewards planning and resource management. You’ll manage four resources—food, wood, stone, and metal—to construct buildings, raise armies, and advance through phases that represent a civilization’s growth. Advancing from village phase to town phase to city phase unlocks more units and structures.

    Each civilization has unique architectural styles, tech trees, and military units. For example:

    • Romans have disciplined legionaries and siege weapons.
    • Persians boast fast cavalry and majestic palaces.
    • Athenians excel in naval warfare.

    ⚔ Intense Tactical Combat

    Units in 0 A.D. aren’t just damage sponges. There’s formation control, terrain advantage, flanking tactics, and unit counters. The AI behaves strategically, and in multiplayer, experienced players can pull off devastating maneuvers.

    Naval combat has received significant improvements recently, with better ship handling and water pathfinding—something many commercial RTS games still struggle with.

    🗺 Endless Map Variety and Mod Support

    0 A.D. includes:

    • Skirmish maps
    • Random maps (with different biomes and elevation)
    • Scenario maps (with scripted events)

    And thanks to the integrated mod downloader, you can browse, install, and play with community mods in just a few clicks. Want to add new units, tweak balance, or add fantasy elements? You can.


    🕹 Multiplayer and Replays

    Play with friends over LAN, the Internet, or against the built-in AI. The game includes:

    • 🧭 Multiplayer save and resume support
    • 👁 Observer tools (with flares, commands, and overlays)
    • ⏪ Replay functionality to study your tactics or cast tournaments

    There’s even an in-game lobby where players coordinate matches across all platforms.


    👥 Community and Contribution

    The 0 A.D. project thrives because of its community:

    • Developers contribute code via GitHub.
    • Artists create stunning 3D models and animations.
    • Historians help ensure cultural accuracy.
    • Translators localize the game into dozens of languages.
    • Players write guides, tutorials, and strategy posts.

    If you’re a Linux user and want to contribute to an ambitious FOSS project, this is the perfect gateway into game development, design, or open collaboration.


    🧑‍💻 How to Install on Linux

    Here’s a quick reference:

    Option 1: Package Manager (Recommended)

    • Debian/Ubuntu: sudo apt install 0ad
    • Arch Linux: sudo pacman -S 0ad
    • Fedora: sudo dnf install 0ad
    • openSUSE: sudo zypper install 0ad

    Option 2: Compile from Source

    Follow the official instructions at https://trac.wildfiregames.com/wiki/BuildInstructions


    🎯 Final Thoughts

    0 A.D. is more than just a game—it’s a testament to what free and open-source software can achieve. For Linux gamers, it’s a rare gem: a game that respects your platform, performs well, and lets you own your experience entirely.

    So whether you’re a seasoned general or a curious strategist, download 0 A.D. today and relive history—on your terms.

    👉 Visit https://play0ad.com to download and start playing.

    The post 0 A.D. on Linux: A Stunning, Free RTS Experience That Rivals the Best appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    01 June, 2025 04:14PM

    Faizul "Piju" 9M2PJU: Exploring Oh My BASH: The Bash Framework You Might Be Missing Out On

    When it comes to customizing your shell experience, most people are familiar with Oh My Zsh, the popular Zsh configuration framework. But what if you’re a Bash user—loyal to the world’s most widely used shell? Enter Oh My BASH, a community-driven framework that brings power, flexibility, and style to Bash.

    Whether you’re a casual Linux user, a seasoned sysadmin, or a developer who loves to tinker with terminal aesthetics and productivity tools, Oh My BASH might just be the productivity upgrade you’re looking for. In this post, we’ll dive deep into what Oh My BASH is, its origins, features, pros and cons, and who it’s best suited for.


    🔍 What is Oh My BASH?

    Oh My BASH is an open-source, community-maintained framework for managing your Bash shell configuration. Much like its inspiration, Oh My Zsh, this project aims to simplify the process of customizing and enhancing your Bash environment.

    With features like themes, plugins, and modular configuration, Oh My BASH turns the plain, default Bash shell into a more vibrant, powerful, and efficient tool.


    🧬 Origin and History

    The project was created as a direct counterpart to Oh My Zsh, for users who prefer sticking with Bash rather than switching shells. Bash remains the default shell on many Unix-like systems, including most Linux distributions and macOS (until Catalina).

    Oh My BASH started as a fork of Oh My Zsh and was adapted for Bash users by enthusiasts in the open-source community. Its development continues on GitHub:
    👉 https://github.com/ohmybash/oh-my-bash

    Though not as widely adopted as Oh My Zsh, it has built a niche community of loyal users who appreciate what it offers without requiring them to abandon Bash.


    🎯 Purpose of Oh My BASH

    The primary goal of Oh My BASH is to:

    • Make Bash customization easier.
    • Provide a consistent and powerful shell experience.
    • Add features like themes, aliases, and plugins without cluttering your .bashrc.

    It acts as a productivity booster and aesthetic enhancer for your command-line interface, all while staying true to Bash’s simplicity and compatibility.


    ✅ Advantages of Oh My BASH

    Here are some of the standout benefits of using Oh My BASH:

    1. 🖌 Theming and Aesthetics

    Oh My BASH includes a variety of prompt themes. These themes add colors, Git status indicators, timestamps, and other useful elements to your prompt, making it visually appealing and informative.

    2. ⚙ Modular Configuration

    Rather than dumping all settings into a monolithic .bashrc, Oh My BASH supports a modular structure. You can organize custom aliases, environment variables, and plugin settings in separate files.

    3. 🔌 Plugin Support

    It comes with a set of built-in plugins that extend Bash’s functionality. These include helpful aliases and functions for Git, Docker, system info, SSH, and more.
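
    As a concrete example, and assuming your generated .bashrc follows the stock Oh My BASH template (check your own file, as the exact layout may differ), plugins and completions are enabled by editing a few arrays near the top of ~/.bashrc:

    # Excerpt from an Oh My BASH ~/.bashrc (entries are examples)
    OSH_THEME="font"              # prompt theme
    plugins=(git bashmarks)       # extra functions and aliases
    completions=(git ssh)         # extra tab-completion helpers
    aliases=(general)             # alias bundles shipped with Oh My BASH

    After saving the file, open a new terminal (or run source ~/.bashrc) for the changes to take effect.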

    4. 🧠 Predefined Aliases and Functions

    Oh My BASH comes packed with smart defaults, including common aliases and shell functions that can save you typing time and reduce mistakes.

    5. 👨‍👩‍👧‍👦 Community-Driven

    It’s open-source and maintained by contributors around the world. You can create your own plugins or themes, or contribute back to the main repository.

    6. 🛠 Easy Installation and Reset

    Installing Oh My BASH is straightforward and reversible. You can easily uninstall it if needed, reverting to your old .bashrc setup.


    ❌ Disadvantages of Oh My BASH

    While Oh My BASH offers many benefits, it’s not without drawbacks. Here are some things to consider:

    1. 🐌 Slower Shell Startup

    As with many feature-rich shell frameworks, Oh My BASH can slightly increase your shell’s startup time, especially if you enable many plugins and themes.

    2. 🧳 Less Active than Oh My Zsh

    Oh My BASH is not as actively maintained or as widely used as Oh My Zsh. You may find fewer themes, plugins, and community tutorials available.

    3. 🧩 Plugin Limitations

    The plugin ecosystem is smaller compared to other frameworks. Some power users may find it limited for more advanced workflows.

    4. 🌀 Not Meant for Minimalists

    If you prefer a minimalist setup or enjoy hand-tweaking your .bashrc, this framework might feel like overkill or too opinionated.


    👥 Who Should Use Oh My BASH?

    Oh My BASH is ideal for:

    • Bash Users Who Want More – If you like Bash and don’t want to switch to Zsh or Fish, but still want themes and productivity tools.
    • Developers and Sysadmins – Who work heavily in the terminal and appreciate added Git info, aliases, and clear prompts.
    • New Linux Users – Who want to make their terminal more approachable and user-friendly.
    • Customizers – If you enjoy tweaking your environment but don’t want to write everything from scratch.

    However, it may not be the best fit for:

    • Hardcore Minimalists – Who prefer plain Bash with minimal external dependencies.
    • Performance Purists – Who want the fastest shell startup time possible.
    • Zsh/Fish Advocates – Users already invested in other modern shells with more robust ecosystems.

    🚀 Getting Started with Oh My BASH

    To install Oh My BASH, run this command in your terminal:

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/ohmybash/oh-my-bash/master/tools/install.sh)"
    

    This will back up your existing .bashrc and replace it with a new one using the Oh My BASH structure. You can start exploring themes by editing your .bashrc file and changing the OSH_THEME variable.

    Example:

    OSH_THEME="font"
    

    To list available themes, check the themes/ directory in the cloned .oh-my-bash folder.


    🧰 Tips and Tricks

    • Use alias commands in your .bash_aliases.custom file.
    • Create your own theme or modify existing ones for full control.
    • Load additional scripts via the custom/ directory.
    • Keep your .bashrc clean by leveraging modular files in .oh-my-bash/custom (see the sketch after this list).
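
    As a sketch of the last two tips, and assuming that, as with Oh My Zsh, every *.sh file placed in the custom/ directory is sourced at startup (verify this against your installation), a personal add-on file could look like this:

    # ~/.oh-my-bash/custom/my-aliases.sh  (hypothetical file name)
    alias ll='ls -alF'        # detailed directory listing
    alias gs='git status'     # quick git status
    export EDITOR=nano        # preferred editor for shell tools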

    🏁 Final Thoughts

    Oh My BASH is a solid choice for anyone looking to improve their Bash shell experience. It brings modern features, customization, and user-friendliness to a traditional and powerful shell without forcing you to switch ecosystems.

    It may not have the flashiest bells and whistles of Oh My Zsh or Fish, but for many users, its blend of power and simplicity hits the sweet spot. If you’re someone who enjoys working in the terminal and wants a smoother, more expressive experience—give Oh My BASH a try.


    Have you tried Oh My BASH? Got a favorite theme or plugin? Share your thoughts or custom setups in the comments below!

    The post Exploring Oh My BASH: The Bash Framework You Might Be Missing Out On appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    01 June, 2025 01:10PM

    Faizul "Piju" 9M2PJU: Plank: The Minimalist Dock for a Clean Linux Desktop

    If you’re looking for a sleek, lightweight, and no-nonsense dock for your Linux desktop, Plank might just be your perfect match. Built for simplicity and performance, Plank provides just what you need — and nothing you don’t.


    🌟 What is Plank?

    Plank is a minimalist dock designed for Linux desktop environments. Its goal is to be the simplest dock on the planet. It’s lightweight, visually clean, and designed to stay out of your way while still being highly functional.

    Plank is also the core backend used by Docky (from version 3.0.0 onward), meaning it can be extended to create more feature-rich docks if needed — but Plank itself keeps things lean.


    🔧 Key Features

    • Ultra Lightweight: Uses minimal system resources — great for low-end or older hardware.
    • Minimal UI: No clutter, no distractions — just a beautiful dock.
    • Extensible: Acts as a backend library for more advanced dock implementations.
    • Theming Support: Easily change its appearance with custom themes.

    🛠 How to Install Plank on Debian/Ubuntu

    To install Plank on Debian-based distributions:

    sudo add-apt-repository ppa:ricotz/docky
    sudo apt-get update
    sudo apt-get install plank
    

    Launch it by typing plank in a terminal or adding it to your startup applications.


    🎨 Customization with Themes

    You can tweak Plank’s appearance using community-made themes.

    1. Download a theme from a community theme repository.
    2. Copy the theme to one of the following locations (see the commands after this list):
      • ~/.local/share/plank/themes (per user)
      • or /usr/share/plank/themes (system-wide)
    3. Open Plank settings by holding Ctrl + right-click on the dock, then choose your theme.
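
    For example, installing a downloaded theme for the current user only (the theme folder name is a placeholder):

    # Copy the theme folder into Plank's per-user theme directory
    $ mkdir -p ~/.local/share/plank/themes
    $ cp -r ~/Downloads/MyPlankTheme ~/.local/share/plank/themes/
    # Then hold Ctrl, right-click the dock, open Preferences, and select the theme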

    🧠 Final Thoughts

    If you want a beautiful, no-frills dock for your Linux desktop, Plank nails the balance between functionality and simplicity. It just works — and looks good doing it.

    👉 Check out the source code or contribute at:
    🔗 github.com/ricotz/plank

    The post Plank: The Minimalist Dock for a Clean Linux Desktop appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    01 June, 2025 04:16AM

    Simon Quigley: Trains and Cats: The Trolley Problem

    “Do not forget about the Trolley Problem,” a middle school teacher once told us. “You’ll look back on this moment and be glad you studied this specific dilemma early.”

    In hindsight, she was right. In fact, I had no idea I would need to use that knowledge until I was directly in the middle of the situation.

    Imagine this: you’re standing by a railroad track. There’s a lever in front of that railroad track, next to a button. The button sends an affirmative signal to a waiting train, so it can proceed. The lever allows the operators to switch between the primary track, and a secondary track, just in case.

    Before pressing the button, standard operating procedure is to manually inspect the tracks, to ensure no debris exists. The operators even run a small cart through, to ensure the tracks themselves are not in bad shape.

    All of the operators are out for the weekend. That’s okay, they just helped prepare the train for departure. They deserve the time off.

    Except, someone bumped the big red button on the way out, without realizing it.

    The train starts to build momentum. There’s no stopping it now. Everyone was so busy preparing the train, that checking the tracks to see if they were ready… not on their mind.

    How could they forget? Well, the cart was out for repair. It passed the last three dozen times, why would it fail this time? Why would there suddenly be debris on the tracks?

    Imagine you’re an engineer who just happened to walk by the button and lever. You see the train is starting to move. There’s debris even before the junction, and you can hear it. It’s loud.

    But again, the operators are off-site. It’s not technically “your job” to even touch the lever. You’re just an engineer, you’re not an operator. Why would you flip that switch?

    Well, because there’s a large rock straight ahead. If nobody flips the lever, it will hurtle right towards it and crash.

    So, last second, you walk up to the lever, and flip it. That being said, in order to clear the last bit of debris off that secondary track to avoid the rock, you have to sacrifice your favorite wooden stick.

    You spent decades whittling that stick until it was perfect. Except now, it’s the only thing you have to stop the train.

    Would you sacrifice your favorite stick to save the train from the rock?

    Let’s entertain both scenarios.

    What if you just “let it be,” and pretended you didn’t see it? That would probably make some feel guilty, as if they were the person that pressed the button.

    The other option would be to sacrifice your stick. It’s a material object, but you put years of effort into it. That would make you feel a different set of emotions.

    That’s why the Trolley Problem is so difficult. You’re either sacrificing one person, or multiple. You have to choose one, you can’t choose both.

    For the sake of argument, let’s say there’s a perfect 50% chance that you walk by the train. There are security cameras, but they can’t be viewed off-site due to a technical outage.

    A few people in management heard the sound of the train from a great distance away, which made them nervous. They might be asking several questions, such as:
    • Is the train intact?
    • Which track did the train go down?
    • Was there actually any debris in the way?
    • Who caused the train to start moving prematurely?

    It was an honest mistake, someone just pushed the button. All of these questions though, they almost remind me of a different dilemma of sorts:

    https://medium.com/media/4ec6b678f4c67437bd9f844bd50b9acb/href

    What would you do?

    Talk to you all on Monday.

    01 June, 2025 03:30AM

    Simon Quigley: Learn Math Effectively

    “What if there was a way to learn critical thinking and logical reasoning skills that are actually applicable in real life?” Well, there is.

    Many people are required to learn mathematics in school. For some, it’s an arduous task requiring days to wrap their head around, and for others, it’s as simple as finding the time to practice and do math. (I would highly recommend Atomic Habits by James Clear.)

    A common misconception exists that “you’ll never need to use math in real life.” My question to the people spreading that misconception would be, “what part of math?” People are commonly taught Algebra, working up to subjects like Trigonometry and Pre-Calculus.

    That’s where the misconception comes from. People are taught the foundational skills, but are many people taught the origins of math, why it exists, and what you can use some niche parts of math for?

    See, many people are taught about what you’re supposed to use math for. But did anyone actually make it interesting for you? Did you feel engaged during your math classes? Have you always wanted to study it further on your own, with no idea where to start?

    That’s where this post comes in. Not only that, you will be shocked at the amount of places you’ll be able to use math in your everyday life, once you understand the basics of what makes each part unique:

    https://medium.com/media/53fe357d3e25e4ce4ad5b38275b29093/href

    With this being said, I’m not going to focus on Algebra. I’m going to focus on Calculus.

    What is Calculus? Take a look at this video for an easy explanation:

    https://medium.com/media/2c45d941b4b5f173974315f1b7ea3254/href

    Don’t believe there are real world examples?

    https://medium.com/media/27e9889dffb96e460a763293407f7bfc/href

    If you’re now inspired to continue studying Calculus like I am, then I would personally recommend Khan Academy. I’ve used that website for 15/23 years of my life with great success, and it’s certainly not limited to mathematics.

    “Okay, so are there any concepts that you believe everyone should know besides Calculus?”

    Proofs. They’ll help you think in a more logical manner. If you understand the basics of logic gates and/or conditional statements, you’ll find this video fairly easy to understand:

    https://medium.com/media/037d22e51aa2fe41360e276123026b39/href

    If you’re learning math, study proofs. Especially if you’d like to make it a career, learning to prove your work (whether in text or via actual written math) is a great tool to have in your toolbox. I’ve been formally taught how to write mathematical proofs (occasionally you’ll see QED in my writing), and from experience, I can tell you, it was worth every second.
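
    As a tiny illustration of what proof-writing feels like (a standard textbook example, not one from my coursework), here is a direct proof that the sum of two even integers is even:

    Claim: if m and n are even integers, then m + n is even.
    Proof: by definition, m = 2a and n = 2b for some integers a and b. Then m + n = 2a + 2b = 2(a + b), and since a + b is an integer, m + n is twice an integer, i.e. even. QED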

    I’ll let you all know when I reach 100% on Calculus in Khan Academy. In the meantime, I’m just going to keep doing math. And, I’m just going to keep writing.

    Be well. Talk soon.

    01 June, 2025 01:03AM

    May 31, 2025

    Faizul "Piju" 9M2PJU: Rediscover Classic RTS with OpenRA on Linux

    If you’re a fan of real-time strategy (RTS) games and use Linux, OpenRA is a must-have. This open-source project breathes new life into classic Westwood titles like Command & Conquer: Red Alert, Tiberian Dawn, and Dune 2000, offering modern enhancements while preserving the nostalgic gameplay.


    🛠 What Is OpenRA?

    OpenRA is a community-driven initiative that reimagines classic RTS games for contemporary platforms. It’s not just a remake; it’s a complete overhaul that introduces:

    • Modernized Interfaces: Updated sidebars and controls for improved usability.
    • Enhanced Gameplay Mechanics: Features like fog of war, unit veterancy, and attack-move commands.
    • Cross-Platform Support: Runs seamlessly on Linux, Windows, macOS, and *BSD systems.
    • Modding Capabilities: A built-in SDK allows for the creation of custom mods and maps.

    These improvements ensure that both veterans and newcomers can enjoy a refined RTS experience.


    🚀 Latest Features and Updates

    The March 2025 release brought significant enhancements:

    • New Missions: Two additional Red Alert missions with improved objectives.
    • Persistent Skirmish Options: Settings now carry over between matches.
    • Balance Tweaks: Refinements for Red Alert and Dune 2000 to ensure fair play.
    • Asset Support: Compatibility with The Ultimate Collection and C&C Remastered Collection.
    • Language Support: Progress towards multilingual capabilities.

    These updates demonstrate OpenRA’s commitment to evolving and enhancing the RTS genre.


    🧰 Installation on Linux

    Installing OpenRA on Linux is straightforward:

    1. Download AppImages: Visit the official download page to get the AppImage for your desired mod.
    2. Make Executable: Right-click the AppImage, select ‘Properties,’ and enable execution permissions.
    3. Launch: Double-click the AppImage to start the game.

    Alternatively, you can install OpenRA via:

    • Snap: sudo snap install openra
    • Flatpak: flatpak install flathub net.openra.OpenRA

    These methods ensure that OpenRA integrates smoothly with your system.
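
    If you prefer the terminal to the file manager for the AppImage route, the equivalent steps are a permission change and a launch. The filename below is a placeholder; use whatever name the download page gives you:

    chmod +x OpenRA-Red-Alert-x86_64.AppImage
    ./OpenRA-Red-Alert-x86_64.AppImage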


    🌟 Why Choose OpenRA?

    OpenRA stands out in the Linux gaming landscape due to:

    • Community Engagement: Regular updates and active forums foster a vibrant player base.
    • Modding Scene: A robust SDK encourages creativity and customization.
    • Cross-Platform Play: Enjoy multiplayer matches with friends on different operating systems.
    • Educational Value: An in-game encyclopedia provides insights into units and strategies.

    These features make OpenRA not just a game but a platform for learning and community interaction.


    🎥 See OpenRA in Action

    For a visual overview, check out this review:


    🏆 Other Notable Strategy Games for Linux

    If you’re exploring more strategy titles, consider:

    • 0 A.D.: A historical RTS focusing on ancient civilizations.
    • The Battle for Wesnoth: A turn-based strategy game with a rich fantasy setting.
    • Freeciv: A free Civilization-inspired game with extensive customization.

    Each offers unique gameplay experiences and is well-supported on Linux platforms.


    OpenRA exemplifies how classic games can be revitalized for modern audiences. Its blend of nostalgia and innovation makes it a standout choice for strategy enthusiasts on Linux.

    The post Rediscover Classic RTS with OpenRA on Linux appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    31 May, 2025 11:29PM

    Simon Quigley: Reading Helps

    Everyone has a point of view. When you’re writing a story, notes for a presentation, or even non-fiction, you always have a point of view.

    31 May, 2025 11:06PM

    Faizul "Piju" 9M2PJU: 10 Simple and Useful Projects Anyone Can Build with Ubuntu Server

    Are you looking to learn more about servers but don’t know where to start? Ubuntu Server provides the perfect foundation for beginners and experienced users alike. With its reliability, security features, and extensive community support, Ubuntu Server makes it easy to create useful projects that enhance your home network or provide valuable skills for your career.

    In this guide, I’ll walk you through 10 practical projects that anyone can set up using Ubuntu Server. Each project requires minimal hardware and provides real-world benefits. Let’s get started!

    1. Network-Attached Storage (NAS) Server

    What it is: A centralized location to store and access your files from any device on your network.

    Why Ubuntu Server is perfect: Ubuntu’s stability ensures your data remains safe, while its lightweight nature means even older hardware can serve as a reliable NAS.

    Setup overview:

    1. Install Ubuntu Server on your hardware
    2. Install and configure Samba for file sharing:
    sudo apt update
    sudo apt install samba -y
    
    3. Create a directory for your shared files:
    sudo mkdir -p /media/nas
    sudo chmod 777 /media/nas
    
    4. Configure Samba by editing its configuration file:
    sudo nano /etc/samba/smb.conf
    
    5. Add the following at the end of the file:
    [NASShare]
    path = /media/nas
    browseable = yes
    read only = no
    force create mode = 0660
    force directory mode = 2770
    valid users = @users
    
    6. Set a Samba password for your user:
    sudo smbpasswd -a yourusername
    
    7. Restart Samba:
    sudo systemctl restart smbd
    

    Benefits: Access your files from any device, centralize your backups, and stream media throughout your home.
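
    To confirm the share works, you can mount it from another Ubuntu machine. This is a minimal sketch; SERVER_IP and yourusername are placeholders for your own values:

    sudo apt install cifs-utils -y
    sudo mkdir -p /mnt/nas
    sudo mount -t cifs //SERVER_IP/NASShare /mnt/nas -o username=yourusername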

    2. Personal Cloud Storage (NextCloud)

    What it is: Your own personal cloud storage solution similar to Dropbox or Google Drive but hosted on your own hardware.

    Why Ubuntu Server is perfect: Ubuntu’s package management makes installing dependencies straightforward, while LTS releases ensure long-term stability.

    Setup overview:

    1. Install LAMP stack:
    sudo apt update
    sudo apt install apache2 mariadb-server libapache2-mod-php php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip -y
    
    2. Secure your MariaDB installation:
    sudo mysql_secure_installation
    
    3. Create a database:
    sudo mysql -u root -p
    CREATE DATABASE nextcloud;
    CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'your_password';
    GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost';
    FLUSH PRIVILEGES;
    EXIT;
    
    4. Download and install NextCloud:
    cd /tmp
    wget https://download.nextcloud.com/server/releases/latest.zip
    unzip latest.zip
    sudo mv nextcloud /var/www/html/
    sudo chown -R www-data:www-data /var/www/html/nextcloud/
    
    5. Configure Apache:
    sudo nano /etc/apache2/sites-available/nextcloud.conf
    
    6. Add the following configuration:
    <VirtualHost *:80>
        DocumentRoot /var/www/html/nextcloud/
        ServerName your_domain_or_IP
    
        <Directory /var/www/html/nextcloud/>
            Options +FollowSymlinks
            AllowOverride All
            Require all granted
        </Directory>
    
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    
    7. Enable the site and required modules:
    sudo a2ensite nextcloud.conf
    sudo a2enmod rewrite headers env dir mime
    sudo systemctl restart apache2
    
    8. Access NextCloud through your browser at http://your_server_IP_or_domain and complete the setup wizard.

    Benefits: Maintain control over your data, avoid subscription fees, and get unlimited storage based on your hardware.
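
    If you plan to reach NextCloud from outside your LAN, put it behind HTTPS. One common approach, assuming you have a public domain name pointing at the server, is a free Let's Encrypt certificate via certbot:

    sudo apt install certbot python3-certbot-apache -y
    sudo certbot --apache -d your_domain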

    3. Media Server with Plex

    What it is: A powerful media server that organizes your movies, TV shows, music, and photos, making them accessible from anywhere.

    Why Ubuntu Server is perfect: Ubuntu’s efficiency means more resources are available for transcoding media, and its compatibility with Plex is excellent.

    Setup overview:

    1. Add the Plex repository:
    sudo apt update
    sudo apt install apt-transport-https curl -y
    curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
    echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
    
    2. Install Plex Media Server:
    sudo apt update
    sudo apt install plexmediaserver -y
    
    3. Create directories for your media:
    sudo mkdir -p /opt/plexmedia/{movies,tv,music,photos}
    sudo chown -R plex:plex /opt/plexmedia
    
    4. Access the Plex web interface at http://your_server_IP:32400/web and follow the setup wizard
    5. Add your media libraries pointing to the directories you created

    Benefits: Stream your media collection to any device, automatic metadata fetching, and smart organization of your content.
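
    If the server runs a firewall such as ufw, remember to open Plex's web port (TCP 32400), for example:

    sudo ufw allow 32400/tcp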

    4. Home Automation Server with Home Assistant

    What it is: A central hub to control and automate your smart home devices.

    Why Ubuntu Server is perfect: Ubuntu’s reliability ensures your home automation stays running, while its hardware compatibility supports various IoT devices.

    Setup overview:

    1. Install Docker (the easiest way to run Home Assistant):
    sudo apt update
    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt update
    sudo apt install docker-ce -y
    
    2. Install Docker Compose:
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    
    3. Create a Docker Compose file:
    mkdir ~/homeassistant
    cd ~/homeassistant
    nano docker-compose.yml
    
    4. Add the following content:
    version: '3'
    services:
      homeassistant:
        container_name: homeassistant
        image: ghcr.io/home-assistant/home-assistant:stable
        volumes:
          - ./config:/config
        environment:
          - TZ=YOUR_TIME_ZONE
        restart: always
        network_mode: host
    
    5. Start Home Assistant:
    sudo docker-compose up -d
    
    6. Access Home Assistant through your browser at http://your_server_IP:8123

    Benefits: Centralized control of all smart devices, powerful automation capabilities, and reduced dependence on cloud services.

    5. Personal VPN Server with WireGuard

    What it is: Your own VPN server that allows secure remote access to your home network and protects your privacy when using public Wi-Fi.

    Why Ubuntu Server is perfect: Ubuntu’s security focus makes it ideal for VPN services, and recent kernels include built-in WireGuard support.

    Setup overview:

    1. Install WireGuard:
    sudo apt update
    sudo apt install wireguard -y
    
    2. Generate private and public keys:
    wg genkey | sudo tee /etc/wireguard/private.key
    sudo chmod 600 /etc/wireguard/private.key
    sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
    
    3. Create a WireGuard configuration:
    sudo nano /etc/wireguard/wg0.conf
    
    4. Add the following (substituting your own values):
    [Interface]
    PrivateKey = YOUR_SERVER_PRIVATE_KEY
    Address = 10.0.0.1/24
    ListenPort = 51820
    PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
    
    # Client configuration example
    [Peer]
    PublicKey = CLIENT_PUBLIC_KEY
    AllowedIPs = 10.0.0.2/32
    
    5. Enable IP forwarding:
    echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
    
    6. Start and enable the WireGuard service:
    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    
    7. Generate configurations for your clients and distribute them securely (a minimal client example follows below)
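
    For illustration, a minimal client-side wg0.conf might look like the sketch below. The keys, addresses, and endpoint are placeholders and must match the corresponding [Peer] entry on the server:

    [Interface]
    PrivateKey = CLIENT_PRIVATE_KEY
    Address = 10.0.0.2/32
    DNS = 10.0.0.1

    [Peer]
    PublicKey = SERVER_PUBLIC_KEY
    Endpoint = your_public_IP_or_domain:51820
    AllowedIPs = 0.0.0.0/0
    PersistentKeepalive = 25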

    Benefits: Secure remote access to your home network, enhanced privacy on public networks, and better control over your internet connection.

    6. Web Server for Hosting Your Own Website

    What it is: A server to host your personal website, blog, or web application.

    Why Ubuntu Server is perfect: Ubuntu’s robust LAMP stack support makes it the go-to choice for web hosting environments.

    Setup overview:

    1. Install LAMP stack:
    sudo apt update
    sudo apt install apache2 mariadb-server php libapache2-mod-php php-mysql -y
    
    2. Secure MariaDB:
    sudo mysql_secure_installation
    
    3. Create a website directory:
    sudo mkdir -p /var/www/yourwebsite
    sudo chown -R $USER:$USER /var/www/yourwebsite
    
    4. Create a simple index.php file:
    echo '<?php phpinfo(); ?>' > /var/www/yourwebsite/index.php
    
    5. Configure Apache virtual host:
    sudo nano /etc/apache2/sites-available/yourwebsite.conf
    
    6. Add the following configuration:
    <VirtualHost *:80>
        ServerName yourwebsite.local
        ServerAlias www.yourwebsite.local
        DocumentRoot /var/www/yourwebsite
        ErrorLog ${APACHE_LOG_DIR}/yourwebsite_error.log
        CustomLog ${APACHE_LOG_DIR}/yourwebsite_access.log combined
    </VirtualHost>
    
    7. Enable the site and restart Apache:
    sudo a2ensite yourwebsite.conf
    sudo systemctl restart apache2
    

    Benefits: Full control over your web presence, no monthly hosting fees, and valuable skills for web development.

    7. Pi-hole Ad Blocker

    What it is: A network-wide ad blocker that improves browsing speed and privacy by blocking ads at the DNS level.

    Why Ubuntu Server is perfect: Ubuntu’s efficiency means Pi-hole can run alongside other services without issues, making it a perfect addition to any home server.

    Setup overview:

    1. Install required packages:
    sudo apt update
    sudo apt install curl -y
    
    2. Run the Pi-hole installer:
    curl -sSL https://install.pi-hole.net | bash
    
    3. Follow the on-screen instructions (accept most defaults)
    4. Note your admin password at the end of installation
    5. Configure your router to use your Ubuntu Server as the DNS server, or configure individual devices
    6. Access the Pi-hole admin interface at http://your_server_IP/admin

    Benefits: Faster browsing, reduced bandwidth usage, enhanced privacy, and protection from malicious domains.

    8. Git Server with Gitea

    What it is: A lightweight, self-hosted Git service similar to GitHub but running on your own hardware.

    Why Ubuntu Server is perfect: Ubuntu’s package management and system resource efficiency make it ideal for hosting developer tools like Git services.

    Setup overview:

    1. Install required packages:
    sudo apt update
    sudo apt install git curl sqlite3 -y
    
    2. Create a user for Gitea:
    sudo adduser --system --group --disabled-password --shell /bin/bash --home /home/git git
    
    3. Download and install Gitea:
    # strip the leading "v" from the release tag so it matches the download URL format
    VERSION=$(curl -s https://api.github.com/repos/go-gitea/gitea/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/^v//')
    sudo wget -O /tmp/gitea https://dl.gitea.io/gitea/${VERSION}/gitea-${VERSION}-linux-amd64
    sudo chmod +x /tmp/gitea
    sudo mv /tmp/gitea /usr/local/bin/gitea
    
    4. Create required directories:
    sudo mkdir -p /var/lib/gitea/{custom,data,log}
    sudo chown -R git:git /var/lib/gitea
    sudo chmod -R 750 /var/lib/gitea
    sudo mkdir /etc/gitea
    sudo chown root:git /etc/gitea
    sudo chmod 770 /etc/gitea
    
    5. Create a systemd service:
    sudo nano /etc/systemd/system/gitea.service
    
    6. Add the following content:
    [Unit]
    Description=Gitea (Git with a cup of tea)
    After=syslog.target
    After=network.target
    
    [Service]
    RestartSec=2s
    Type=simple
    User=git
    Group=git
    WorkingDirectory=/var/lib/gitea
    ExecStart=/usr/local/bin/gitea web --config /etc/gitea/app.ini
    Restart=always
    Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea
    
    [Install]
    WantedBy=multi-user.target
    
    7. Start and enable Gitea:
    sudo systemctl daemon-reload
    sudo systemctl enable --now gitea
    
    8. Access Gitea through your browser at http://your_server_IP:3000 and complete the initial setup

    Benefits: Full control over your code repositories, no limitations on private repositories, and integrated issue tracking.

    9. Game Server Host

    What it is: A dedicated server for hosting multiplayer games like Minecraft, Terraria, or Counter-Strike.

    Why Ubuntu Server is perfect: Ubuntu’s stability and resource efficiency allow game servers to run smoothly and consistently, even on modest hardware.

    Setup overview for Minecraft Server:

    1. Install required packages:
    sudo apt update
    sudo apt install openjdk-17-jre-headless screen -y
    
    2. Create a minecraft user:
    sudo adduser --system --home /opt/minecraft-server minecraft
    sudo addgroup --system minecraft
    sudo adduser minecraft minecraft
    
    3. Switch to the minecraft user:
    sudo su - minecraft
    
    4. Download the Minecraft server:
    mkdir -p ~/server
    cd ~/server
    wget https://piston-data.mojang.com/v1/objects/8f3112a1049751cc472ec13e397eade5336ca7ae/server.jar -O minecraft_server.jar
    
    5. Accept the EULA:
    echo "eula=true" > eula.txt
    
    6. Create a start script:
    echo '#!/bin/sh
    cd /opt/minecraft-server/server
    java -Xmx2G -Xms1G -jar minecraft_server.jar nogui' > start.sh
    chmod +x start.sh
    
    7. Exit the minecraft user and create a systemd service:
    exit
    sudo nano /etc/systemd/system/minecraft.service
    
    8. Add the following content:
    [Unit]
    Description=Minecraft Server
    After=network.target
    
    [Service]
    User=minecraft
    Nice=5
    KillMode=none
    SuccessExitStatus=0 1
    InaccessibleDirectories=/root /sys /srv /media -/lost+found
    NoNewPrivileges=true
    WorkingDirectory=/opt/minecraft-server/server
    ReadWriteDirectories=/opt/minecraft-server/server
    # run the server inside a named screen session so the ExecStop commands below can reach its console
    ExecStart=/usr/bin/screen -DmS minecraft /opt/minecraft-server/server/start.sh
    ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "say SERVER SHUTTING DOWN IN 10 SECONDS. SAVING ALL MAPS..."\015'
    ExecStop=/bin/sleep 10
    ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "save-all"\015'
    ExecStop=/usr/bin/screen -p 0 -S minecraft -X eval 'stuff "stop"\015'
    
    [Install]
    WantedBy=multi-user.target
    
    9. Enable and start the service:
    sudo systemctl enable minecraft.service
    sudo systemctl start minecraft.service
    

    Benefits: Host your favorite games with friends without subscription fees, customize server settings, and install mods freely.

    10. Docker Host for Containerized Applications

    What it is: A platform for running containerized applications, making it easy to deploy and manage various services.

    Why Ubuntu Server is perfect: Ubuntu has excellent Docker support, regular updates, and a well-maintained Docker repository.

    Setup overview:

    1. Install Docker:
    sudo apt update
    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt update
    sudo apt install docker-ce -y
    
    2. Install Docker Compose:
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    
    3. Add your user to the Docker group (to avoid using sudo with Docker commands):
    sudo usermod -aG docker $USER
    
    4. Log out and back in for the changes to take effect
    5. Test Docker:
    docker run hello-world
    
    6. Create a sample Docker Compose project:
    mkdir ~/docker-test
    cd ~/docker-test
    nano docker-compose.yml
    
    7. Add the following content for a simple web server:
    version: '3'
    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"
        volumes:
          - ./html:/usr/share/nginx/html
        restart: always
    
    8. Create a test HTML file:
    mkdir -p html
    echo "<html><body><h1>My Docker Container is Working!</h1></body></html>" > html/index.html
    
    9. Start the container:
    docker-compose up -d
    
    10. Access your test site at http://your_server_IP:8080

    Benefits: Easily deploy complex applications, maintain isolated environments, and simplify updates and maintenance.

    Why Ubuntu Server is the Perfect Choice

    Throughout these projects, Ubuntu Server demonstrates its incredible versatility and power. Here’s why Ubuntu Server stands out from other options:

    1. Stability: Ubuntu Server LTS releases are supported for 5 years, ensuring long-term reliability
    2. Security: Regular security updates keep your server and data protected
    3. Huge Community: Extensive documentation and community support make troubleshooting easy
    4. Package Management: The APT package system simplifies software installation and updates
    5. Resource Efficiency: Works well even on older or limited hardware
    6. No License Fees: Completely free to use, even in commercial environments
    7. Regular Updates: Stay current with the latest technologies and improvements

    Getting Started with Ubuntu Server

    Ready to begin? Here’s how to get started:

    1. Download Ubuntu Server from ubuntu.com/download/server
    2. Install it on your preferred hardware (old PC, Raspberry Pi, or virtual machine)
    3. Choose one of the projects above and follow the step-by-step instructions
    4. Join the Ubuntu community for support and to share your experiences

    Remember, these projects are just the beginning. As you become more comfortable with Ubuntu Server, you’ll discover countless more possibilities for creating valuable services for your home or small business.

    Have you built any interesting projects with Ubuntu Server? Share your experiences in the comments below!


    This guide was created to help newcomers explore the capabilities of Ubuntu Server. For enterprise environments, consider Ubuntu’s commercial support options.

    The post 10 Simple and Useful Projects Anyone Can Build with Ubuntu Server appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    31 May, 2025 08:07AM

    Faizul "Piju" 9M2PJU: Swap Partition vs Swap File on Linux: Everything You Need to Know

    When installing or managing a Linux system, one of the most debated topics is whether to use a swap partition or a swap file—or even use swap at all.

    In this post, we’ll go back to the origin of swap, explore why swap was needed, how modern systems use (or avoid) it, and the advantages and disadvantages of both swap partitions and swap files.


    🔄 What is Swap?

    Swap is disk space used by the operating system when physical RAM is full. It acts as an extension of RAM to allow the system to offload memory pages that are not immediately needed, keeping critical applications running smoothly.


    🧓 The Origin of Swap

    Swap originated in the early days of computing, when:

    • RAM was expensive and limited.
    • Storage (although slower) was more plentiful.
    • Systems needed a way to “extend” memory to run more processes than RAM allowed.

    Unix systems implemented swap space as a way to avoid running out of memory entirely—this idea carried over to Linux.


    🧠 Why You Might Still Need Swap Today

    Even with modern hardware, swap still has roles:

    1. Prevent Out of Memory (OOM) crashes: If your system runs out of RAM, swap provides a safety net.
    2. Hibernation (suspend-to-disk): Requires swap equal to or greater than your RAM size.
    3. Memory balancing: Swap allows the kernel to move idle pages out of RAM, freeing up space for active applications or disk cache.
    4. Low-memory devices: On systems like Raspberry Pi or small VPS servers, swap helps compensate for limited RAM.

    🤷 Why You Might Not Need Swap

    On the other hand:

    1. Lots of RAM: If your system rarely uses all available memory, swap may never be touched.
    2. SSD wear concerns: Excessive swapping can reduce SSD lifespan (though this is largely exaggerated with modern SSDs).
    3. Performance-critical applications: Swap is much slower than RAM. If you’re running performance-sensitive workloads, using swap can be a bottleneck.
    4. Modern alternatives: Features like zram and zswap offer compressed RAM swap spaces, reducing or eliminating the need for disk-based swap.

    🗃 Swap Partition

    ✔ Advantages

    • Stability: Less prone to fragmentation.
    • Predictable performance: Constant location on disk can be slightly faster on spinning HDDs.
    • Used by default in many legacy systems.
    • Can be used even if root filesystem becomes read-only.

    ❌ Disadvantages

    • Inflexible size: Hard to resize without repartitioning.
    • Occupies a dedicated partition: Not space-efficient, especially on SSDs.
    • Inconvenient for virtualized or cloud instances.

    📁 Swap File

    ✔ Advantages

    • Flexible: Easy to resize or remove.
    • No need for a separate partition.
    • Supported by all modern Linux kernels (since 2.6).
    • Works well with most filesystems including ext4, XFS, Btrfs (with limitations).

    ❌ Disadvantages

    • Can be slower on heavily fragmented file systems.
    • Doesn’t work with hibernation on some setups.
    • Needs correct permissions and configuration (e.g., no copy-on-write or compression with Btrfs unless configured properly).
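
    For reference, creating and enabling a 2 GB swap file on a typical ext4 system takes only a few commands (the size and path are illustrative):

    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab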

    🧪 Performance Considerations

    Criteria             | Swap Partition     | Swap File
    ---------------------|--------------------|-------------------------
    Resize Flexibility   | ❌ Hard            | ✅ Easy
    Setup Complexity     | ⚠ Medium           | ✅ Easy
    Performance (HDD)    | ✅ Slightly better | ⚠ Slightly worse
    Performance (SSD)    | ⚖ Similar          | ⚖ Similar
    Works with Hibernate | ✅ Yes             | ⚠ Depends on setup
    Dynamic Management   | ❌ Manual          | ✅ Resizable on-the-fly

    🛠 When to Use What?

    Use a Swap Partition if:

    • You’re setting up a traditional desktop or dual-boot Linux system.
    • You plan to use hibernation reliably.
    • You prefer separating system components into strict partitions.

    Use a Swap File if:

    • You’re on a modern system with lots of RAM and SSD.
    • You want to add swap after install easily.
    • You’re using cloud or VPS environments with flexible resources.
    • You don’t plan to use hibernation.

    💡 Bonus: zram and zswap

    Modern Linux kernels support zram and zswap, which compress memory pages before swapping to disk:

    • zram creates a compressed RAM-based block device as swap.
    • zswap is a compressed cache for swap pages before writing to disk.

    These are great for low-memory systems like Raspberry Pi or embedded devices.
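
    On Debian and Ubuntu, one straightforward way to experiment with zram is the zram-tools package; this is only a sketch, and the tunables live in /etc/default/zramswap:

    sudo apt install zram-tools
    sudoedit /etc/default/zramswap    # set e.g. ALGO=zstd and PERCENT=50
    sudo systemctl restart zramswap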


    🧾 Conclusion

    Swap is not dead—it’s evolved.

    Whether you choose a swap partition or a swap file depends on your needs:

    • Flexibility? Go for swap file.
    • Predictability and hibernation? Use a swap partition.
    • Want better performance with low RAM? Consider zram.

    As always with Linux, the choice is yours—and that’s the power of open systems.


    ✅ TL;DR

    • Swap partition: Reliable, but rigid.
    • Swap file: Flexible and modern.
    • No swap: Fine if you have lots of RAM and don’t use hibernation.
    • zram/zswap: Smart memory compression alternatives.

    The post Swap Partition vs Swap File on Linux: Everything You Need to Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    31 May, 2025 07:52AM

    Faizul "Piju" 9M2PJU: Exploring gLinux: Google’s In-House Debian-Based Operating System

    In the world of tech giants, custom tools and platforms are often developed in-house to meet unique needs at scale. One such example is gLinux, Google’s internal Linux distribution, tailored specifically for the company’s engineering workflows and infrastructure. While it’s not available to the public, gLinux plays a critical role in powering the work of thousands of Googlers worldwide.

    Let’s dive deep into the origin, evolution, advantages, and usage of gLinux to understand why Google made the switch and how it leverages the power of open-source Linux in its operations.


    📜 Origin and History

    Before gLinux, Google engineers used a Linux distribution called Goobuntu—a customized version of Ubuntu LTS (Long Term Support). Goobuntu was maintained internally and closely followed Ubuntu’s upstream releases, offering the stability and security required for large-scale engineering work. However, it had some limitations, particularly in terms of release cycles and package control.

    In 2018, Google publicly revealed that it had begun transitioning from Goobuntu to a new internal distribution: gLinux. This new system was based on Debian Testing, chosen for its balance between cutting-edge features and system stability.

    Timeline of Events:

    • Pre-2010s: Google internally adopts Ubuntu LTS and customizes it into Goobuntu.
    • 2012–2017: Goobuntu evolves with each new LTS release, but internal friction with Ubuntu’s release cadence begins to show.
    • 2018: Google announces its shift to gLinux, built on Debian Testing.
    • Post-2018: gLinux becomes the standard desktop OS for Google engineers, with updates and packages managed through a custom internal build system.

    This move also reflected Google’s broader support for the Debian community and allowed greater control over updates, security, and tooling.


    💡 Why the Switch? Goobuntu vs. gLinux

    Here are some of the core reasons why Google replaced Goobuntu with gLinux:

    1. More Predictable Upgrade Path

    Ubuntu’s release cycle meant that Google had to perform large-scale migrations every two years. Debian Testing, in contrast, allows for rolling updates, which are easier to manage incrementally.

    2. Greater Flexibility and Control

    Debian gives Google more control over its packages and dependencies. With gLinux, Google can apply custom patches, test packages internally, and push updates as needed—without waiting for upstream Ubuntu changes.

    3. Security and Compliance

    With a custom Debian base, gLinux could be tailored to meet strict internal security standards. This includes features like secure boot, sandboxed applications, and custom kernel modules.

    4. Alignment with Upstream Open Source

    Google has a long-standing relationship with open-source communities. Debian’s transparent development and packaging philosophy made it a better match for Google’s collaborative approach.


    ⚙ How gLinux Works Internally

    While the exact details of gLinux’s architecture and infrastructure are proprietary, some high-level features are known:

    • Based on Debian Testing: It pulls from the Debian Testing branch, allowing for a balance of new features and reasonable stability.
    • Custom Internal Tooling: Google uses its own build systems to manage, test, and deploy gLinux packages.
    • Tight Integration with Google’s Workspace: Everything from developer tools to secure network access is configured and maintained through gLinux-specific policies.
    • Support for Internal Software Development: gLinux is optimized for Google’s software stack, including tools for coding, code reviews, builds, and containerization.

    ✅ Advantages of gLinux

    Here are some of the key benefits of gLinux as seen internally at Google:

    🔐 Enhanced Security

    gLinux is hardened to comply with Google’s internal security policies. This includes kernel-level protections, patch management, encryption, and advanced access controls.

    🚀 Faster and Flexible Updates

    By using Debian Testing and an internal update mechanism, gLinux allows Google to push out security patches and software upgrades quickly without being tied to third-party timelines.

    🛠 Optimized Developer Experience

    gLinux comes preloaded with Google’s development environment tools, code review systems, and internal APIs—ready to go out of the box.

    🧩 Modular and Customizable

    Google engineers can request and deploy specific packages, and gLinux supports modular installations for different roles—whether it’s frontend development, backend infrastructure, or machine learning research.

    🌱 Open Source Friendly

    Because Debian is a fully community-driven project, Google benefits from a robust and widely supported ecosystem—while also contributing fixes and enhancements back to Debian in some cases.


    🧰 Usage and Deployment

    gLinux is used across Google by thousands of employees, particularly software engineers and infrastructure developers. It powers:

    • Workstations and laptops for developers.
    • Build environments for large-scale codebases.
    • Testing and automation frameworks for QA and CI/CD.
    • Internal security scanning and policy enforcement systems.

    It’s important to note that gLinux is not meant for production servers—Google uses other systems like gVisor, Borg, and containerized environments for running its services at scale.


    🔒 Is gLinux Available to the Public?

    No, gLinux is strictly an internal operating system. Unlike Ubuntu or Debian, you can’t download gLinux or install it on your own machine. It’s built to interface with Google’s internal tools and infrastructure, making it unsuitable for general use outside the company.

    That said, Google’s choice to base it on Debian means that some of their contributions may benefit the wider Debian community over time.


    🧩 Conclusion

    While gLinux remains hidden behind Google’s walls, it represents a powerful example of how large organizations can adapt open-source software to meet enterprise-scale demands. By choosing Debian over Ubuntu, Google prioritized long-term flexibility, internal control, and deeper open-source alignment.

    For Linux enthusiasts and enterprise IT architects, gLinux is a testament to the power of customizing open-source platforms for specific organizational needs. It might not be available to the public—but its influence on how Linux can scale in enterprise environments is undeniable.

    The post Exploring gLinux: Google’s In-House Debian-Based Operating System appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    31 May, 2025 07:01AM

    Faizul "Piju" 9M2PJU: Essential Linux Commands Every System Administrator Should Know

    As a system administrator, mastering the command line is critical. Whether you’re maintaining servers, managing users, monitoring performance, or securing your system, knowing the right tools can make your job faster, easier, and more efficient.

    Here’s a comprehensive list of the most important Linux commands every sysadmin should know — organized by category, explained in plain language, and ready to turn you into a command-line ninja 🥷.


    🖥 System Monitoring & Performance

    Keeping your system healthy starts with knowing what’s going on behind the scenes.

    • top / htop
      View real-time system processes, CPU, and memory usage.
      htop is an enhanced version of top with a cleaner UI.
    • uptime
      Shows how long the system has been running and the average load.
    • vmstat
      Displays information about memory, processes, I/O, and CPU.
    • iostat
      Useful for monitoring disk I/O stats and CPU load.
    • free -h
      Human-readable memory usage summary (RAM + swap).
    • sar
      Historical system activity reports — useful for spotting trends.
    • ps aux
      List all running processes with their CPU and memory usage.
    • lsof
      List open files. Great for checking what’s locking a file or port.
    • strace
      Debugging tool to trace system calls and signals.
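
    A few illustrative invocations of the tools above (adjust to your own situation):

    ps aux --sort=-%mem | head -n 10    # ten most memory-hungry processes
    lsof -i :443                        # what is using port 443
    vmstat 5                            # refresh memory/CPU/I-O stats every 5 seconds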

    📁 Filesystem & Disk Usage

    Disk space issues are common — be ready to investigate and clean up.

    • df -h
      Shows disk usage for all mounted filesystems in human-readable form.
    • du -sh *
      Quickly estimate the size of directories/files in the current folder.
    • lsblk
      Displays block devices and their mount points.
    • mount / umount
      Mount or unmount filesystems.
    • fdisk -l / parted -l
      Inspect disk partitions.
    • blkid
      Shows UUIDs and labels of block devices — handy for /etc/fstab.
    • find / -name filename
      Searches the entire system for a file.
    • file
      Determines a file’s type — especially useful for unknown extensions.
    • stat
      Displays detailed file metadata including timestamps and permissions.
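
    For example, a quick hunt for what is eating disk space might look like this:

    du -sh /var/* | sort -h             # sizes of directories under /var, smallest to largest
    find /var/log -type f -size +100M   # log files larger than 100 MB
    lsblk -f                            # block devices with filesystem type and UUID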

    🧑‍💻 User & Permission Management

    Managing users and access rights is at the heart of system security.

    • adduser / useradd
      Create new users (note: adduser is more user-friendly).
    • passwd
      Set or change a user’s password.
    • usermod
      Modify a user’s attributes, like group or shell.
    • deluser / userdel
      Remove users from the system.
    • groupadd, groupdel, gpasswd
      Manage user groups.
    • chmod
      Change file permissions (e.g. chmod 755).
    • chown / chgrp
      Change file owner or group.
    • id
      Show a user’s UID, GID, and group memberships.
    • who, w, last
      Show active users and login history.
    • sudo
      Run commands with elevated (root) privileges.
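
    For example, creating a new administrator account on Debian/Ubuntu typically looks like this (the username is a placeholder):

    sudo adduser alice                  # interactive: sets the password and home directory
    sudo usermod -aG sudo alice         # grant admin rights via the sudo group
    id alice                            # verify the group membership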

    🌐 Networking

    Networking is critical on any server. These tools help diagnose and configure network connections.

    • ip a / ip link
      Show all network interfaces and IP addresses.
    • ip r
      View the routing table.
    • ss -tuln / netstat -tuln
      Show open ports and listening services.
    • ping / traceroute
      Test network connectivity and route paths.
    • dig / nslookup
      Perform DNS lookups to debug name resolution.
    • curl / wget
      Download files or make web/API requests from the command line.
    • nmap
      Network scanner for discovering hosts and open ports.
    • tcpdump
      Capture and inspect network packets.
    • hostname
      View or set the system’s hostname.
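
    A few example invocations to get you started:

    ss -tulnp                           # listening TCP/UDP sockets with owning processes
    dig +short example.com              # quick DNS lookup
    curl -I https://example.com         # fetch only the HTTP response headers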

    🔐 Security & Access Control

    Security is non-negotiable. These commands help you lock things down.

    • ufw / iptables
      Configure and manage firewall rules.
    • fail2ban-client
      Control Fail2Ban — protects against brute-force attacks.
    • auditctl, ausearch
      View or search audit logs to monitor system access.
    • getenforce, setenforce
      Manage SELinux modes.
    • ssh / sshd
      Secure shell access and SSH server management.
    • scp, rsync
      Securely copy files between systems.

    📦 Package Management

    Installing and managing software is essential. Use the right tool based on your distro:

    Debian/Ubuntu

    • apt, dpkg, apt-cache

    RedHat/CentOS

    • yum, dnf, rpm

    Arch Linux

    • pacman

    Universal

    • snap, flatpak

    Examples:

    apt update && apt upgrade
    dnf install nginx
    pacman -S htop
    

    🛠 System Maintenance & Logs

    Keep your system running smoothly by managing services and watching logs.

    • journalctl
      View systemd logs.
    • dmesg
      Kernel ring buffer — shows hardware and boot messages.
    • systemctl
      Manage services on systemd systems (start, stop, enable, etc.).
    • service
      Older init-based service management.
    • crontab -e
      Edit scheduled tasks (cron jobs).
    • at
      Run one-off tasks at a specific time.
    • logrotate
      Manages log file rotation to prevent disk overuse.
    • shutdown, reboot
      Schedule or perform system reboots/shutdowns.

    🔄 Backup & Automation

    Protect data and automate your tasks for efficiency.

    • rsync -avh
      Sync directories or backup data.
    • tar -czf archive.tar.gz folder/
      Create compressed archive.
    • scp user@host:file .
      Secure file copy over SSH.
    • cron, anacron
      Automate repetitive tasks.
    • bash / sh
      Write scripts to automate system administration tasks.
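
    Putting a few of these together, a simple nightly backup is just an rsync command plus a cron entry (paths and times are illustrative):

    rsync -avh --delete /srv/data/ /mnt/backup/data/
    # crontab -e entry: run the same sync every night at 02:30
    30 2 * * * rsync -avh --delete /srv/data/ /mnt/backup/data/ >> "$HOME/backup.log" 2>&1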

    📎 Bonus Tools & Utilities

    • tmux / screen
      Terminal multiplexers — resume sessions, split terminals.
    • ncdu
      Disk usage visualizer. Much better than du for quick inspection.
    • glances
      Real-time monitoring of CPU, RAM, disk, and more.
    • nc (netcat)
      Versatile networking tool — useful for debugging or testing.
    • alias
      Create custom shortcuts for your most used commands.

    📝 Final Thoughts

    These Linux commands are not just helpful — they’re the foundation of any good system administrator’s toolbox. Mastering them will give you confidence to manage, troubleshoot, and optimize Linux systems whether you’re working with a single server or an entire fleet of machines.

    The post Essential Linux Commands Every System Administrator Should Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    31 May, 2025 06:48AM

    May 30, 2025

    Faizul "Piju" 9M2PJU: Understanding the Linux Boot Process: From Power On to Login

    Whether you’re a seasoned sysadmin or just diving into Linux, understanding the boot process is key to mastering how your system starts up. The Linux boot process is a fascinating journey that transforms powered-off hardware into a fully operational system. In this post, we’ll walk through the entire boot sequence, breaking down each stage with technical clarity.


    🔌 Stage 1: BIOS or UEFI – The System’s First Breath

    The process begins the moment you press the power button.

    BIOS (Legacy Systems)

    • POST (Power-On Self Test) is triggered to check RAM, CPU, keyboard, and basic hardware.
    • Searches for a bootable device by scanning the boot order (HDD, SSD, USB, etc.).
    • Once a bootable device is found, BIOS reads the Master Boot Record (MBR), which contains the bootloader.

    UEFI (Modern Systems)

    • Replaces BIOS with a more advanced firmware interface.
    • Reads the EFI System Partition (ESP), which contains EFI applications like GRUB.efi.
    • Supports Secure Boot, GUID Partition Table (GPT), and faster booting.

    📝 Note: UEFI is now the standard for most modern hardware.


    💽 Stage 2: Bootloader – The Linux Gatekeeper

    The bootloader is the program that loads and starts the Linux kernel.

    Common Bootloaders:

    • GRUB (GRand Unified Bootloader) – Most common in Linux systems.
    • systemd-boot – Lightweight bootloader for UEFI systems.
    • LILO (older systems) – Largely deprecated.

    The bootloader:

    • Loads the selected kernel image (e.g., /boot/vmlinuz-linux).
    • Loads the initramfs/initrd – a temporary root filesystem used during early boot.
    • Passes control and parameters (e.g., root device path, kernel options) to the kernel.

    Example of GRUB config:

    linux /boot/vmlinuz-6.1.0 root=/dev/sda2 ro quiet splash
    initrd /boot/initrd.img-6.1.0
    

    🧠 Stage 3: Kernel Initialization – The Heart of Linux

    Now, the Linux kernel takes control.

    What the Kernel Does:

    • Sets up low-level system components: memory management, I/O scheduling, and CPU initialization.
    • Loads drivers for essential hardware (from initramfs).
    • Mounts the real root filesystem (e.g., from ext4, btrfs, XFS).
    • Starts the init process (PID 1) – the first user-space program.

    If anything goes wrong here (like missing root filesystem), you’ll see a kernel panic.


    ⚙ Stage 4: Init System – Orchestrating the System Startup

    The init system is the “conductor” that starts all necessary services.

    Common Init Systems:

    • systemd (default on most modern distros like Debian, Ubuntu, Fedora)
    • SysVinit (traditional)
    • OpenRC (used in Alpine, Gentoo)

    If using systemd, it:

    • Reads unit files from /etc/systemd/system/ and /usr/lib/systemd/system/.
    • Mounts local filesystems, activates swap, configures networking.
    • Starts system services like sshd, NetworkManager, cron, and more.

    You can inspect boot performance using:

    systemd-analyze
    

    🔐 Stage 5: Login Prompt – Ready for Action

    Once all services are up and running:

    • CLI systems: getty spawns login prompts on virtual terminals (e.g., tty1–tty6).
    • GUI systems: A Display Manager (GDM, LightDM, SDDM) launches, leading to your graphical desktop environment (GNOME, KDE, etc.).

    After login, the system is fully operational, ready for your commands or applications.


    🗺 Visual Summary of the Linux Boot Flow

    [ Power On ]
         ↓
    [ BIOS / UEFI ]
         ↓
    [ Bootloader (GRUB/systemd-boot) ]
         ↓
    [ Kernel + initramfs ]
         ↓
    [ Init system (systemd, etc.) ]
         ↓
    [ System Services + Targets ]
         ↓
    [ Login Prompt / GUI ]
    

    🛠 Bonus: Useful Commands to Explore Boot

    • View last boot duration: systemd-analyze
    • See the breakdown of each service’s boot time: systemd-analyze blame
    • Inspect boot logs: journalctl -b
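
    Two more commands that help when a boot does not go as planned (both standard systemd tools):

    systemctl list-units --failed       # units that failed during this boot
    journalctl -b -p err                # boot log entries of priority "error" or worse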

    🧠 Final Thoughts

    The Linux boot process may seem complex, but each stage is logically structured to ensure a flexible, powerful, and modular startup system. Whether you’re debugging a failed boot or optimizing your boot time, understanding this process equips you with the tools to handle your system like a pro.

    If you’re using Linux in embedded projects, servers, or even on low-power SBCs like Raspberry Pi, this knowledge becomes even more critical.

    The post Understanding the Linux Boot Process: From Power On to Login appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    30 May, 2025 03:02PM

    Ubuntu Blog: Apport local information disclosure vulnerability fixes available

    Qualys discovered two vulnerabilities in various Linux distributions which allow a local attacker with permission to create user namespaces to leak core dumps for processes of suid executables. These affect both apport, the Ubuntu default core dump handler (CVE-2025-5054), and systemd-coredump, the default core dump handler in Red Hat Enterprise Linux, Fedora and other Linux distributions (CVE-2025-4598). CVE-2025-4598 does not affect default Ubuntu installations, unless it was manually set as the core dump handler by installing the systemd-coredump package.

    The vulnerabilities have a CVSS score of 4.7 (MEDIUM), requiring local access and high attack complexity. The impact is restricted to the confidentiality of the memory space of invoked suid executables. The exploit demonstrated by Qualys can leak hashed user passwords and, as such, has limited real-world impact.

    Canonical’s security team has released updates for the apport package for all affected Ubuntu releases. This package is installed by default in all Ubuntu Server images and Ubuntu Desktop installations. The update remediates CVE-2025-5054. Information on the affected versions can be found in the CVE page linked above. If you have any of these installed, our recommendation is to update as soon as possible.

    How the exploit works

    Qualys discovered that, when analyzing application crashes, apport attempts to detect if the crashing process was running inside a container before performing consistency checks on it. This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.

    In order to successfully carry out the exploit, an attacker must have permissions to create user, mount and pid namespaces with full capabilities.

    Affected releases

    Release          | Package Name     | Fixed Version
    -----------------|------------------|--------------------------
    Trusty (14.04)   | apport           | Not affected
    Xenial (16.04)   | apport           | 2.20.1-0ubuntu2.30+esm5
    Bionic (18.04)   | apport           | 2.20.9-0ubuntu7.29+esm1
    Focal (20.04)    | apport           | 2.20.11-0ubuntu27.28
    Jammy (22.04)    | apport           | 2.20.11-0ubuntu82.7
    Noble (24.04)    | apport           | 2.28.1-0ubuntu3.6
    Oracular (24.10) | apport           | 2.30.0-0ubuntu4.3
    Plucky (25.04)   | apport           | 2.32.0-0ubuntu5.1
    Trusty (14.04)   | systemd-coredump | Not present
    Xenial (16.04)   | systemd-coredump | Not affected
    Bionic (18.04)   | systemd-coredump | Not affected
    Focal (20.04)    | systemd-coredump | Not released
    Jammy (22.04)    | systemd-coredump | Not released
    Noble (24.04)    | systemd-coredump | Not released
    Oracular (24.10) | systemd-coredump | Not released
    Plucky (25.04)   | systemd-coredump | Not released

    How to check if you are impacted

    On your system, run the following command and compare the listed version to the table above.

    apt list --installed | grep "^\(apport\|systemd-coredump\)"

    How to address

    We recommend you upgrade all packages:

    sudo apt update && sudo apt upgrade

    If this is not possible, the affected component can be targeted. If using apport (the default):

    # on focal and newer:
    
    sudo apt update && sudo apt install --only-upgrade apport python3-apport
    
    # on xenial and bionic:
    
    sudo apt update && sudo apt install --only-upgrade apport python3-apport python-apport

    If using systemd-coredump:

    sudo apt update && sudo apt install --only-upgrade systemd-coredump

    The unattended-upgrades feature is enabled by default for Ubuntu 16.04 LTS onwards. This service:  

    • Applies new security updates every 24 hours automatically.
    • If you have this enabled, the patches above will be automatically applied within 24 hours of being available.

    Mitigation

    The strongest protection is to apply the security updates. The following mitigations have also been explored. If security updates cannot be applied, you should only apply the following steps as a last resort and revert the configuration once updates are applied.

    The ability to obtain core dumps of suid executable processes can be completely disabled by setting the fs.suid_dumpable sysctl to 0. While this completely neutralizes the vulnerability, it also restricts the ability to investigate crashes for all processes which change privileges upon execution.

    echo "fs.suid_dumpable = 0" | sudo tee /etc/sysctl.d/10-cve-2025-5054.conf
    
    sudo sysctl -p /etc/sysctl.d/10-cve-2025-5054.conf
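
    To confirm the setting is active after applying it, read the value back:

    sysctl fs.suid_dumpable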

    Acknowledgements

    We would like to thank Qualys for their excellent reporting and for inviting the Ubuntu Security Team to coordinate this issue.

    Timeline

    • A correction to this blog post was implemented on 2 June 2025 to clarify that CVE-2025-4598 may affect Ubuntu installations where systemd-coredump was manually installed to replace apport as the default core dump handler.

    References

    https://www.qualys.com/apport-systemd-info-disclosure-vulnerabilities-discovered-by-qualys-tru

    https://www.qualys.com/2025/05/29/apport-coredump/apport-coredump.txt

    https://ubuntu.com/security/notices/USN-7545-1

    https://ubuntu.com/security/CVE-2025-5054

    https://ubuntu.com/security/CVE-2025-4598

    30 May, 2025 12:42PM