January 26, 2026

hackergotchi for Maemo developers

Maemo developers

Igalia Multimedia contributions in 2025

Now that 2025 is over, it’s time to look back and feel proud of the path we’ve walked. Last year has been really exciting in terms of contributions to GStreamer and WebKit for the Igalia Multimedia team.

With more than 459 contributions over the year, we’ve been one of the top contributors to the GStreamer project, in areas like Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC, and H.266 support.

Pie chart of Igalia’s contributions to different areas of the GStreamer project: other (30%), vulkan (24%), validate (7%), va (6%), ges (4%), webrtc (3%), h266parse (3%), python (3%), dots-viewer (3%), tests (2%), docs (2%), devtools (2%), webrtcbin (1%), tracers (1%), qtdemux (1%), gst (1%), ci (1%), y4menc (1%), videorate (1%), gl (1%), alsa (1%).
Igalia’s contributions to the GStreamer project

In Vulkan Video we’ve worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There’s now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.

GStreamer Editing Services, the framework for building video editing applications, has gained time remapping support, which makes it possible to include fast/slow-motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer pool tuning and pipeline improvements have helped optimize memory usage and performance, enabling the editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements to the GstValidate testing framework, which will be useful for other parts of GStreamer.

Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia’s work. This allows user applications such as the WebKitGTK web browser to leverage the hardware-accelerated decoding provided by VAAPI to play H.266 video using GStreamer.
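As an illustration of what this enables for applications, a minimal playback sketch using the GStreamer Python bindings could look like the following; playbin selects suitable demuxer, parser and decoder elements automatically, so no H.266-specific code is needed (the file path is a placeholder, and GStreamer 1.26 with the relevant plugins installed is assumed):

#!/usr/bin/env python3
# Minimal playback sketch using the GStreamer Python bindings. Assumes
# GStreamer 1.26 with the H.266 plugins installed; the file path is a
# placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
playbin = Gst.ElementFactory.make("playbin", "player")
playbin.set_property("uri", "file:///path/to/video-h266.mp4")  # placeholder
playbin.set_state(Gst.State.PLAYING)

# Wait until an error or end-of-stream message arrives on the bus.
bus = playbin.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg and msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print(f"Playback error: {err.message}")
playbin.set_state(Gst.State.NULL)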

Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.

Pie chart of Igalia’s contributions to different areas of the GStreamer Rust project: vulkan (28%), other (26%), gstreamer (12%), ci (12%), tracer (7%), validate (5%), ges (7%), examples (5%).
Igalia’s contributions to the GStreamer Rust project

In addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many multimedia-related features of the web engine. Of the 1739 contributions that Igalia made to the WebKit project last year, 323 came from the Multimedia team. Nearly one third of those were related to generic multimedia playback, and the rest covered areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system that provides adaptations for specific hardware multimedia platforms at runtime, WebCodecs, and MediaRecorder.

Pie chart of Igalia’s contributions to different areas of the WebKit project: Generic Gstreamer work (33%), WebRTC (20%), Regression bugfixing (9%), Other (7%), MSE (6%), BuildStream SDK (4%), MediaStream (3%), WPE platform (3%), WebAudio (3%), WebKitGTK platform (2%), Quirks (2%), MediaRecorder (2%), EME (2%), Glib (1%), WTF (1%), WebCodecs (1%), GPUProcess (1%), Streams (1%).
Igalia Multimedia Team’s contributions to different areas of the WebKit project

We’re happy about what we’ve achieved along the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.

26 January, 2026 09:34AM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

[PLACEHOLDER] BL Carbon Release Notes

This post is a placeholder so that links can be added to BL live config popups.

The data will be added at release time.

26 January, 2026 12:00AM

January 23, 2026

hackergotchi for ZEVENET

ZEVENET

High availability is not redundancy — it’s operational decision-making

For years, high availability (HA) was treated as a redundancy problem: duplicate servers, replicate databases, maintain a secondary site and ensure that if something failed, there was a plan B waiting. That model worked when applications were monolithic, topologies were simple, and traffic variability was low. Today the environment looks different: applications are split into services, traffic is irregular, encryption is the norm, and infrastructure is distributed. Availability is no longer decided at the machine level, but at the operational plane.

The first relevant distinction appears when we separate binary failures from degradations. Most HA architectures are designed to detect obvious “crashes,” yet in production the meaningful incidents are rarely crashes—they are partial degradations (brownouts): the database responds, but slowly; a backend accepts connections but does not process; the Web Application Firewall (WAF) blocks legitimate traffic; intermittent timeouts create queues. For a basic health-check everything is “up”; for the user, it isn’t.
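To make the distinction concrete, here is a minimal sketch of a latency-aware probe that reports a brownout separately from a hard failure (the URL and thresholds are illustrative assumptions, not taken from any specific product):

#!/usr/bin/env python3
# Illustrative sketch only: a health probe that distinguishes "healthy",
# "degraded" (brownout) and "down" instead of a binary up/down check.
# The URL and thresholds below are invented for the example.
import time
import urllib.error
import urllib.request

BACKEND_URL = "http://backend.example.internal/health"  # placeholder
TIMEOUT_S = 2.0     # no answer within this time counts as a hard failure
DEGRADED_S = 0.5    # answers slower than this count as a brownout

def probe(url: str) -> str:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                return "down"
            # The backend answered, so a binary check says "up" -
            # but if it answered late, the user already noticed.
            return "degraded" if elapsed > DEGRADED_S else "healthy"
    except (urllib.error.URLError, TimeoutError):
        return "down"

if __name__ == "__main__":
    print(probe(BACKEND_URL))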

From redundancy to operational continuity

Operational degradations in production are not homogeneous. In general, we can distinguish at least six categories:

  • Failure (binary crash)
  • Partial failure (works, but incompletely)
  • Brownout (responds, but not on time)
  • Silent drop (no error, but traffic is lost)
  • Control-plane stall (decisions arrive too late)
  • Data-plane stall (traffic is blocked in-path)

The component that arbitrates this ambiguity is the load balancer. Not because it is the most critical part of the system, but because it is the only one observing real-time traffic and responsible for deciding when a service is “healthy,” when it is degraded, and when failover should be triggered. That decision becomes complex when factors like TLS encryption, session handling, inspection, security controls or latency decoupled from load interact. The load balancer does not merely route traffic—it determines continuity.

In real incidents, operational ambiguity surfaces like this:

Phenomenon | Failure type | Detected by health-check | User impact | LB decision | Real complexity
Backend down | Binary | Yes | High | Immediate failover | Low
Backend slow | Brownout | Partial | High | Late / None | High
Intermittent timeouts | Brownout | Not always | Medium/High | Ambiguous | High
WAF blocking | Security | No | High | None | High
Slow TLS handshake | TLS layer | Partial | Medium | N/A | Medium
Session saturation | Stateful | No | High | Unknown | High
Session transfer | Operational | No | Medium | Late | Medium
DB degradation | Backend | Partial | High | Not correlated | High

There is also persistent confusion between availability and scaling. Scaling answers the question “how much load can I absorb?” High availability answers a completely different one: “what happens when something fails?” An application can scale flawlessly and still suffer a major incident because failover triggered too late, sessions failed to survive backend changes, or the control plane took too long to propagate state.

Encrypted traffic inspection adds another layer. In many environments, TLS inspection and the Web Application Firewall sit on a different plane than the load balancer. In theory this is modular; in practice it introduces a coordination problem. If the firewall blocks part of legitimate traffic, the load balancer sees fewer errors than the system actually produces. If the backend degrades but the firewall masks the problem upstream, there is no clear signal. Availability becomes a question of coupling between planes.

The final problem is often epistemological: who owns the truth of the incident? During an outage, observability depends on who retains context. If the balancing plane, the inspection plane, the security plane and the monitoring plane are separate tools, the post-mortem becomes archaeology: fragmented logs, incomplete metrics, sampling, misaligned timestamps, and three contradictory narratives of the same event.

So what does high availability actually mean in 2026?

For operational teams, the definition that best fits reality is this: High availability is the ability to maintain continuity under non-binary failures.
This implies:

  1. understanding degradation vs true unavailability
  2. basing decisions on traffic and context, not just checks
  3. coordinating security, inspection and session
  4. having observability at the same plane that decides failover
  5. treating availability as an operational problem, not as hardware redundancy

Where does SKUDONET fit in this model?

SKUDONET Enterprise Edition is built around that premise: availability does not depend solely on having an extra node, but on coordinating, in a single operational plane, load balancing at layers 4 and 7, TLS termination and inspection, security policies, certificate management, and traffic observability. The goal is not to abstract complexity, but to place decision-making and understanding in the same context.

In environments where failover is exceptional, this coupling may go unnoticed. But in environments where degradation is intermittent and traffic is non-linear, high availability stops being a passive mechanism and becomes a process. What SKUDONET provides is not a guarantee that nothing will fail—such a guarantee does not exist—but an architecture where continuity depends less on assumptions and more on signals.

A 30-day evaluation of SKUDONET Enterprise Edition is available for teams who want to validate behavior under real workloads.

23 January, 2026 10:39AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

Breaking: XDG Adds Native Support for Linyaps

In the world of Linux desktop computing, there exists a foundational "common language" that underpins all interoperability—the XDG specifications, developed and maintained by the freedesktop.org organization. XDG is the critical standard for solving Linux's ecosystem fragmentation and establishing unified resource access protocols. Whether you are an application developer or a distribution maintainer, ensuring your product runs well on a modern Linux desktop necessitates adherence to the XDG standard. It is the key cornerstone enabling the Linux desktop to evolve from "working in silos" to "unified collaboration." From desktop icons and application menus to system notifications and file dialogs, XDG specifications permeate every facet ...Read more

23 January, 2026 09:54AM by xiaofei

January 21, 2026

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3

import sys
from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was to understand the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError, and sometimes just return False. No idea why. But the above just turns it into a couple of printed lines and a non-zero exit code, unless of course there are no problems, then you get peaceful silence.

21 January, 2026 07:42PM

January 20, 2026

hackergotchi for GreenboneOS

GreenboneOS

CVE-2025-64155: In the Wild Exploitation of FortiSIEM for Unauthenticated Root-Level RCE

On January 13th, 2026, Fortinet publicly disclosed and patched CVE-2025-64155 (CVSS 9.8), affecting FortiSIEM, along with five additional vulnerabilities across its product line [1][2][3][4][5]. In particular, CVE-2025-64155 represents a high-risk exposure; immediately after its release, active exploitation was reported. The flaw was responsibly disclosed to Fortinet almost six months earlier (August 2025) by Horizon3.ai. Greenbone includes […]

20 January, 2026 07:53AM by Joseph Lee

January 19, 2026

hackergotchi for Deepin

Deepin

deepin 25.0.10 Release Note

In order to further optimize the deepin 25 system update experience and enhance stability, the deepin 25.0.10 image is now officially released. This update focuses on system installation experience, file management, system interaction, and stability, optimizing multiple high-frequency usage scenarios, fixing a large number of known issues, and improving system smoothness and reliability.   Key Updates in This Release System Installer: Optimized the prompt text for data formatting during full-disk installation, now supporting the option to retain user data and reuse the original account data, configurations, and files. Comprehensive Upgrade of File Manager: Added practical features such as automatic scrolling during file ...Read more

19 January, 2026 05:41AM by xiaofei

January 15, 2026

hackergotchi for Tails

Tails

Tails 7.4

New feature

Persistent language and keyboard layout

You can now save your language and keyboard layout from the Welcome Screen to the USB stick. These settings will be applied automatically when restarting Tails.

If you turn on this option, your language and keyboard layout are saved unencrypted on the USB stick to help you type the passphrase of your Persistent Storage more easily.

Changes and updates

  • Update Tor Browser to 15.0.4.

  • Update Thunderbird to 140.6.0.

  • Update the Linux kernel to 6.12.63.

  • Drop support for BitTorrent download.

    With the ongoing transition from BitTorrent v1 to v2, the BitTorrent v1 files that we provided until now can become a security concern. We don't think that updating to BitTorrent v2 is worth the extra migration and maintenance cost for our team.

    Direct download from one of our mirrors is usually faster.

Fixed problems

  • Fix opening .gpg encrypted files in Kleopatra when double-clicking or selecting Open with Kleopatra from the shortcut menu. (#21281)

  • Fix the desktop crashing when unlocking VeraCrypt volumes with a wrong password. (#21286)

  • Use 24-hour time format consistently in the top navigation bar and the lock screen. (#21310)

For more details, read our changelog.

Get Tails 7.4

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4 directly:

15 January, 2026 12:00AM

January 14, 2026

hackergotchi for ZEVENET

ZEVENET

Cloud security works, but not as a unified system

Talking about the cloud today is no longer about a technology trend, but about a central piece of the business. More and more companies are moving their infrastructure to cloud providers under the promise of less hardware, less maintenance, fewer licenses and less time spent on activities that do not generate value.

Much of that promise has been fulfilled. Cloud has democratized capabilities that only large organizations could access a few years ago. Launching a service, increasing capacity or deploying a new region is now easier, faster and more accessible.

However, as often happens with technology, the story changes when we zoom into operations. Cloud simplifies infrastructure, but it does not always simplify how that infrastructure is operated. And that nuance affects not only technical teams, but also the business itself.

Cloud providers don’t sell “solutions” — they sell components

The first point of friction does not appear in compute or storage, but in the services that accompany the infrastructure. This includes security, load balancing, TLS certificates, application firewalls, monitoring and observability.

In the cloud provider’s catalog, the technology is there, but it is sold as separate components. Security on one side, certificates on another, observability on another, and advanced capabilities billed as add-ons. The customer is not left without service, but is left with a recurring question: what exactly must be purchased to remain protected and operate reliably?

A less visible aspect also emerges: security is billed per event, per inspection or per volume of traffic. What used to be a hardware expense becomes a bill based on requests, analysis and certificates. Cloud solved hardware, but externalized the operational complexity of security.

Metrics and logs exist, but they are often fragmented, sampled and weakly correlated. Understanding what happened during an incident may require navigating multiple services and data models. Cloud promises security, but it rarely promises explanations.

And at its core this is not a technical problem, but a model problem. Cloud security is commercialized as a product but consumed as a service. And when there is a mismatch between how something is purchased and how it is used, friction eventually appears.

SkudoCloud as an example of the managed approach

This is the context in which SkudoCloud emerges — not to replace the cloud provider or compete as infrastructure, but to resolve the operational coherence between load balancing, security and visibility.

SkudoCloud is a SaaS platform that enables companies to deploy advanced load balancing and application protection without assembling separate modules, tools or services. From a single interface, organizations can:

  • manage SSL/TLS certificates
  • inspect encrypted traffic
  • apply WAF rules
  • distribute load across backends
  • and monitor application behavior

The most evident difference appears in security. In the modular cloud model, the customer must decide what to purchase, which rules to enable, how to correlate logs and how to keep everything updated. In a managed model like SkudoCloud, certificates, WAF, TLS inspection and load balancing behave as one coherent system.

This has direct consequences for the business:

  • it reduces operational uncertainty
  • it improves visibility during incidents
  • and it avoids billing models tied to traffic volume or number of inspections

Instead of acquiring security, companies acquire operability. Instead of assembling components, they obtain an outcome. That is the difference of a managed approach.

Conclusion

Cloud adoption is already a given. The real question now is how to operate it sustainably. Fragmentation was a natural side effect of the migration phase. Unification will likely be the central theme of the operational phase.

Cloud simplified servers. Now it is time to simplify operations.

14 January, 2026 10:25AM by Nieves Álvarez

January 13, 2026

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: December 2025

"Fit and finish" appears in many industries. For much of the software industry, it refers to features that complete a fit for a target audience, ensuring that audience can use the product for their needs. At a frame shop, it means literally fitting the mounted artwork into a frame, then finishing the back of the frame.

At Purism, fit takes on another meaning - making apps fit on screens the size of the Librem 5.

The post PureOS Crimson Development Report: December 2025 appeared first on Purism.

13 January, 2026 10:11PM by Purism

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Forum downtime.

Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.

13 January, 2026 12:00AM

BunsenLabs Carbon Release Candidate 2 iso available for testing

As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon release candidate iso file available for download here: https://sourceforge.net/projects/bunsen … hybrid.iso
sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d

If you have a free machine or VM to install it on, please give it some testing!

And please post any bugs here: https://forums.bunsenlabs.org/viewtopic.php?id=9656
That thread is now closed because having multiple bug reports mixed up together was too confusing. Please post any new bugs related to the Carbon RC2 iso in individual threads in the Bug Reports section, adding a tag [Carbon RC2] .

When it seems as if there aren't any bugs left to squash, we can do an Official Release.

Last edited by johnraff (2026-01-18 05:39:06)

13 January, 2026 12:00AM

January 11, 2026

hackergotchi for SparkyLinux

SparkyLinux

Labwc

There is a new desktop available for Sparkers: Labwc, as well as Sparky 2026.01~dev Labwc ISO image. What is Labwc? Installation on Sparky testing (9): (packages installation only, requires your set up): or (with Sparky settings): via APTus (>= 20260108)-> Desktops-> Labwc or (with Sparky settings): via Sparky testing (9) MinimalGUI/ISO image. Then reboot to take effects…

Source

11 January, 2026 11:42AM by pavroo

January 10, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: Bookdump 2025

Photo of the books presented here

My 2025 reading year, averaging a bit more than one book per week, was comparable to 2024. My best-of of the books I finished in 2025 (the ones I found particularly worth reading or want to recommend; the order follows the photo and does not imply any ranking):

  • Russische Spezialitäten, Dmitrij Kapitelman. What a firework of a book: powerful language, sad, amusing.
  • Die Jungfrau, Monika Helfer. After Helfer’s “Die Bagage”, “Löwenherz” and “Vati”, this book was of course required reading for me.
  • Das Buch zum Film, Clemens J. Setz. Wonderful everyday observations and bons mots – really my only criticism: at 192 pages, too short.
  • Wackelkontakt, Wolf Haas. Yes, yes, a well-known bestseller and so on. But he is and remains one of my favourite authors. I was at his reading in Graz and afterwards even read the book a second time, and didn’t regret it for a second. A language artist, to put it mildly!
  • Fleisch ist mein Gemüse, Heinz Strunk. I love background stories, especially when they are about music and the life of musicians, and that is the case here with the excursion into the dance-band business. Apart from a few exceptions, the reading just flows.
  • Wut und Wertung: Warum wir über Geschmack streiten, Johannes Franzen. Why do conflicts about taste, art and canon escalate? Why is arguing about taste an important cultural technique? Franzen works this out using controversies and scandals that actually happened; instructive and stimulating.
  • Klapper, Kurt Prödel. Fans of Clemens J. Setz naturally know Prödel, and since I also like coming-of-age novels, this was a double hit. I’m already looking forward to his new book “Salto”!
  • Hier treibt mein Kartoffelherz, Anna Weidenholzer. I can’t say anything more about this book, but I really enjoyed reading it.
  • Die Infantin trägt den Scheitel links, Helena Adler. The book had an interesting pull on me; I simply wanted to keep reading. The playful language and wordplay made it even finer.
  • Das schöne Leben, Christiane Rösinger. Rösinger’s books were recommended to me by Kathrin Passig (a direct hit, thanks!). I also got hold of all of Rösinger’s other books (“Berlin – Baku. Meine Reise zum Eurovision Song Contest”, “Zukunft machen wir später: Meine Deutschstunden mit Geflüchteten”, “Liebe wird oft überbewertet”) and very much enjoyed reading them.

10 January, 2026 05:29PM

January 09, 2026

hackergotchi for Proxmox VE

Proxmox VE

New Archive CDN for End-of-Life (EOL) Releases

Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older.

The archive is reachable via the following URLs:

To use the archive for an EOL release, you will need to change the domain in the apt repository configuration...

Read more

09 January, 2026 05:43PM by t.lamprecht (invalid@example.com)

January 08, 2026

hackergotchi for GreenboneOS

GreenboneOS

December 2025 Threat Report: Emergency End-of-Year Patches and New Exploit Campaigns

In 2025, Greenbone increased the total number of vulnerability tests in the OPENVAS ENTERPRISE FEED to over 227,000, adding almost 40,000 vulnerability checks. Since the first CVE was published in 1999, over 300,000 software vulnerabilities have been added to MITRE’s CVE repository. CVE disclosures continued to rocket upward, increasing roughly 21% compared to 2024. CISA […]

08 January, 2026 01:05PM by Joseph Lee

January 06, 2026

hackergotchi for VyOS

VyOS

VyOS 1.4.4 LTS Achieves Nutanix Ready Validation for AOS 7.3

We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.

This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.

06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)

January 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

New utility: xml2xfconf

In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079

It turned out that while xfconf-query doesn't output the type of settings entries, it requires knowing the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough for backing up an xfce app which stores its settings in the xfconf database - which most of them do these days. We need to store the type too. That data is luckily stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/ so to back it up, all we need to do is save that file.

In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/in. There's a better way - if you know the missing type then you can run xfconf-query commands to restore the settings.

So, here is this script called xml2xfconf. Passed an xfconf xml file - eg a backed-up copy of one of those in xfce-perchannel-xml/ - it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately.
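The underlying idea fits in a short sketch. Here is a rough Python illustration of the same approach (this is not the actual xml2xfconf code; it skips array properties and is only meant to show how the xml maps to xfconf-query commands):

#!/usr/bin/env python3
# Rough Python illustration of the xml2xfconf idea: read an xfconf
# per-channel xml file and print xfconf-query commands that recreate the
# settings, types included. This is NOT the actual xml2xfconf script; it
# skips array properties and only shows the general approach.
import shlex
import sys
import xml.etree.ElementTree as ET

def walk(prop, prefix, channel):
    path = f"{prefix}/{prop.get('name')}"
    ptype = prop.get("type")
    value = prop.get("value")
    if ptype and ptype != "empty" and value is not None:
        print("xfconf-query -c {} -p {} -n -t {} -s {}".format(
            shlex.quote(channel), shlex.quote(path),
            shlex.quote(ptype), shlex.quote(value)))
    for child in prop.findall("property"):   # nested properties
        walk(child, path, channel)

root = ET.parse(sys.argv[1]).getroot()       # <channel name="..." ...>
channel = root.get("name")
for prop in root.findall("property"):
    walk(prop, "", channel)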

Example usage:

restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"

Here's what got written into $restore:

xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000

xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf and with luck the bug which @Dave75 found will go away.

And then the Carbon release can get rolling again.

It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings.

03 January, 2026 12:00AM

January 02, 2026

hackergotchi for ZEVENET

ZEVENET

How to Evaluate a WAF in 2026 for SaaS Environments

Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks have increased, driven by automation, accessible tooling, and cloud-specific attack vectors.

Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.

As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.

Evaluating a WAF in 2026 is fundamentally different

Traditional evaluations focused heavily on rule coverage or whether a solution “covers OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.

A modern evaluation must answer practical, operational questions:

  • Can the WAF block malicious traffic without breaking legitimate flows?
  • Does it behave consistently in prevention mode and under load?
  • Can its decisions be observed, explained, and audited?

In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.

Why most SaaS WAF evaluations fall short

Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:

  • Testing in monitor-only mode instead of prevention
  • Relying on default configurations with no real traffic
  • Ignoring operational limits until production
  • Inability to trace why a request was blocked

In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.

A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.

Detection quality is defined by false positives, not demos

Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.

A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.

Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not just a usability issue, especially in API-driven systems, where payload structure, schemas, and request volume amplify their cost.
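As a back-of-the-envelope illustration of how these two rates interact at scale (all counts below are invented for the example):

#!/usr/bin/env python3
# Toy illustration of why the False Positive Rate matters at scale.
# All counts below are invented for the example.
def tpr(tp: int, fn: int) -> float:
    """True Positive Rate: share of attack requests that were blocked."""
    return tp / (tp + fn)

def fpr(fp: int, tn: int) -> float:
    """False Positive Rate: share of legitimate requests that were blocked."""
    return fp / (fp + tn)

# Hypothetical test run: 1,000 attacks, 10,000,000 legitimate requests.
print(f"TPR: {tpr(tp=980, fn=20):.3f}")            # 0.980 - looks great in a demo
print(f"FPR: {fpr(fp=10_000, tn=9_990_000):.4f}")  # 0.0010 - "only" 0.1%
# ... yet 0.1% of 10 million legitimate requests is 10,000 broken calls.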

At scale, even a low False Positive Rate (FPR) can result in:

  • Broken user flows
  • Failed API calls
  • Increased operational load
  • Pressure to weaken or disable protections

This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.

A realistic PoC should include scenarios like:

Source of false positives | Real-world example | What to test
Complex request bodies | Deep JSON, multipart forms | Recorded API and UI traffic
Business logic flows | Search, filtering, checkout | End-to-end navigation
Uploads | PDFs, images, metadata | Real upload paths
Atypical headers | Large cookies, custom headers | Reverse proxy captures

In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.

SKUDONET Cloud Solution

SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.

That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.

To support that evaluation, we have documented the full methodology in our technical guide:

👉 Download the full guide:

02 January, 2026 11:03AM by Nieves Álvarez

January 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/12

The 12th monthly Sparky project and donate report of 2025: – Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS – Added to “dev” repos: COSMIC desktop – Sparky 2025.12 & 2025.12 Special Editions released Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.

Source

01 January, 2026 08:08PM by pavroo

December 29, 2025

hackergotchi for ZEVENET

ZEVENET

Why Application Delivery Visibility Breaks in Secure Architectures

Modern application delivery architectures are built with the right goals in mind. Load balancers distribute traffic, Web Application Firewalls enforce security policies, TLS protects data in transit, and monitoring systems promise observability across the stack. On paper, everything seems covered.

In real production environments, however, many of these architectures operate with critical blind spots, especially when security components start making decisions that engineers cannot fully see, trace, or explain. This is rarely caused by a lack of tools. More often, it is the result of how security is embedded into the delivery path.

As security becomes more deeply integrated into application delivery, visibility does not automatically follow.

When security turns into a black box

In most production environments, security is no longer a separate layer. WAFs sit directly in the traffic path, inspecting requests, evaluating rules, applying reputation checks and deciding — in real time — whether traffic is allowed or blocked. TLS inspection happens inline, and policies are often updated automatically.

The problem is not that these decisions exist. The problem is that, very often, they cannot be clearly explained after the fact.

In many deployments, teams quickly run into the same limitations:

  • Engineers cannot determine which specific rule caused a request to be blocked
  • WAF logic is exposed only through high-level categories or abstract scores
  • Encrypted traffic is inspected, but the inspection process itself remains invisible
  • Logs are available, but without enough context to correlate decisions with behaviour

The result is a paradox that experienced teams recognize immediately: security coverage increases, while operational visibility decreases.

Common Security Blind Spots in Application Delivery Architectures

These blind spots rarely appear during normal operation. They tend to surface under pressure: traffic spikes, false positives, performance degradation or partial outages. When they do, troubleshooting becomes significantly more complex, because the information engineers need is often incomplete or fragmented.

1. Encrypted traffic without explainability

TLS encryption is essential, but it fundamentally changes how visibility works. In many application delivery stacks, traffic is decrypted at some point, inspected, and then re-encrypted. Security decisions are made, but the path between request, rule and outcome is not always traceable.

When something breaks, engineers are often left with little more than a generic message: “Request blocked by WAF.”

What is missing is the ability to correlate:

  • the original request,
  • the specific rule or condition involved,
  • the security decision that was applied,
  • and the downstream impact on the application.

Without that correlation, root cause analysis turns into guesswork rather than engineering.
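As an illustration of the kind of correlation meant here, a single decision record could bundle those four pieces of context; the field names and values below are hypothetical and do not correspond to any particular WAF or SKUDONET log format:

#!/usr/bin/env python3
# Hypothetical example of a correlated security-decision record. Field names
# and values are illustrative only and do not reflect any specific WAF or
# SKUDONET log format.
import json

decision_record = {
    "request": {                  # the original request
        "id": "req-7f3a2c",
        "method": "POST",
        "path": "/api/v1/orders",
        "tls": {"version": "TLSv1.3", "sni": "shop.example.com"},
    },
    "rule": {                     # the specific rule or condition involved
        "id": "942100",
        "phase": "request-body",
        "reason": "SQL injection pattern in JSON field 'note'",
    },
    "decision": {                 # the security decision that was applied
        "action": "block",
        "http_status": 403,
    },
    "impact": {                   # the downstream impact on the application
        "backend": "orders-pool/node-2",
        "forwarded": False,
        "user_visible_error": True,
    },
}

print(json.dumps(decision_record, indent=2))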

2. Abstracted or hidden WAF rule logic

Many WAF platforms expose protection logic through simplified models such as risk scores, rule categories or predefined profiles. While these abstractions make dashboards easier to read, they remove critical detail from day-to-day operations.

When rule logic cannot be inspected directly:

  • false positives are harder to tune with precision,
  • rule conflicts remain invisible,
  • behaviour changes appear without an obvious trigger.

Over time, this erodes trust in automated protection. Teams stop understanding why something happens and start compensating by weakening policies instead of fixing the underlying issue.

3. Security decisions that impact delivery

Security controls do more than allow or block requests. They influence how connections are handled, how retries behave, how sessions persist, and how backend health is perceived by the delivery layer.

When these effects are not visible, delivery problems are often misdiagnosed:

  • backend instability may actually be selective blocking,
  • uneven load distribution may come from upstream filtering,
  • timeouts may be caused by inspection delays under load.

Engineers end up debugging load balancing logic or application behaviour, while the real cause sits silently inside the security layer.

4. Logs without operational context

Logs are often treated as a substitute for visibility. In practice, they are frequently:

  • sampled or rate-limited,
  • delayed,
  • detached from real-time behaviour,
  • owned or processed externally.

A log entry that explains what happened, but not why, is not observability; it is a post-mortem artifact. In critical environments, teams need actionable insight while an incident is unfolding, not hours later.

What Modern WAF Architectures Must Provide

A WAF integrated into the application delivery path should not act as an opaque enforcement layer. Instead, it should provide visibility at each critical stage of the decision process.

In practical terms, this means enabling teams to:

  • Trace each security decision end to end, from the incoming request to the final action applied.
  • Inspect WAF rule logic directly, without relying on abstract categories or risk scores.
  • Correlate blocked or modified requests with delivery behaviour, such as backend health, session persistence or retries.
  • Analyze encrypted traffic transparently, without losing context once TLS inspection is performed.
  • Maintain consistent visibility under load, during traffic peaks or active incidents.

Without these capabilities, security controls may protect applications, but they also introduce operational blind spots that slow down troubleshooting and increase risk.

Visibility in Secure Application Delivery with SKUDONET

SKUDONET Enterprise Edition is designed around a simple principle: security must protect traffic without breaking visibility.

Instead of treating security as a separate black box, SKUDONET integrates WAF and traffic management into a single, observable application delivery platform. This approach ensures that security decisions remain transparent, traceable and actionable for engineers working in real production conditions.

SKUDONET Application Delivery Visibility

Key aspects of this design include:

  • Full visibility and control over WAF rules and behaviour, allowing administrators to inspect rule logic, modify or disable existing rules, and define new ones based on real traffic patterns and application requirements.
  • Clear correlation between security decisions and application delivery impact, making it possible to understand exactly where a request failed, why it was blocked or modified, and how that decision affected backend behaviour.
  • Transparent inspection of encrypted traffic, preserving full request context throughout the entire lifecycle, from decryption to enforcement and delivery.
  • Actionable logging and diagnostics, designed to explain not only what happened, but why it happened, enabling effective tuning, troubleshooting and auditing.

By removing opacity from security enforcement, SKUDONET helps teams retain control over both protection and performance—especially in high-traffic or business-critical environments where visibility is essential.

A 30-day, fully functional evaluation of SKUDONET Enterprise Edition is available for teams who want to validate this level of visibility and control under real workloads.

29 December, 2025 11:31AM by Nieves Álvarez

December 26, 2025

hackergotchi for SparkyLinux

SparkyLinux

COSMIC

There is a new desktop available for Sparkers: COSMIC. What is COSMIC? COSMIC desktop is available via Sparky ‘dev’ repositories, so the repo has to be enabled to install the COSMIC desktop on top of Sparky 8 stable or testing (9). It uses the Wayland session by default. The Sparky meta package installs the ‘xwayland’ package by default, but some applications can not work or launch. That’s why I…

Source

26 December, 2025 04:21PM by pavroo

December 25, 2025

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Carbon version of bunsen-blob is now available

BLOB, the utility that lets people try different desktop theming sets (eg go back to the Boron look, or even Crunchbang) has been upgraded to 13.1-1 on the Carbon repository. This brings a lot of improvements, like support for xfce4-panel profiles, xfconf settings for xfce4-terminal, flexibility over wallpaper settings with a switch to feh by default, and more.

See the changelog: https://github.com/BunsenLabs/bunsen-bl … /changelog
or all the commits: https://github.com/BunsenLabs/bunsen-bl … ts/carbon/

Right now "BLOB Themes Manager" is commented out of jgmenu's prepend.csv, but if you're on a Carbon system you can install the package 'bunsen-blob' and uncomment that menu line to use it. Please check it out.

Soon, an upgraded bunsen-configs will include it in the menu by default, and it will be added to the meta - and iso - package lists. A Release Candidate Carbon iso is not far away...

25 December, 2025 12:00AM

December 21, 2025

hackergotchi for ArcheOS

ArcheOS

The OpArc Project at the SFSCON

Hello everyone, 

this short post is to let anyone interested know that our presentation at SFSCON, held in Bolzano/Bozen at the NOI Techpark, has been published online. SFSCON stands for South Tyrol Free Software Conference and is one of Europe's most established annual conferences on Free Software.

This year we at Arc-Team decided to participate with a talk that summarized our approximately 20 years of experience in applying the Open Source philosophy to archaeology (both in the software and hardware fields).

The presentation was titled "Arc-Team and the OpArc Project" and can be viewed both on the conference's official website (where you can also download a PDF version) and on the conference's YouTube channel.

I hope the presentation can be interesting for someone. Have a nice day! 

21 December, 2025 02:47PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for Qubes

Qubes

Qubes OS 4.3.0 has been released!

We’re pleased to announce the stable release of Qubes OS 4.3.0! This minor release includes a host of new features, improvements, and bug fixes. The ISO and associated verification files are available on the downloads page.

What’s new in Qubes 4.3?

  • Dom0 upgraded to Fedora 41 (#9402).
  • Xen upgraded to version 4.19 (#9420).
  • Default Fedora template upgraded to Fedora 42 (older versions not supported).
  • Default Debian template upgraded to Debian 13 (versions older than 12 not supported).
  • Default Whonix templates upgraded to Whonix 18 (older versions not supported).
  • Preloaded disposables (#1512)
  • Device “self-identity oriented” assignment (a.k.a. New Devices API) (#9325)
  • Qubes Windows Tools reintroduced with improved features (#1861).

These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.

How to get Qubes OS 4.3.0

  • If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.3.0 ISO and follow our installation guide.

  • If you’re currently using Qubes 4.2, learn how to upgrade to Qubes 4.3.

  • If you’re currently using a Qubes 4.3 release candidate (RC), update normally (which includes upgrading any EOL templates and standalones you might have) in order to make your system effectively equivalent to the stable Qubes 4.3.0 release. No reinstallation or other special action is required.

In all cases, we strongly recommend making a full backup beforehand.

Known issues in Qubes OS 4.3.0

Templates restored in 4.3.0 from a pre-4.3 backup may continue to target their original Qubes OS release repos (#8701). After restoring such templates in 4.3.0, you must enter the following additional commands in a dom0 terminal:

sudo qubes-dom0-update -y qubes-dist-upgrade
sudo qubes-dist-upgrade --releasever=4.3 --template-standalone-upgrade -y

This will automatically choose the templates that need to be updated. The templates will be shut down during this process.

Fresh templates on a clean 4.3.0 installation are not affected. Users who perform an in-place upgrade from 4.2 to 4.3 (instead of restoring templates from a backup) are also not affected, since the in-place upgrade process already includes the above fix in stage 4. For more information, see issue #8701.

View the full list of known bugs affecting Qubes 4.3 in our issue tracker.

Support for older releases

In accordance with our release support policy, Qubes 4.2 will remain supported for six months after the release of Qubes 4.3, until 2026-06-21. After that, Qubes 4.2 will no longer receive security updates or bug fixes.

Whonix templates are created and supported by our partner, the Whonix Project. The Whonix Project has set its own support policy for Whonix templates in Qubes. For more information, see the Whonix Support Schedule.

Thank you to our partners, donors, contributors, and testers!

This release would not be possible without generous support from our partners and donors, as well as contributions from our active community members, especially bug reports from our testers. We are eternally grateful to our excellent community for making the Qubes OS Project a great example of open-source collaboration.

21 December, 2025 12:00AM

December 20, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12 Special Editions

There are new iso images of Sparky 2025.12 Special Editions out there: GameOver, Multimedia and Rescue. This release is based on Debian testing “Forky”. The December update of Sparky Special Edition iso images features Linux kernel 6.17, updated packages from Debian and Sparky testing repos as of December 20, 2025, and most changes introduced in the 2025.12 release. The Linux kernels 6.18.2, 6.

Source

20 December, 2025 10:10PM by pavroo

hackergotchi for Purism PureOS

Purism PureOS

A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right

In her book Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of Silicon Valley, published in 2000, Borsook, who is based in Palo Alto, California and has previously written for Wired and a host of other industry publications, took aim at what she saw as disturbing trends in the tech industry.

The post A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right appeared first on Purism.

20 December, 2025 05:10AM by Purism

December 19, 2025

hackergotchi for GreenboneOS

GreenboneOS

New Actively Exploited CVSS 10 Flaw in Cisco AsyncOS Spam Quarantine Remote Access

Update, January 26, 2026: On January 15, 2026, Cisco released patches for CVE-2025-20393 (CVSS 10). Cisco recommends upgrading to a fixed release. The patches are intended to remove the persistence mechanisms observed in the campaign. There are no workarounds; patching is required for complete mitigation. Fixed versions are: • Cisco Secure Email Gateway (SEG) […]

19 December, 2025 11:54AM by Joseph Lee

hackergotchi for VyOS

VyOS

VyOS 1.4.4 released: syslog over TLS, AWS GLB support, and 50+ bug fixes

Hello, Community!

Customers and holders of contributor subscriptions can now download VyOS 1.4.4 release images and the corresponding source tarball. This release adds TLS support for syslog, support for the AWS gateway load balancer tunnel handler (on AWS only), an option to match BGP prefix origin validation extended communities in route maps, and more. It also fixes over fifty bugs. Additionally, there is now proper validation to prevent manually assigned multicast addresses, which may break some old malformed configs, so watch out for that. Last but not least, there is a deprecation warning for SSH DSA keys, which will stop working in VyOS releases after 1.5 due to changes in OpenSSH, so make sure to update your user accounts to keys with a more secure algorithm while you still have time.

19 December, 2025 09:22AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for ZEVENET

ZEVENET

SKUDONET 2025: A Technical Recap of Product, Security and Platform Growth

2025 has been a defining year for SKUDONET — not because of a single announcement or isolated launch, but due to sustained progress across product development, security reinforcement and strategic expansion.

Throughout the year, our focus has remained consistent: strengthening the core platform, improving the operational experience for administrators, and ensuring that security and reliability evolve in line with real-world infrastructure demands.

This approach has translated into continuous, incremental improvements rather than disruptive changes, allowing teams to adopt new capabilities without compromising stability in production environments.

Product Evolution Throughout 2025

Over the course of eight product releases, SKUDONET continued to mature as an application delivery and security platform designed for critical environments.

Across these updates, we introduced:

  • 11 new features
  • 31 functional improvements
  • 23 stability fixes
  • 31 resolved security vulnerabilities (CVEs)

Rather than isolated enhancements, these updates reflect a continuous effort to simplify daily operations while reinforcing security and performance at scale.

Key areas of evolution included a renewed Web GUI, designed to be faster, more consistent and easier to navigate in complex environments, as well as meaningful progress in RBAC, enabling more precise and adaptable access control models.

Certificate management also saw significant improvements, with extended Let’s Encrypt automation, broader DNS provider support and fully automated renewal workflows. Alongside this, we reduced execution times for critical operations such as farm start/stop actions and API metric retrieval.

Security and Reliability Reinforcement

During the year, 31 CVEs were resolved, continuously hardening the platform’s attack surface. Beyond vulnerability remediation, SKUDONET focused on reinforcing internal consistency and predictability under load.
Key improvements were made across:

  • Traffic inspection and validation pipelines, improving consistency and traceability when processing and filtering requests
  • Logging and remote log forwarding, ensuring more reliable log handling and easier integration with external logging systems
  • Internal module stability, particularly within IPDS and RBAC, reducing edge-case behaviour under load

Several updates also introduced additional hardening measures, including:

  • Improved handling of client headers, mitigating spoofing and trust issues in proxied environments
  • More secure cookie insertion in HTTP/S services, with stronger defaults to reduce exposure to common web vulnerabilities
  • Stricter security defaults in the management interface, reinforcing protection of administrative access

Together, these enhancements contribute to a platform that behaves more predictably under pressure and is easier to audit and troubleshoot in production.

Automation and Operational Efficiency

Reducing operational overhead for administrators was another consistent theme throughout 2025.
Several improvements were introduced to simplify day-to-day operations and reduce manual intervention, including:

  • AutoUpdate, enabling systems to automatically check, download and install updates, helping teams stay current with security patches and platform improvements while minimizing maintenance windows
  • End-to-end SSL/TLS automation, covering the full certificate lifecycle from creation to renewal and notification, reducing manual certificate management effort
  • Performance optimizations in the Stats API and backend metrics collection, making integrations with monitoring and automation tools faster and more efficient

Together, these enhancements allow teams to spend less time on routine maintenance tasks and more time on capacity planning, optimization and higher-level architectural decisions.

SkudoCloud: A Strategic Step Forward

One of the most significant milestones of the year was the launch of SkudoCloud, SKUDONET’s fully managed SaaS platform for application delivery and security.

SkudoCloud introduces a new operational model in which teams can deploy secure application delivery infrastructure in minutes, without managing the underlying system lifecycle. From the first deployment, users benefit from:

  • Fully managed application delivery and security, removing the need to operate and maintain the platform
  • Integrated traffic management and protection, aligned with the same delivery and security principles as SKUDONET Enterprise Edition
  • Immediate availability of advanced security controls, applied from the initial deployment
  • A simplified operational model, focused on usage rather than infrastructure management

This launch represents a strategic expansion of the SKUDONET ecosystem, complementing on-premise and self-managed deployments with a cloud-native option designed for teams that prioritize simplicity, speed and operational focus.

Expanding Our Global Partner Ecosystem

Alongside product evolution, SKUDONET continued to expand its international presence.

During 2025, seven new partners joined our ecosystem across Europe, Asia and Latin America, strengthening our ability to support customers globally while maintaining close technical collaboration at a regional level:

  • 🇪🇸 Virtual Cable (Spain)
  • 🇹🇷 Fortiva (Turkey)
  • 🇮🇩 SINUX (Indonesia)
  • 🇪🇸 Secra Solutions (Spain)
  • 🇮🇳 Bluella (India)
  • 🇹🇼 Global OMC TECH Inc. (Taiwan)
  • 🇵🇪 BCloud Services SAC (Peru)

This growth reflects increasing demand for open, transparent and flexible application delivery solutions across diverse markets.

Looking Ahead to 2026

2026 will begin with an important milestone: the launch of the SkudoManager, the SKUDONET Central Console.

This unified interface will enable teams to manage multiple nodes, services and products from a single control plane, providing global infrastructure visibility, centralized user and policy management, and integrated monitoring of farms, certificates, security and performance.

Alongside this, we will continue expanding SkudoCloud and reinforcing the Enterprise Edition’s core architecture, staying aligned with our principles of transparency, performance and security.

Thank You for Being Part of the Journey

The progress achieved in 2025 has been possible thanks to our customers, partners and community. We look forward to continuing this journey together in 2026, building an application delivery and security platform that evolves with real operational needs.

19 December, 2025 09:13AM by Nieves Álvarez

December 16, 2025

hackergotchi for Purism PureOS

Purism PureOS

2025 Year-End Sale

Announcing Purism's Year End Sale. Offering 15% off your purchases through the end of the year. Just use YEAREND as your coupon code for hardware purchases through December 31, 2025!
Please note that orders placed after December 17th will not ship until January.

The post 2025 Year-End Sale appeared first on Purism.

16 December, 2025 09:17PM by Purism

hackergotchi for GreenboneOS

GreenboneOS

Greenbone’s OPENVAS SCAN Now Supports the Proxmox VE Hypervisor

Users appreciate when software can easily integrate into their existing IT environment. For vendors, this means supporting a cross-platform mix of operating systems and infrastructure. We’re excited to expand our virtualization platform support, bringing Proxmox VE into our family of supported hypervisors. This addition enables more flexibility for deploying OPENVAS SCAN in diverse IT environments. […]

16 December, 2025 09:11AM by Greenbone AG

hackergotchi for ZEVENET

ZEVENET

Open-Source Software Licensing in the SaaS Era

Open-source software has been one of the most transformative forces in the technology sector. Operating systems, databases, web servers, and encryption libraries that we now consider essential exist thanks to thousands of developers who chose to release their code so that anyone could study it, modify it, and improve it.

This model has enabled companies and organizations to build advanced solutions without relying exclusively on proprietary software. However, this openness also introduces a recurring challenge: how to ensure the sustainability of open-source software in a world where software is no longer distributed, but consumed as a service.

In this context, recent discussions around well-known projects have brought renewed attention to licenses such as AGPL (Affero General Public License), specifically designed to respond to this shift in how software is delivered and consumed. Beyond individual cases, the underlying message is clear: open-source software requires a balance between those who contribute and those who use it.

What Do We Mean by “Open Source” in 2025?

When people talk about open-source software, they often confuse it with software that is simply free of cost. In reality, the term refers to a set of fundamental freedoms:

  • The freedom to run the program for any purpose
  • The freedom to study how it works
  • The freedom to modify it
  • The freedom to redistribute it, with or without changes

These freedoms are defined and enforced through licenses. Some licenses prioritize maximum adoption, while others aim to ensure that the ecosystem remains collaborative and sustainable.

During the 1990s and early 2000s, the dominant model was traditional software distribution: installers, CDs, and packaged binaries. In that context, licenses such as GPL ensured that if you redistributed the software, you were required to release your modifications.

Today, this landscape has changed completely: most software is delivered as a service. Companies can benefit from open-source software without ever distributing it; they simply run it on their own infrastructure. This shift is precisely where modern licensing models come into play.

MIT, Apache, GPL, AGPL: What Actually Sets Them Apart

When discussing open-source licenses, we are not just talking about legal text, but about different collaboration models.
Broadly speaking, there are two major families of licenses:

A) Permissive Software licenses (MIT, BSD, Apache 2.0)

Permissive licenses such as MIT, BSD, or Apache 2.0 allow organizations to take the code, modify it, integrate it into proprietary products, and redistribute it without any obligation to return improvements to the community. They are attractive to companies that want minimal legal constraints and maximum flexibility.

Their primary goal is typically to encourage widespread adoption, leaving the decision to contribute entirely up to each organization.

B) Copyleft Software licenses (GPL, LGPL, AGPL)

Copyleft licenses such as GPL, LGPL, and AGPL follow a different logic. If a project benefits from community-driven development, the improvements made on top of that code should remain accessible to the community. The intent is not to restrict commercial use, but to prevent open-source code from being absorbed into closed solutions without any return to the original project.

AGPL (Affero GPL) emerged to address a specific change in context: the transition from distributed software to software offered as a service. Traditional GPL licenses focused on redistribution—if you shipped a binary to a third party, you were required to provide the source code and your modifications.

Permissive vs Copyleft software license

What was the problem?

In the SaaS model, many companies began using open-source software internally or as part of web services without ever “distributing” it. They simply ran it on their own servers and exposed functionality through APIs or web interfaces. In these cases, modifications could be made without being shared.

AGPL extends the scope of reciprocity: if you offer the software to users over a network, your modifications must be made accessible.

When Open Source Forces a Change in Licensing Models

As open-source software has become deeply embedded in enterprise infrastructures, many projects have reached a point where usage grows faster than contributions. This is a natural outcome of how technology is consumed today, particularly in SaaS and cloud environments.

In recent years, several mature projects have adopted AGPL or dual-licensing models after observing recurring patterns:

  • The software becomes a critical component in enterprise environments
  • Internal or commercial forks evolve independently
  • Improvements remain isolated and never reach the core project
  • The cost of ongoing development falls almost entirely on the original maintainers

The result is that, despite widespread adoption, the project lacks the resources required for long-term sustainability.

In this context, adopting reciprocal licenses such as AGPL is not a restrictive move, but a mechanism to preserve the continuity of the project.

The Role of AGPL in Cloud and SaaS Ecosystems

The shift toward cloud and SaaS models has fundamentally transformed the software lifecycle. Many historical licenses were not designed for environments where software operates exclusively as a remote service.

Licenses such as AGPL introduce mechanisms intended to protect the integrity of open-source ecosystems in this new context. Their purpose is not to limit commercial use, but to ensure that significant improvements do not remain locked inside private implementations—especially when the software underpins critical infrastructure.

From a technical and organizational perspective, this approach provides several benefits:

  • Project cohesion, by preventing parallel versions from diverging into incompatible branches
  • Greater transparency and security, as shared improvements can be audited and reviewed collectively
  • Reduced duplication of effort, allowing innovation to build on a common foundation
  • A more balanced ecosystem, where the cost of evolution is not borne by a single actor

As a result, more projects are adopting hybrid models that combine open foundations, reciprocity mechanisms, and commercial offerings that fund ongoing development. This is not an exception, but a natural response to how software is built and maintained today.

Open Source and Long-Term Sustainability

Open-source software remains a fundamental driver of innovation. Its sustainability, however, depends on maintaining a fair balance between those who create and those who rely on it.

As SaaS models continue to redefine software consumption, licensing frameworks evolve alongside them. AGPL and other reciprocal licenses do not aim to restrict adoption, but to ensure that critical projects can continue to grow, improve, and remain viable over time.

Ultimately, the goal is to protect the continuity of the open-source ecosystem in a technical landscape that is changing rapidly.

At SKUDONET, we work closely with open-source technologies in security and application delivery. Understanding how licensing models evolve is a key part of building sustainable infrastructure.

16 December, 2025 07:13AM by Nieves Álvarez

December 15, 2025

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: November 2025

With our sights set on the beta release milestone, one key component still remains: a way to upgrade from Byzantium to Crimson.

If you're a Linux expert, you might already know how Debian handles release upgrades. Some eager individuals have already upgraded from Byzantium to the Crimson alpha this way. However, we need an easy, graphical upgrade procedure, so everyone with a Librem 5 can get the improvements coming in Crimson.

The post PureOS Crimson Development Report: November 2025 appeared first on Purism.

15 December, 2025 08:59PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Native integration available between Canonical LXD and HPE Alletra MP B10000

The integration combines efficient open source virtualization with high performance, enterprise-grade storage

We are pleased to announce a new native integration between Canonical LXD and HPE Alletra. This integration brings together Canonical’s securely designed, open source virtualization platform with HPE’s enterprise-grade storage to deliver a simple, scalable, high-performance experience.

Enterprise-grade storage meets efficient open source virtualization

HPE Alletra is designed to deliver mission-critical storage at mid-range economics, with a consistent data experience across various cloud environments. With this integration, Canonical LXD and MicroCloud users can now provision and manage Alletra block storage directly through the LXD interface, without the need for any third-party plugins or additional abstraction layers.

The integration enables users to seamlessly create, attach, snapshot, and manage thin-provisioned block volumes as easily as working with local storage, while retaining the full performance, resilience, and enterprise data services of HPE Alletra. 
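
As an illustration, volume management looks the same as with any other LXD storage driver. The sketch below is only an assumption-laden example: the driver name “alletra” and the gateway/credential config keys are placeholders following the pattern of LXD’s other remote storage drivers, so check the LXD documentation for the exact names.

# Create a storage pool backed by HPE Alletra (driver name and config keys are assumptions)
lxc storage create alletra-pool alletra \
    alletra.gateway=https://alletra.example.com \
    alletra.user.name=admin alletra.user.password=secret

# Launch a VM on the new pool, then create and attach an extra thin-provisioned block volume
lxc launch ubuntu:24.04 db-vm --vm -s alletra-pool
lxc storage volume create alletra-pool data-vol size=50GiB
lxc storage volume attach alletra-pool data-vol db-vm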

Simplified operations and scalable performance

HPE Alletra’s NVMe-based architecture ensures sub-millisecond latency for demanding workloads, with built-in services such as thin provisioning and data deduplication minimizing storage costs while maintaining consistent performance. Paired with LXD’s lightweight control plane and streamlined UI, users can easily operate their environments, combining the best of open source with enterprise storage functionality.

As demands grow, new volumes and storage capacity can be allocated on the fly. Alletra’s scale-out, modular architecture ensures the platform can expand without disrupting running workloads. 

A foundation for modern infrastructure deployments 

LXD provides a unified open source virtualization platform to run system containers and virtual machines with a focus on efficiency, security, and ease of management. With native support for HPE Alletra, enterprises can now build, deploy, and manage their workloads with enterprise storage guarantees, whether in private clouds, on-premises data centers, or edge environments. 

The combined solution empowers teams to deliver predictable performance for critical and data-intensive workloads while reducing complexity and ensuring agility.

Availability

The integration between LXD and HPE Alletra is available starting with the LXD 6.6 feature release and requires HPE Alletra WSAPI version 1. LXD currently supports connecting to HPE Alletra storage through the NVMe/TCP or iSCSI protocols. For detailed information, visit Canonical’s LXD documentation.

15 December, 2025 03:57PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12

There are new SparkyLinux 2025.12 codenamed “Tiamat” ISO images available of the semi-rolling line. This new release is based on the Debian testing “Forky”. Main changes: – Packages updated from the Debian and Sparky testing repositories as of December 14, 2025. – Linux kernel 6.17.11 (6.18.1, 6.12.62-LTS & 6.6.119-LTS in Sparky repos) – Firefox 140.5.0esr (146.0 in Sparky repos) …

Source

15 December, 2025 09:59AM by pavroo

December 14, 2025

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.

14 December, 2025 03:48PM

December 13, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Why you should retire your Microsoft Azure Consumption Commitment (MACC) with Ubuntu Pro

When your organization first signed its Microsoft Azure Consumption Commitment (MACC), it was a strategic step to unlock better pricing and enable cloud growth. However, fulfilling that commitment efficiently requires planning. Organizations often look for ways to retire their MACC that drive strategic value, rather than simply increasing consumption to meet a deadline.

The goal is to meet your commitment while delivering long-term benefits to the business.

With Ubuntu Pro in the Azure Marketplace, you can retire your MACC at 100% of the pretax purchase amount. In practice, this allows you to meet consumption goals on your standard Azure invoice, while securing your open source supply chain and automating compliance.

Turn a spend target into an open source security strategy

Instead of simply increasing consumption to hit a target, effective IT and FinOps teams align their MACC with broader strategic goals. Open source support and security maintenance is a priority for enterprises, as a recent Linux Foundation report shows: 54% of enterprises want long-term guarantees, and 53% expect rapid security patching.

Ubuntu Pro offers both. By choosing software that strengthens your security and operations, you can retire your MACC while funding capabilities your organization prioritizes.

Allocating MACC to Ubuntu Pro is a direct investment in your open source estate:

  • Expanded Security Maintenance (ESM): extend security coverage to the critical open source applications running above the operating system layer. ESM provides up to 15 years of security updates for the OS, plus tens of thousands of packages. You might already see alerts for these missing updates in your Azure portal – learn how to check your exposure in our blog post, A complete security view for every Ubuntu LTS VM on Azure.
  • Kernel Livepatch: reduce maintenance windows by applying critical kernel patches without requiring a reboot for most workloads.
  • Compliance tooling: access options for CIS hardening and FIPS 140-3 validated cryptographic modules to support meeting compliance and regulatory needs.
  • Optional enterprise support: add enterprise SLAs, direct access to Canonical engineers for break-fix and bug-fix, and guidance on operating Ubuntu and ESM-covered packages on Azure.

By choosing Ubuntu Pro, you convert your MACC spend into a maintained open source foundation across the development lifecycle.
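
On an Ubuntu Pro-enabled Azure VM, the services listed above can be inspected and switched on with the standard Ubuntu Pro client. A minimal sketch follows; on Azure Marketplace Pro images the attachment itself is handled automatically, so only the enable steps vary by need.

# Show which services (esm-infra, esm-apps, livepatch, fips, ...) are entitled and enabled
pro status
# Extend security maintenance to packages beyond the main OS repository
sudo pro enable esm-apps
# Apply critical kernel patches without rebooting
sudo pro enable livepatch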

Maximize value and streamline procurement

Retiring your commitment should be financially efficient and administratively simple. While standard Marketplace listings are MACC-eligible, many organizations use private offers to secure tailored commercial terms, like custom pricing or volume discounts, without sacrificing eligibility.


We support both standard private offers and multiparty private offers for rollouts involving resellers in the US/UK. In all cases, checking that your purchase counts toward your commitment is straightforward:

  • Confirm Eligibility: verify the listing or private offer is marked as “Azure benefit-eligible.”
  • Purchase Correctly: execute the transaction in the Azure portal under the tenant and subscription tied to your MACC agreement.

This approach guarantees that every dollar spent satisfies your financial goals while delivering the specific security coverage your organization needs.

Ready to align Ubuntu Pro with your MACC? Talk to our team.

13 December, 2025 12:07AM

December 12, 2025

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2025.12 available

We are proud to announce our new stable release 🚢 version 2025.12, code-named ‘Postwurfsendung’!

Grml is a bootable live system (Live CD) based on Debian. Grml 2025.12 brings you fresh software packages from Debian testing/forky, enhanced hardware support and addresses known bugs from previous releases.

Like in the previous release 2025.08, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️

12 December, 2025 11:12AM

hackergotchi for Deepin

Deepin

Hidden Gems of the deepin File Manager: Work Smarter, Not Harder

“Where’s the image I just saved?” “Which folder did I put last week’s budget sheet in?” “Opening my asset library feels like digging through seven layers of folders—my mouse is practically smoking!” If these questions sound familiar, chances are you’re only using your file manager at its most basic level. In fact, the deepin file manager is packed with a whole suite of efficient features designed to make file management smooth, quick, and organized. Today, we’re uncovering these “hidden gems you might not know about”! Whether you’re a longtime deepin user or just getting started, these 9 practical tips—centered around finding, using, ...Read more

12 December, 2025 03:10AM by xiaofei

December 11, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Coming to 26.04 LTS: Three Layouts

Xfce Legacy

A lot of people have asked us why Ubuntu Studio comes with a panel on top as the default. For that, it’s a simple answer: Legacy.

When Ubuntu Studio 12.04 LTS (Precise Pangolin) was released over 13 years ago, it shipped with a top panel by default, as that was the default for our desktop environment: Xfce.

Fast-forward eight years to 20.10 and Xfce was no longer our default desktop environment: we had switched to KDE’s Plasma Desktop. Plasma has a bottom panel by default, similar to Windows. However, to ease the transition for our long-time users, we kept the panel on top by default, resizing it to be similar to the default top panel of Xfce.

A macOS-Like Layout

With 25.10’s release, we included an additional layout: two panels. One panel is on top with a global menu, and the bottom one contains some default applications, a trash can, and a full-screen application launcher. This layout will feel familiar to users coming from another operating system geared toward creativity: macOS.

Familiarity and Traditionalism: Windows-like Layout

Starting with 26.04 LTS, we’ll also include one more layout: a bottom-panel, Windows 10-like layout. This is to ease the transition for those coming from Windows, and comes in response to popular requests and reports.

Should We Change The Default?

It has been 13 years since we defaulted to a top panel, but is that the right idea anymore?

Right now, on the Ubuntu Discourse, we have a poll to decide if we should change the default layout starting with 26.04 LTS. This will not affect layouts for anyone upgrading from a prior release, but only new installations or new users going forward.

If you would like to participate in the poll, head on over to the Ubuntu Discourse and cast a vote!

11 December, 2025 06:41PM

Ubuntu Blog: Java 25 now available on Google Cloud Serverless

[December 11, 2025] Today Canonical, the publisher of Ubuntu, announced the immediate availability of Java 25 across Google Cloud’s serverless portfolio, including Cloud Run, App Engine, and Cloud Functions.

This release is the result of a collaboration between Google Cloud and Canonical, and it will allow developers to access the latest Java features the moment they are released publicly. All three serverless products use Ubuntu 24.04 as the base image, with Canonical actively maintaining the runtime and ensuring timely security patches.

Simplified deployment with Buildpacks

Deploying Java 25 is easy and fast thanks to Google Cloud Buildpacks. You do not need to create manual Dockerfiles or manage complex container configurations.

Buildpacks are designed to transform your source code into a production-ready container image automatically. When you deploy your application, the Buildpacks system detects your requested Java version and automatically provisions the Ubuntu-based Java 25 environment, which the Canonical team continuously updates with security fixes. This “source-to-deploy” workflow allows you to focus entirely on writing code while Google Cloud and Canonical handle the underlying OS and runtime security.

Get started

To get started, simply use the GOOGLE_RUNTIME_VERSION environment variable to set the JDK version to 25.

pack build java-app --builder=gcr.io/buildpacks/builder --env GOOGLE_RUNTIME_VERSION=25
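
Once the image is built, it can be pushed to a registry and deployed. A minimal sketch using Cloud Run follows; the Artifact Registry path, region, and service name are placeholders for your own project.

# Tag and push the locally built image (repository path is a placeholder)
docker tag java-app us-central1-docker.pkg.dev/PROJECT_ID/apps/java-app
docker push us-central1-docker.pkg.dev/PROJECT_ID/apps/java-app
# Deploy the pushed image to Cloud Run
gcloud run deploy java-app \
    --image us-central1-docker.pkg.dev/PROJECT_ID/apps/java-app \
    --region us-central1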

To learn more about Canonical support on Java, please read our reference documentation.

More reading and resources

11 December, 2025 02:38PM

Ubuntu Blog: How to launch a Deep Learning VM on Google Cloud

Setting up a local Deep Learning environment can be a headache. Between managing CUDA drivers, resolving Python library conflicts, and ensuring you have enough GPU power, you often spend more time configuring than coding.

Google Cloud and Canonical work together to solve this with Deep Learning VM Images, which use Ubuntu Accelerator Optimized OS as the base OS. These are pre-configured virtual machines optimized for data science and machine learning tasks. They come pre-installed with popular frameworks, such as PyTorch, and the necessary NVIDIA drivers.

In this guide, I’ll walk you through how to launch a Deep Learning VM on GCP using the Console, and how to verify your software stack so you can start training immediately.

Why use a Deep Learning VM?

  • Pre-installed frameworks: No need to pip install generic libraries manually.
  • GPU-ready: NVIDIA drivers are pre-installed and verified.
  • Jupyter integration: Seamless access to JupyterLab right out of the box.

How to make a Deep Learning VM in GCP

Step 1: Navigate to the GCP Marketplace

First, log in to your Google Cloud Console. Instead of creating a generic Compute Engine instance, we want to use a specialized image from the Marketplace.

  1. Open the Google Cloud Console.
  2. In the search bar at the top, type “Deep Learning VM”.
  3. Select the product named Deep Learning VM published by Google.

Step 2: Configure your instance

Once you are on the Marketplace Deep Learning VM listing page, click Launch. This will take you to the deployment configuration screen. This is where you define the power behind your model.

Here are the key settings you need to pay attention to:

  • Zone: Make sure to select a zone that supports the specific GPU you want to use (in my case, I selected the us-central1-f zone).
  • Machine Type: If you don’t need a GPU, choose a CPU/RAM combination that meets your requirements.
  • GPU Type: Select the GPU you want to attach, such as an NVIDIA T4, A100, or H100.

Configuring the VM instance in the Google Cloud Console.

Once you have made your selections, click Deploy.
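
If you prefer scripting the deployment, the same result can be achieved with the gcloud CLI. The rough equivalent below is a sketch that assumes the pytorch-latest-gpu image family from the deeplearning-platform-release image project and a T4 GPU; adjust the zone, machine type, and accelerator to match your choices above.

gcloud compute instances create dl-vm \
    --zone=us-central1-f \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --image-family=pytorch-latest-gpu \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --metadata="install-nvidia-driver=True"

GPU instances generally require --maintenance-policy=TERMINATE, and the install-nvidia-driver metadata key asks the image to set up the NVIDIA driver on first boot.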

Step 3: Connect and verify

After a minute or two, your VM will be deployed. You can find it listed in your Compute Engine > VM Instances page.

To access the machine, click the SSH button next to your new instance. This opens a terminal window directly in your browser.

Step 4: Check the software stack & drivers

Now, let’s make sure everything is working under the hood.

1. Verify NVIDIA drivers

If you have attached a GPU, the most important check is to ensure the drivers have loaded correctly. Run the following command in your SSH terminal:

nvidia-smi

You should see a table listing your GPU (e.g., A100) and the CUDA version.

2. Check pre-installed software

Google’s Deep Learning VMs usually come with PyTorch pre-configured. You can check the installed packages to ensure your favorite libraries are there:

pip show torch
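
To confirm that PyTorch itself can see the attached GPU (assuming the image’s default Python environment), a quick one-liner:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

It should print the installed PyTorch version followed by True once the driver and CUDA runtime are working.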

Conclusion

And that’s it! In just a few minutes, you have built a fully configured Deep Learning environment. You can now start running training scripts directly from the terminal.

Don’t forget: Deep Learning VMs with GPUs can be expensive. Remember to stop your instance when you aren’t using it to avoid unexpected charges!

Learn more about Canonical’s offerings on GCP

Read more

11 December, 2025 02:31PM

hackergotchi for Tails

Tails

Tails 7.3.1

Today, we are releasing 7.3.1 instead of 7.3 because a security vulnerability was fixed in a software library included in Tails while we were preparing 7.3. We started the release process again to include this fix.

Changes and updates

  • Update Tor Browser to 15.0.3.

  • Update the Tor client to 0.4.8.21.

  • Update Thunderbird to 140.5.0.

For more details, read our changelog.

Get Tails 7.3.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.3.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.3.1 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.3.1 directly:

11 December, 2025 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E368 Ensino De TIC, Com Artur Coelho

This time, we kidnapped… err… welcomed a very special guest: Artur Coelho. An ICT teacher and trainer, passionate about education and creativity with robotics, AI, 3D and software programming, he maintains a critical, current and well-informed view of how Information and Communication Technologies (ICT) are taught in Portuguese schools, where he pioneered the introduction of several technologies and built an enviable track record, positively influencing a great number of students and teachers. The conversation was so long and interesting that we will try to kidna… invite him again in the future.

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality jingles were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was created by Luís Louro, featuring the mascot Faruk, which won third place in the mascot contest for Software Freedom Day.

11 December, 2025 12:00AM

December 10, 2025

Ubuntu Blog: Harnessing the potential of 5G with Kubernetes: a cloud-native telco transformation perspective

Telecommunications networks are undergoing a cloud-native revolution. 5G promises ultra-fast connectivity and real-time services, but achieving those benefits requires an infrastructure that is agile, low-latency, and highly reliable. Kubernetes has emerged as a cornerstone for telecom operators to meet 5G demands. In 2025, Canonical Kubernetes delivers a single, production-grade Kubernetes platform with long-term support (LTS) and telco-specific optimizations, deployable across clouds, data centers, and the far edge.

This blog explores how Canonical Kubernetes empowers 5G and cloud-native telco workloads with high performance, enhanced platform awareness (EPA), and robust security, while offering flexible deployment via snaps, Juju, or Cluster API. We’ll also highlight its integration into industry initiatives like Sylva, support for GPU/DPU acceleration, and synergy with MicroCloud for scalable edge infrastructure.

The rise of the cloud-native telco

Telecom decision-makers face immense pressure to evolve their networks rapidly and cost-effectively. Traditional, hardware-centric architectures struggle to keep pace with 5G’s requirements for low latency, high throughput, and dynamic scaling. This is where Kubernetes – the de facto platform for cloud-native applications – comes in. Kubernetes brings powerful automation, scalability, and resiliency that allow telcos to manage network functions like software across data centers, public clouds, and far-edge deployments. The result is a more agile operational model: services can be rolled out faster, resources automatically optimized to demand, and updates applied continuously without disrupting critical services. In the 5G era, such agility is essential for delivering innovations like network slicing, multi-access edge computing (MEC), and AI-driven services.

At the same time, Kubernetes opens the door for telcos to refactor their network functions into microservices. Instead of relying on monolithic appliances or heavy virtual machines, operators can deploy cloud-native network functions (CNFs) – essentially containerized network services – that are lighter and faster to roll out than traditional virtual network functions (VNFs). By shifting to CNFs, new network features (whether a 5G core component or a firewall) can be introduced or updated in a fraction of the time, using automated CI/CD pipelines instead of lengthy manual upgrades. This approach helps telcos simplify the migration from legacy systems to a more agile, software-driven network model.

However, adopting Kubernetes for telecom workloads also means meeting rigorous performance and reliability standards. Carrier-grade services like voice, video, and core network functions can’t tolerate unpredictable delays or downtime. Telco leaders need a Kubernetes platform that combines cloud-native flexibility with telco-grade performance, security, and support. Canonical Kubernetes answers that call, providing a Kubernetes distribution specifically tuned for telecommunications needs.

Canonical Kubernetes: optimized for cloud-native 5G networks and edge computing

Canonical’s Kubernetes distribution has been engineered from the ground up to address the unique challenges of 5G and cloud-native telco cloud deployments. It is a single, unified Kubernetes offering that blends the ease of use of lightweight deployments with the robustness of an enterprise-grade platform. Importantly, Canonical Kubernetes can be deployed and managed in whatever way best fits a telco’s environment – whether installed as a secure snap package or integrated with full automation tooling like Juju (model-driven operations) or Kubernetes Cluster API (CAPI). This flexibility means operators can start small at the network edge or scale up to carrier-core clusters, all using the same consistent platform. Notably, Canonical Kubernetes brings cloud-native telco-friendly capabilities in the areas of performance, networking, operations, and support:

High performance & low latency

Real-time Linux kernel support ensures that high-priority network workloads execute with predictable, ultra-low latency, a critical requirement for functions like the 5G user plane function (UPF). In parallel, built-in support for advanced networking (including SR-IOV and DPDK) enables fast packet processing by giving containerized network functions direct access to hardware, dramatically reducing network I/O latency for high-bandwidth 5G applications. Together, these features allow cloud-native network functions to meet the stringent performance and determinism requirements once achievable only on specialized telecom hardware.

GPU acceleration

Canonical Kubernetes integrates seamlessly with acceleration technologies to support emerging cloud-native telco workloads. It works with NVIDIA’s GPU and networking operators to leverage hardware accelerators (GPUs, SmartNICs, DPUs) for intensive tasks. It supports NVIDIA’s Multi-Instance GPU (MIG), which expands the performance and value of NVIDIA’s data center GPUs, such as the latest GB200 and RTX PRO 6000 Blackwell Server Edition, by partitioning the GPU into up to seven instances, each fully hardware-isolated with its own high-bandwidth memory, cache, and streaming multiprocessors. The partitioned instances are transparent to workloads, which greatly optimizes the use of resources and allows for serving workloads with guaranteed QoS.
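
Within Kubernetes, MIG layouts are usually applied declaratively through the GPU operator’s MIG manager, but the underlying partitioning can also be exercised directly with nvidia-smi on a MIG-capable GPU. A minimal sketch follows; profile IDs vary by GPU model, so list them first.

# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1
# List the GPU instance profiles supported by this GPU
sudo nvidia-smi mig -lgip
# Create a GPU instance from a chosen profile ID, plus a matching compute instance
sudo nvidia-smi mig -cgi 9 -C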

This means telecom operators can run AI/ML analytics, media processing, or virtual RAN computations that take advantage of GPUs and DPU offloading within their Kubernetes clusters – all managed under the same platform. By tapping into hardware acceleration, telcos can deliver advanced services (like AI-driven network optimization or AR/VR streaming) with high performance, without needing separate siloed infrastructure.

Operational efficiency and automation

Day-0 to Day-2 operations are streamlined through automation in Canonical’s stack. The distribution supports full lifecycle management – clusters can be deployed, scaled, and updated via one-step commands or integrated CI/CD pipelines, reducing manual effort and errors. Using Juju charms, Canonical’s model-driven operations further simplify complex orchestration, enabling teams to configure and update Kubernetes and related services in a repeatable, declarative way. Built-in self-healing and high availability features ensure that the platform can recover from failures automatically, keeping services running without intervention.

This high degree of automation translates into faster rollout of new network functions and updates (with minimal downtime), allowing telco teams to focus on innovation rather than routine ops tasks.

Edge flexibility

Canonical Kubernetes is designed to run from the core to the far edge with equal ease. Its lightweight, efficient design (delivered as a single snap package) results in a low resource footprint, making it viable even on a one- or two-node edge cluster in a remote site. At the same time, it scales up to multi-node deployments for central networks. The platform supports a variety of configurations – from a single node for an ultra-compact edge appliance, to a dual-node high-availability cluster, to large multi-node clusters for data centers – all with the same tooling and consistent experience.

This flexibility allows operators to extend cloud capabilities to edge locations (for ultra-low latency processing) while managing everything in a unified way. In practice, Canonical’s solution can power cloud-native telco IT workloads, 5G core functions, and edge applications under one umbrella, meeting the specific performance and latency needs of each environment.

Long-Term support and stability

Canonical backs its Kubernetes with long-term support options far exceeding the typical open-source release cycle. Each Canonical Kubernetes LTS version can receive security patches and maintenance for up to 15 years, ensuring a stable foundation for cloud-native telco services over the entire 5G rollout and beyond. (For comparison, upstream Kubernetes offers roughly 1 year of support per release).

This extended support window means carriers can avoid frequent, disruptive upgrades and rest assured that their infrastructure remains compliant over the long term. Such a commitment to stability is a key reason telecom operators choose Canonical – long-term maintenance provides confidence that critical network workloads will run on a hardened, well-maintained platform for many years.

Cost efficiency and vendor neutrality

As an open-source, upstream-aligned distribution, Canonical Kubernetes has no licensing costs and prevents vendor lock-in. Telcos are free to deploy it on their preferred hardware or cloud, and they benefit from a large ecosystem of Kubernetes-compatible tools and operators. The platform’s efficient resource usage and automation also help drive down operating costs – by improving hardware utilization and simplifying management, it enables operators to serve growing traffic loads without linear cost increases. In short, Canonical’s Kubernetes offers carrier-grade performance and features at a fraction of the cost of proprietary alternatives, all while keeping the operator in control of their technology roadmap.

Enabling a new wave of cloud-native telco services

Using Canonical Kubernetes, cloud-native telcos can position themselves to innovate faster and operate more efficiently in the 5G era. They can readily stand up cloud-native 5G Core functions, scale out Open RAN deployments, and push applications to the network edge – all on a consistent Kubernetes foundation. In fact, Kubernetes makes it feasible for telcos to transition from traditional VNFs on virtual machines to containerized CNFs, reducing resource overhead and speeding up deployment of network features. This means legacy network applications can be modernized step-by-step and run alongside new microservices on the same platform, avoiding risky “big bang” overhauls.

The result is not only technical efficiency but business agility: operators can launch new services (from enhanced mobile broadband to IoT analytics) in weeks instead of months, respond quickly to customer demand spikes, and streamline the integration of new network functions or vendors.

Early adopters in the industry are already seeing the benefits. For example, Canonical’s Kubernetes has been embraced in initiatives like the European Sylva open telco cloud project, in part due to its security, flexibility and long-term support advantages. This momentum underscores that a performant, open Kubernetes platform is becoming a strategic asset for telcos aiming to stay ahead in a competitive landscape. Perhaps most importantly, Canonical Kubernetes lets telcos focus on delivering value to subscribers – ultra-reliable connectivity, rich digital services, tailored enterprise solutions – rather than getting bogged down in infrastructure complexity. It abstracts away much of the heavy lifting of deploying and upgrading distributed systems, while providing the controls needed to meet strict cloud-native telco requirements. The combination of automation, performance tuning, and openness creates a powerful engine for telecom innovation.

Cloud-native at any scale: Canonical Kubernetes meets MicroCloud

At the edge, complexity is the enemy. That’s why Canonical Kubernetes pairs naturally with MicroCloud, our lightweight production-grade cloud infrastructure for distributed environments. MicroCloud fits the edge use case extremely well: it is easy to deploy, fully automated, and optimized for bare-metal and low-power sites. Drop it into a telco cabinet, regional hub, or remote data center, and you get a resilient control plane for running Kubernetes, virtualization, and storage with zero overhead.

In such deployments, MicroCloud and Canonical Kubernetes form a tightly integrated stack that brings cloud-native operations to the far edge. Need to orchestrate CNFs next to VMs? Spin up a single-node cluster with high availability? Scale to dozens of locations without rearchitecting? This combo makes it possible, with snaps for simple updates, Juju for full automation, and long-term support built in.
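
For a sense of how small the footprint is, a single-node Canonical Kubernetes cluster can be brought up in a few commands using the k8s snap. A minimal sketch follows; the channel shown is illustrative, so pick the LTS track you need.

# Install the Canonical Kubernetes snap and bootstrap a single-node cluster
sudo snap install k8s --classic --channel=1.32-classic/stable
sudo k8s bootstrap
# Wait for the node to be ready, then use the bundled kubectl
sudo k8s status --wait-ready
sudo k8s kubectl get nodes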

Conclusion: building the future of cloud-native telco on open source Kubernetes

5G and edge computing are reshaping telecom networks, and Kubernetes has proven to be an essential technology powering this evolution. Industrial IoT, automotive applications, smart cities, robotics, remote health care, and the gaming industry rely on high data transfer, close-to-real-time latency, and very high availability and reliability. Canonical Kubernetes brings the best of cloud-native innovation to the telecom domain in a form that aligns with carriers’ operational realities and performance needs. It delivers a rare mix of benefits – agility and efficiency from automation, high performance for demanding workloads, freedom from lock-in, and assured long-term support – making it a compelling choice for any telco modernizing its infrastructure.

Telecommunications leaders looking to become cloud-native telcos should consider how an open-source platform like Canonical Kubernetes can serve as a foundation for growth. Whether the goal is to reduce operating costs in the core network, roll out programmable 5G services at the edge, or simply break free from proprietary constraints, Canonical’s Kubernetes distribution provides a proven path forward.

Explore further

To dive deeper into how Canonical Kubernetes meets telco performance and reliability requirements, we invite you to read our detailed white paper: Addressing telco performance requirements with Canonical Kubernetes. It offers in-depth insights and benchmark results from real-world cloud-native telco scenarios. Additionally, visit our blogs on Ubuntu.com and Canonical.com for more success stories and technical guides – from 5G network modernization strategies to edge:

Visiting MWC 2026? Book a meeting with Canonical to find out more.

10 December, 2025 03:47PM

hackergotchi for Purism PureOS

Purism PureOS

Purism Liberty Phone Exists vs. Delayed T1 Phone

NBC News reports that Trump Mobile customers have been waiting months for a promised ‘Made in the USA’ smartphone, originally announced for August delivery. The T1 phone was marketed as domestically produced, but delays and vague updates have raised skepticism. References to ‘Made in the USA’ have been removed from the company’s site, and leaked images suggest the device resembles existing Chinese-made models. This situation underscores the complexity of building smartphones in America without established infrastructure.

The post Purism Liberty Phone Exists vs. Delayed T1 Phone appeared first on Purism.

10 December, 2025 03:36PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The rhythm of reliability: inside Canonical’s operational cadence

In software engineering, we often talk about the “iron triangle” of constraints: time, resources, and features. You can rarely fix all three. At many companies, when scope creeps or resources get tight, the timeline is often the first element of the triangle to slip.

At Canonical, we take a different approach. For us, time is the fixed constraint.

This isn’t just about strict project management. It is a mechanism of trust. Our users, customers, and the open source community need to know exactly when the next Ubuntu release is coming. To deliver that reliability externally, we need a rigorous operational rhythm internally, and for over 20 years, we have honored this commitment.

Here is how we orchestrate the business of building software, from our six-month cycles to the daily pulse of engineering:

Fig. 1 Canonical’s Operating Cycle

The six-month cycle

Our entire engineering organization operates on a six-month cycle that aligns with the Ubuntu release cadence. This cycle is our heartbeat. It drives accountability and ensures we ship features on time.

To make this work, we rely on three critical control points:

  • Sprint Readiness Review (SRR): This is where prioritization happens. Before a cycle begins, we don’t just ask “what fits?”: we ask “what matters?” We go through feedback to find the most valuable engineering opportunities, ensuring we prioritize quality and impact over volume. We don’t start the work until we know the scope is worth the effort.
  • Product Roadmap Sprint: The SRR culminates in this one-week, face-to-face event. This is the formal moment of truth where we close out the previous cycle and leadership signs off on the plan for the next one. It ensures that every team leaves the room with a clear, approved mandate.
  • Midcycle Review: Three months in, we hold a “Virtual Sprint” to check our progress. Crucially, we review any “bad news”: we immediately identify items that will not ship or are at risk. By addressing what won’t happen upfront, leadership can make informed decisions to course-correct immediately rather than letting a deadline slip.

This discipline ensures we stay agile, and that we can adjust our trajectory halfway through without derailing the entire delivery.

The two-week pulse

While the six-month cycle sets the destination, the “pulse” gets us there. A pulse is our version of a two-week agile sprint.

Crucially, these pulses are synchronized across the entire company, on a cross-functional basis. Marketing, Sales, and Support all operate on this same frequency. When a team member says, “we will do it next pulse,” everyone, regardless of department, knows exactly what that means. This creates a shared expectation of delivery that keeps the whole organization moving in lockstep.

Sprints are for in-person connection

We distinguish between a “pulse” (our virtual, two-week work iteration) and a “sprint.” For us, a sprint is a physical, one-week event where teams meet face-to-face.

We are a remote-first company, which makes these moments invaluable. Sprints provide the high-bandwidth communication and human connection needed to sustain us through months of remote execution.

We also stagger these sprints to separate context. Our Engineering Sprints happen in May and November (immediately after an Ubuntu release) so teams can focus purely on technical roadmapping. Commercial Sprints happen in January and July, aligning with our fiscal half-years to focus on business value. This “dual-clock” system ensures that commercial goals and technical realities are synchronized without overwhelming the teams.

Managing the exceptions

Of course, market reality doesn’t always adhere to a six-month schedule. Customers have urgent needs, and high-value opportunities appear unexpectedly. To handle this without breaking our rhythm, we use the Commercial Review (CR) process.

The CR process protects our engineering teams from chaos while giving us the agility to say “yes” to the right opportunities.

  • Protection: We don’t let unverified requests disrupt the roadmap. A Review Board assesses every non-standard request before we make a promise.
  • Conscious trade-offs: If a new request is critical, we ask: “What are we removing to make space for this?” It forces a conscious decision. We review the roadmap and agree on what gets deprioritized to satisfy the new request.

This ensures that when we do deviate from the plan, it is a strategic choice, not an accident.

Quality as a natural habit

Underpinning this entire rhythm is a commitment to quality standards. We follow the Plan, Do, Check, Act (PDCA) cycle, a concept rooted in ISO 9001. While we align with these formal frameworks, it has become a natural habit for us at Canonical.

This operational discipline is what enables up to 15 years of LTS commitment on a vast portfolio of open source components, providing Long-Term Support for the entire, integrated collection of application software, libraries, and toolchains. Offering 15 years of security maintenance on our entire stack is only possible because we are operationally consistent. Long-term stability is the direct result of short-term discipline.

By sticking to this rhythm, we ensure that Canonical remains not just a source of great technology, but a reliable partner for the long haul.

10 December, 2025 12:22PM