Debugging made more efficient, submission made easier! deepin further optimizes its Windows application compatibility experience
26 January, 2026 10:24AM by xiaofei
Now that 2025 is over, it’s time to look back and feel proud of the path we’ve walked. Last year was a really exciting one for the Igalia Multimedia team in terms of contributions to GStreamer and WebKit.
With more than 459 contributions over the year, we’ve been one of the top contributors to the GStreamer project, in areas such as Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC and H.266 support.
Igalia’s contributions to the GStreamer project

In Vulkan Video we’ve worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There’s now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.
GStreamer Editing Services, the framework for building video editing applications, has gained time-remapping support, which allows including fast/slow-motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer-pool tuning and pipeline improvements have helped optimize memory usage and performance, enabling editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements to the GstValidate testing framework, which will be useful for other parts of GStreamer.
Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia’s work. This allows user applications such as the WebKitGTK web browser to leverage the hardware-accelerated decoding provided by VAAPI to play H.266 video using GStreamer.
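As a rough illustration (not taken from the post), playing back an H.266 file in an MP4 container could look like the pipeline below, assuming your GStreamer 1.26 build ships the h266parse and avdec_h266 elements:

gst-launch-1.0 filesrc location=clip.mp4 ! qtdemux ! h266parse ! avdec_h266 ! videoconvert ! autovideosink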
Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.
Igalia’s contributions to the GStreamer Rust project

In addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many multimedia-related features of the web engine. Of the 1739 contributions to the WebKit project made by Igalia last year, the Multimedia team made 323. Nearly one third of those were related to generic multimedia playback, and the rest covered areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system that provides runtime adaptations for specific hardware multimedia platforms, WebCodecs and MediaRecorder.
Igalia Multimedia Team’s contributions to different areas of the WebKit project

We’re happy with what we’ve achieved over the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.
26 January, 2026 09:34AM by Enrique Ocaña González (eocanha@igalia.com)
This post is a placeholder so that links can be added to BL live config popups.
The data will be added at release time.
For years, high availability (HA) was treated as a redundancy problem: duplicate servers, replicate databases, maintain a secondary site and ensure that if something failed, there was a plan B waiting. That model worked when applications were monolithic, topologies were simple, and traffic variability was low. Today the environment looks different: applications are split into services, traffic is irregular, encryption is the norm, and infrastructure is distributed. Availability is no longer decided at the machine level, but at the operational plane.
The first relevant distinction appears when we separate binary failures from degradations. Most HA architectures are designed to detect obvious “crashes,” yet in production the meaningful incidents are rarely crashes—they are partial degradations (brownouts): the database responds, but slowly; a backend accepts connections but does not process; the Web Application Firewall (WAF) blocks legitimate traffic; intermittent timeouts create queues. For a basic health-check everything is “up”; for the user, it isn’t.
Operational degradations in production are not homogeneous. In general, we can distinguish at least six categories:
The component that arbitrates this ambiguity is the load balancer. Not because it is the most critical part of the system, but because it is the only one observing real-time traffic and responsible for deciding when a service is “healthy,” when it is degraded, and when failover should be triggered. That decision becomes complex when factors like TLS encryption, session handling, inspection, security controls or latency decoupled from load interact. The load balancer does not merely route traffic—it determines continuity.
In real incidents, operational ambiguity surfaces like this:
There is also a persistent misconception between availability and scaling. Scaling answers the question “how much load can I absorb?” High availability answers a completely different one: “what happens when something fails?” An application can scale flawlessly and still suffer a major incident because failover triggered too late, sessions failed to survive backend changes, or the control plane took too long to propagate state.
Encrypted traffic inspection adds another layer. In many environments, TLS inspection and the Web Application Firewall sit on a different plane than the load balancer. In theory this is modular; in practice it introduces coordination. If the firewall blocks part of legitimate traffic, the load balancer sees fewer errors than the system actually produces. If the backend degrades but the firewall masks the problem upstream, there is no clear signal. Availability becomes a question of coupling between planes.
The final problem is often epistemological: who owns the truth of the incident? During an outage, observability depends on who retains context. If the balancing plane, the inspection plane, the security plane and the monitoring plane are separate tools, the post-mortem becomes archaeology: fragmented logs, incomplete metrics, sampling, misaligned timestamps, and three contradictory narratives of the same event.
For operational teams, the definition that best fits reality is this: High availability is the ability to maintain continuity under non-binary failures.
This implies:
SKUDONET Enterprise Edition is built around that premise: availability does not depend solely on having an extra node, but on coordinating, in a single operational plane, load balancing at layers 4 and 7, TLS termination and inspection, security policies, certificate management, and traffic observability. The goal is not to abstract complexity away, but to place decision-making and understanding in the same context.
In environments where failover is exceptional, this coupling may go unnoticed. But in environments where degradation is intermittent and traffic is non-linear, high availability stops being a passive mechanism and becomes a process. What SKUDONET provides is not a guarantee that nothing will fail—such a guarantee does not exist—but an architecture where continuity depends less on assumptions and more on signals.
A 30-day evaluation of SKUDONET Enterprise Edition is available for teams who want to validate behavior under real workloads.
23 January, 2026 10:39AM by Nieves Álvarez
23 January, 2026 09:54AM by xiaofei
Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.
And today we're gonna generate YAML from ERB, what could possibly go wrong?!
Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.
The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.
Enter cloud-init schema, or so I thought.
Turns out running cloud-init schema is rather broken without root privileges,
as it tries to load a ton of information from the running system.
This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself.
I've not found a way to disable that behavior.
Luckily, I know Python.
Enter evgeni-knows-better-and-can-write-python:
#!/usr/bin/env python3
import sys

from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)
The canonical¹ version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.
The hardest part was understanding the validate_cloudconfig_file API,
as it will sometimes raise a SchemaValidationError,
sometimes a RuntimeError, and sometimes just return False.
No idea why.
But the above just turns it into a couple of printed lines and a non-zero exit code,
unless of course there are no problems, then you get peaceful silence.
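For completeness, a hypothetical invocation (assuming the script above was saved as validate-cloud-config.py):

python3 validate-cloud-config.py cloud-config.yaml && echo "cloud-config is valid"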
"canonical", not "Canonical" ↩
21 January, 2026 08:05AM by xiaofei
20 January, 2026 07:53AM by Joseph Lee
19 January, 2026 05:41AM by xiaofei
You can now save your language and keyboard layout from the Welcome Screen to the USB stick. These settings will be applied automatically when restarting Tails.
If you turn on this option, your language and keyboard layout are saved unencrypted on the USB stick to help you type the passphrase of your Persistent Storage more easily.
Update Tor Browser to 15.0.4.
Update Thunderbird to 140.6.0.
Update the Linux kernel to 6.12.63.
Drop support for BitTorrent download.
With the ongoing transition from BitTorrent v1 to v2, the BitTorrent v1 files that we provided until now can become a security concern. We don't think that updating to BitTorrent v2 is worth the extra migration and maintenance cost for our team.
Direct download from one of our mirrors is usually faster.
Fix opening .gpg encrypted files in Kleopatra when double-clicking or selecting Open with Kleopatra from the shortcut menu. (#21281)
Fix the desktop crashing when unlocking VeraCrypt volumes with a wrong password. (#21286)
Use 24-hour time format consistently in the top navigation bar and the lock screen. (#21310)
For more details, read our changelog.
Automatic upgrades are available from Tails 7.0 or later to 7.4.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions:
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.4 directly:
Talking about the cloud today no longer means talking about a technology trend, but about a central piece of the business. More and more companies are moving their infrastructure to cloud providers under the promise of less hardware, less maintenance, fewer licenses and less time spent on activities that do not generate value.
Much of that promise has been fulfilled. Cloud has democratized capabilities that only large organizations could access a few years ago. Launching a service, increasing capacity or deploying a new region is now easier, faster and more accessible.
However, as often happens with technology, the story changes when we zoom into operations. Cloud simplifies infrastructure, but it does not always simplify how that infrastructure is operated. And that nuance affects not only technical teams, but also the business itself.
The first point of friction does not appear in compute or storage, but in the services that accompany the infrastructure. This includes security, load balancing, TLS certificates, application firewalls, monitoring and observability.
In the cloud provider’s catalog, the technology is there, but it is sold as separate components. Security on one side, certificates on another, observability on another, and advanced capabilities billed as add-ons. The customer is never left without service, but is left with a recurring question: what exactly must be purchased to remain protected and operate reliably?
A less visible aspect also emerges: security is billed per event, per inspection or per volume of traffic. What used to be a hardware expense becomes a bill based on requests, analysis and certificates. Cloud solved hardware, but externalized the operational complexity of security.
Metrics and logs exist, but they are often fragmented, sampled and weakly correlated. Understanding what happened during an incident may require navigating multiple services and data models. Cloud promises security, but it rarely promises explanations.
And at its core this is not a technical problem, but a model problem. Cloud security is commercialized as a product but consumed as a service. And when there is a mismatch between how something is purchased and how it is used, friction eventually appears.
This is the context in which SkudoCloud emerges — not to replace the cloud provider or compete as infrastructure, but to resolve the operational coherence between load balancing, security and visibility.
SkudoCloud is a SaaS platform that enables companies to deploy advanced load balancing and application protection without assembling separate modules, tools or services. From a single interface, organizations can:
The most evident difference appears in security. In the modular cloud model, the customer must decide what to purchase, which rules to enable, how to correlate logs and how to keep everything updated. In a managed model like SkudoCloud, certificates, WAF, TLS inspection and load balancing behave as one coherent system.
This has direct consequences for the business:
Instead of acquiring security, companies acquire operability. Instead of assembling components, they obtain an outcome. That is the difference of a managed approach.
Cloud adoption is already a given. The real question now is how to operate it sustainably. Fragmentation was a natural side effect of the migration phase. Unification will likely be the central theme of the operational phase.
Cloud simplified servers. Now it is time to simplify operations.
14 January, 2026 10:25AM by Nieves Álvarez
14 January, 2026 07:59AM by xiaofei
"Fit and finish" appears in many industries. For much of the software industry, it refers to features that complete a fit for a target audience, ensuring that audience can use the product for their needs. At a frame shop, it means literally fitting the mounted artwork into a frame, then finishing the back of the frame.
At Purism, fit takes on another meaning: making apps fit on screens the size of the Librem 5.
The post PureOS Crimson Development Report: December 2025 appeared first on Purism.
13 January, 2026 10:11PM by Purism
Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.
As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon release candidate iso file available for download here: https://sourceforge.net/projects/bunsen … hybrid.iso sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d
If you have a free machine or VM to install it on, please give it some testing!
And please post any bugs here:
https://forums.bunsenlabs.org/viewtopic.php?id=9656
That thread is now closed because having multiple bug reports mixed up together was too confusing. Please post any new bugs related to the Carbon RC2 iso in individual threads in the Bug Reports section, adding a tag [Carbon RC2].
When it seems as if there aren't any bugs left to squash, we can do an Official Release.
Last edited by johnraff (2026-01-18 05:39:06)
There is a new desktop available for Sparkers: Labwc, as well as a Sparky 2026.01~dev Labwc ISO image. What is Labwc? Installation on Sparky testing (9): packages installation only (requires your own setup); or, with Sparky settings, via APTus (>= 20260108) -> Desktops -> Labwc; or, with Sparky settings, via the Sparky testing (9) MinimalGUI/ISO image. Then reboot to take effect…
11 January, 2026 11:42AM by pavroo
My 2025 reading year, at an average of slightly more than one book per week, was comparable to 2024. Here is my best-of of the books I finished reading in 2025 (those I found particularly worth reading or want to recommend; the order corresponds to the photo and does not imply any ranking):
09 January, 2026 05:43PM by t.lamprecht (invalid@example.com)
08 January, 2026 01:05PM by Joseph Lee
07 January, 2026 10:15AM by xiaofei
We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.
This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.
06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)
06 January, 2026 09:26AM by xiaofei
In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079
It turned out that while xfconf-query doesn't output the type of settings entries, it requires knowing the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough for backing up an xfce app which stores its settings in the xfconf database (which most of them do these days). We need to store the type too. Luckily, that data is stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/ so to back it up, all we need to do is save that file.
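For illustration, here is a trimmed (hypothetical) perchannel file; note the type attribute stored with every property:

<?xml version="1.0" encoding="UTF-8"?>
<channel name="xfce4-terminal" version="1.0">
  <property name="font-name" type="string" value="Monospace 10"/>
  <property name="scrolling-lines" type="uint" value="50000"/>
</channel>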
In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/in. There's a better way: if you know the missing type, then you can run xfconf-query commands to restore the settings.
So, this script is called xml2xfconf. Passed an xfconf xml file (eg a backed-up copy of one of those in xfce-perchannel-xml/), it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately.
Example usage:
restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"
Here's what got written into $restore:
xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000
xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf and with luck the bug which @Dave75 found will go away.
And then the Carbon release can get rolling again.
It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings.
Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks has increased, driven by automation, accessible tooling, and cloud-specific attack vectors.
Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.
As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.
Traditional evaluations focused heavily on rule coverage or whether a solution “covers the OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.
A modern evaluation must answer practical, operational questions:
In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.
Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:
In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.
A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.
Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.
A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.
Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not just a usability issue, especially in API-driven systems, where payload structure, schemas, and request volume amplify their cost.
At scale, even a low False Positive Rate (FPR) can result in:
This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.
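As a rough illustration (hypothetical numbers): at 10 million requests per day, even a false positive rate of just 0.1% blocks on the order of 10,000 legitimate requests every day.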
A realistic PoC should include scenarios like:
In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.
SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.
That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.
To support that evaluation, we have documented the full methodology in our technical guide:
Download the full guide:
02 January, 2026 11:03AM by Nieves Álvarez
The 12th monthly Sparky project and donation report of 2025:
– Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS
– Added to “dev” repos: COSMIC desktop
– Sparky 2025.12 & 2025.12 Special Editions released
Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.
01 January, 2026 08:08PM by pavroo
30 December, 2025 02:13AM by xiaofei
Modern application delivery architectures are built with the right goals in mind. Load balancers distribute traffic, Web Application Firewalls enforce security policies, TLS protects data in transit, and monitoring systems promise observability across the stack. On paper, everything seems covered.
In real production environments, however, many of these architectures operate with critical blind spots, especially when security components start making decisions that engineers cannot fully see, trace, or explain. This is rarely caused by a lack of tools. More often, it is the result of how security is embedded into the delivery path.
As security becomes more deeply integrated into application delivery, visibility does not automatically follow.
In most production environments, security is no longer a separate layer. WAFs sit directly in the traffic path, inspecting requests, evaluating rules, applying reputation checks and deciding — in real time — whether traffic is allowed or blocked. TLS inspection happens inline, and policies are often updated automatically.
The problem is not that these decisions exist. The problem is that, very often, they cannot be clearly explained after the fact.
In many deployments, teams quickly run into the same limitations:
The result is a paradox that experienced teams recognize immediately: security coverage increases, while operational visibility decreases.
These blind spots rarely appear during normal operation. They tend to surface under pressure: traffic spikes, false positives, performance degradation or partial outages. When they do, troubleshooting becomes significantly more complex, because the information engineers need is often incomplete or fragmented.
TLS encryption is essential, but it fundamentally changes how visibility works. In many application delivery stacks, traffic is decrypted at some point, inspected, and then re-encrypted. Security decisions are made, but the path between request, rule and outcome is not always traceable.
When something breaks, engineers are often left with little more than a generic message: “Request blocked by WAF.”
What is missing is the ability to correlate:
Without that correlation, root cause analysis turns into guesswork rather than engineering.
Many WAF platforms expose protection logic through simplified models such as risk scores, rule categories or predefined profiles. While these abstractions make dashboards easier to read, they remove critical detail from day-to-day operations.
When rule logic cannot be inspected directly:
Over time, this erodes trust in automated protection. Teams stop understanding why something happens and start compensating by weakening policies instead of fixing the underlying issue.
Security controls do more than allow or block requests. They influence how connections are handled, how retries behave, how sessions persist, and how backend health is perceived by the delivery layer.
When these effects are not visible, delivery problems are often misdiagnosed:
Engineers end up debugging load balancing logic or application behaviour, while the real cause sits silently inside the security layer.
Logs are often treated as a substitute for visibility. In practice, they are frequently:
A log entry that explains what happened, but not why, is not observability; it is a post-mortem artifact. In critical environments, teams need actionable insight while an incident is unfolding, not hours later.
A WAF integrated into the application delivery path should not act as an opaque enforcement layer. Instead, it should provide visibility at each critical stage of the decision process.
In practical terms, this means enabling teams to trace each security decision end to end, from the incoming request to the final action applied.
Without these capabilities, security controls may protect applications, but they also introduce operational blind spots that slow down troubleshooting and increase risk.
SKUDONET Enterprise Edition is designed around a simple principle: security must protect traffic without breaking visibility.
Instead of treating security as a separate black box, SKUDONET integrates WAF and traffic management into a single, observable application delivery platform. This approach ensures that security decisions remain transparent, traceable and actionable for engineers working in real production conditions.
Key aspects of this design include:
By removing opacity from security enforcement, SKUDONET helps teams retain control over both protection and performance—especially in high-traffic or business-critical environments where visibility is essential.
A 30-day, fully functional evaluation of SKUDONET Enterprise Edition is available for teams who want to validate this level of visibility and control under real workloads.
29 December, 2025 11:31AM by Nieves Álvarez
29 December, 2025 02:17AM by xiaofei
There is a new desktop available for Sparkers: COSMIC What is COSMIC? COSMIC desktop is available via Sparky ‘dev’ repositories so the repo has to be enabled to install COSMIC desktop on the top of Sparky 8 stable or testing (9). It uses the Wayland session as default. The Sparky meta package installs ‘xwayland’ package as default, but some application can not work or launch. That’s why I…
26 December, 2025 04:21PM by pavroo
25 December, 2025 02:04AM by xiaofei
BLOB, the utility that lets people try different desktop theming sets (eg go back to the Boron look, or even Crunchbang) has been upgraded to 13.1-1 on the Carbon repository. This brings a lot of improvements, like support for xfce4-panel profiles, xfconf settings for xfce4-terminal, flexibility over wallpaper settings with a switch to feh by default, and more.
See the changelog:
https://github.com/BunsenLabs/bunsen-bl … /changelog
or all the commits:
https://github.com/BunsenLabs/bunsen-bl … ts/carbon/
Right now "BLOB Themes Manager" is commented out of jgmenu's prepend.csv, but if you're on a Carbon system you can install the package 'bunsen-blob' and uncomment that menu line to use it. Please check it out.
Soon, an upgraded bunsen-configs will include it in the menu by default, and it will be added to the meta - and iso - package lists. A Release Candidate Carbon iso is not far away...
23 December, 2025 02:03AM by xiaofei
Hello everyone,
this short post is to let anyone interested know that our presentation at SFSCON, held in Bolzano/Bozen at the NOI Techpark, has been published online. SFSCON stands for South Tyrol Free Software Conference and is one of Europe's most established annual conferences on Free Software.
This year we at Arc-Team decided to participate with a talk that summarized our approximately 20 years of experience in applying the Open Source philosophy to archaeology (both in the software and hardware fields).
The presentation was titled "Arc-Team and the OpArc Project" and can be viewed both on the conference's official website (where you can also download a PDF version) and on the conference's YouTube channel.
I hope the presentation can be interesting for someone. Have a nice day!
21 December, 2025 02:47PM by Luca Bezzi (noreply@blogger.com)
We’re pleased to announce the stable release of Qubes OS 4.3.0! This minor release includes a host of new features, improvements, and bug fixes. The ISO and associated verification files are available on the downloads page.
These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.
If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.3.0 ISO and follow our installation guide.
If you’re currently using Qubes 4.2, learn how to upgrade to Qubes 4.3.
If you’re currently using a Qubes 4.3 release candidate (RC), update normally (which includes upgrading any EOL templates and standalones you might have) in order to make your system effectively equivalent to the stable Qubes 4.3.0 release. No reinstallation or other special action is required.
In all cases, we strongly recommend making a full backup beforehand.
Templates restored in 4.3.0 from a pre-4.3 backup may continue to target their original Qubes OS release repos (#8701). After restoring such templates in 4.3.0, you must enter the following additional commands in a dom0 terminal:
sudo qubes-dom0-update -y qubes-dist-upgrade
sudo qubes-dist-upgrade --releasever=4.3 --template-standalone-upgrade -y
This will automatically choose the templates that need to be updated. The templates will be shut down during this process.
Fresh templates on a clean 4.3.0 installation are not affected. Users who perform an in-place upgrade from 4.2 to 4.3 (instead of restoring templates from a backup) are also not affected, since the in-place upgrade process already includes the above fix in stage 4. For more information, see issue #8701.
View the full list of known bugs affecting Qubes 4.3 in our issue tracker.
In accordance with our release support policy, Qubes 4.2 will remain supported for six months after the release of Qubes 4.3, until 2026-06-21. After that, Qubes 4.2 will no longer receive security updates or bug fixes.
Whonix templates are created and supported by our partner, the Whonix Project. The Whonix Project has set its own support policy for Whonix templates in Qubes. For more information, see the Whonix Support Schedule.
This release would not be possible without generous support from our partners and donors, as well as contributions from our active community members, especially bug reports from our testers. We are eternally grateful to our excellent community for making the Qubes OS Project a great example of open-source collaboration.
There are new iso images of Sparky 2025.12 Special Editions out there: GameOver, Multimedia and Rescue. This release is based on Debian testing “Forky”. The December update of Sparky Special Edition iso images feature Linux kernel 6.17, updated packages from Debian and Sparky testing repos as of December 20, 2025, and most changes introduced at the 2025.12 release. The Linux kernels 6.18.2, 6.
20 December, 2025 10:10PM by pavroo
In her book Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of Silicon Valley, published in 2000, Borsook, who is based in Palo Alto, California, and has previously written for Wired and a host of other industry publications, took aim at what she saw as disturbing trends in the tech industry.
The post A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right appeared first on Purism.
20 December, 2025 05:10AM by Purism
19 December, 2025 11:54AM by Joseph Lee
Hello, Community!
Customers and holders of contributor subscriptions can now download VyOS 1.4.4 release images and the corresponding source tarball. This release adds TLS support for syslog, support for the AWS gateway load balancer tunnel handler (on AWS only), an option to match BGP prefix origin validation extended communities in route maps, and more. It also fixes over fifty bugs. Additionally, there's now proper validation to prevent manually assigned multicast addresses, which may break some old malformed configs, so pay attention to it. Last but not least, there's a deprecation warning for SSH DSA keys, which will stop working in VyOS releases after 1.5 due to changes in OpenSSH, so make sure to update your user accounts to keys with a more secure algorithm while you still have time.
19 December, 2025 09:22AM by Daniil Baturin (daniil@sentrium.io)
2025 has been a defining year for SKUDONET — not because of a single announcement or isolated launch, but due to sustained progress across product development, security reinforcement and strategic expansion.
Throughout the year, our focus has remained consistent: strengthening the core platform, improving the operational experience for administrators, and ensuring that security and reliability evolve in line with real-world infrastructure demands.
This approach has translated into continuous, incremental improvements rather than disruptive changes, allowing teams to adopt new capabilities without compromising stability in production environments.
Over the course of eight product releases, SKUDONET continued to mature as an application delivery and security platform designed for critical environments.
Across these updates, we introduced:
Rather than isolated enhancements, these updates reflect a continuous effort to simplify daily operations while reinforcing security and performance at scale.
Key areas of evolution included a renewed Web GUI, designed to be faster, more consistent and easier to navigate in complex environments, as well as meaningful progress in RBAC, enabling more precise and adaptable access control models.
Certificate management also saw significant improvements, with extended Let’s Encrypt automation, broader DNS provider support and fully automated renewal workflows. Alongside this, we reduced execution times for critical operations such as farm start/stop actions and API metric retrieval.
During the year, 31 CVEs were resolved, continuously hardening the platform’s attack surface. Beyond vulnerability remediation, SKUDONET focused on reinforcing internal consistency and predictability under load.
Key improvements were made across:
Several updates also introduced additional hardening measures, including:
Together, these enhancements contribute to a platform that behaves more predictably under pressure and is easier to audit and troubleshoot in production.
Reducing operational overhead for administrators was another consistent theme throughout 2025.
Several improvements were introduced to simplify day-to-day operations and reduce manual intervention, including:
Together, these enhancements allow teams to spend less time on routine maintenance tasks and more time on capacity planning, optimization and higher-level architectural decisions.
One of the most significant milestones of the year was the launch of SkudoCloud, SKUDONET’s fully managed SaaS platform for application delivery and security.
SkudoCloud introduces a new operational model in which teams can deploy secure application delivery infrastructure in minutes, without managing the underlying system lifecycle. From the first deployment, users benefit from:
This launch represents a strategic expansion of the SKUDONET ecosystem, complementing on-premise and self-managed deployments with a cloud-native option designed for teams that prioritize simplicity, speed and operational focus.
Alongside product evolution, SKUDONET continued to expand its international presence.
During 2025, seven new partners joined our ecosystem across Europe, Asia and Latin America, strengthening our ability to support customers globally while maintaining close technical collaboration at a regional level:
Virtual Cable (Spain)
Fortiva (Turkey)
SINUX (Indonesia)
Secra Solutions (Spain)
Bluella (India)
Global OMC TECH Inc. (Taiwan)
BCloud Services SAC (Peru)

This growth reflects increasing demand for open, transparent and flexible application delivery solutions across diverse markets.
2026 will begin with an important milestone: the launch of the SkudoManager, the SKUDONET Central Console.
This unified interface will enable teams to manage multiple nodes, services and products from a single control plane, providing global infrastructure visibility, centralized user and policy management, and integrated monitoring of farms, certificates, security and performance.
Alongside this, we will continue expanding SkudoCloud and reinforcing the Enterprise Edition’s core architecture, staying aligned with our principles of transparency, performance and security.
The progress achieved in 2025 has been possible thanks to our customers, partners and community. We look forward to continuing this journey together in 2026, building an application delivery and security platform that evolves with real operational needs.
19 December, 2025 09:13AM by Nieves Álvarez
19 December, 2025 03:27AM by xiaofei
Announcing Purism's Year End Sale. Offering 15% off your purchases through the end of the year. Just use YEAREND as your coupon code for hardware purchases through December 31, 2025!
Please note that orders placed after December 17th will not ship until January.
The post 2025 Year-End Sale appeared first on Purism.
16 December, 2025 09:17PM by Purism
16 December, 2025 09:11AM by Greenbone AG
Open-source software has been one of the most transformative forces in the technology sector. Operating systems, databases, web servers, and encryption libraries that we now consider essential exist thanks to thousands of developers who chose to release their code so that anyone could study it, modify it, and improve it.
This model has enabled companies and organizations to build advanced solutions without relying exclusively on proprietary software. However, this openness also introduces a recurring challenge: how to ensure the sustainability of open-source software in a world where software is no longer distributed, but consumed as a service.
In this context, recent discussions around well-known projects have brought renewed attention to licenses such as AGPL (Affero General Public License), specifically designed to respond to this shift in how software is delivered and consumed. Beyond individual cases, the underlying message is clear: open-source software requires a balance between those who contribute and those who use it.
When people talk about open-source software, it is often confused with “free software” in the sense of cost. In reality, the term refers to a set of fundamental freedoms:
These freedoms are defined and enforced through licenses. Some licenses prioritize maximum adoption, while others aim to ensure that the ecosystem remains collaborative and sustainable.
During the 1990s and early 2000s, the dominant model was traditional software distribution: installers, CDs, and packaged binaries. In that context, licenses such as GPL ensured that if you redistributed the software, you were required to release your modifications.
Today, this landscape has changed completely: most software is delivered as a service. Companies can benefit from open-source software without ever distributing it; they simply run it on their own infrastructure. This shift is precisely where modern licensing models come into play.
When discussing open-source licenses, we are not just talking about legal text, but about different collaboration models.
Broadly speaking, there are two major families of licenses:
Permissive licenses such as MIT, BSD, or Apache 2.0 allow organizations to take the code, modify it, integrate it into proprietary products, and redistribute it without any obligation to return improvements to the community. They are attractive to companies that want minimal legal constraints and maximum flexibility.
Their primary goal is typically to encourage widespread adoption, leaving the decision to contribute entirely up to each organization.
Copyleft licenses such as GPL, LGPL, and AGPL follow a different logic. If a project benefits from community-driven development, the improvements made on top of that code should remain accessible to the community. The intent is not to restrict commercial use, but to prevent open-source code from being absorbed into closed solutions without any return to the original project.
AGPL (Affero GPL) emerged to address a specific change in context: the transition from distributed software to software offered as a service. Traditional GPL licenses focused on redistribution—if you shipped a binary to a third party, you were required to provide the source code and your modifications.
In the SaaS model, many companies began using open-source software internally or as part of web services without ever “distributing” it. They simply ran it on their own servers and exposed functionality through APIs or web interfaces. In these cases, modifications could be made without being shared.
AGPL extends the scope of reciprocity: if you offer the software to users over a network, your modifications must be made accessible.
As open-source software has become deeply embedded in enterprise infrastructures, many projects have reached a point where usage grows faster than contributions. This is a natural outcome of how technology is consumed today, particularly in SaaS and cloud environments.
In recent years, several mature projects have adopted AGPL or dual-licensing models after observing recurring patterns:
The result is that, despite widespread adoption, the project lacks the resources required for long-term sustainability.
In this context, adopting reciprocal licenses such as AGPL is not a restrictive move, but a mechanism to preserve the continuity of the project.
The shift toward cloud and SaaS models has fundamentally transformed the software lifecycle. Many historical licenses were not designed for environments where software operates exclusively as a remote service.
Licenses such as AGPL introduce mechanisms intended to protect the integrity of open-source ecosystems in this new context. Their purpose is not to limit commercial use, but to ensure that significant improvements do not remain locked inside private implementations—especially when the software underpins critical infrastructure.
From a technical and organizational perspective, this approach provides several benefits:
As a result, more projects are adopting hybrid models that combine open foundations, reciprocity mechanisms, and commercial offerings that fund ongoing development. This is not an exception, but a natural response to how software is built and maintained today.
Open-source software remains a fundamental driver of innovation. Its sustainability, however, depends on maintaining a fair balance between those who create and those who rely on it.
As SaaS models continue to redefine software consumption, licensing frameworks evolve alongside them. AGPL and other reciprocal licenses do not aim to restrict adoption, but to ensure that critical projects can continue to grow, improve, and remain viable over time.
Ultimately, the goal is to protect the continuity of the open-source ecosystem in a technical landscape that is changing rapidly.
At SKUDONET, we work closely with open-source technologies in security and application delivery. Understanding how licensing models evolve is a key part of building sustainable infrastructure.
16 December, 2025 07:13AM by Nieves Álvarez
With our sights set on the beta release milestone, one key component still remains: a way to upgrade from Byzantium to Crimson.
If you're a Linux expert, you might already know how Debian handles release upgrades. Some eager individuals have already upgraded from Byzantium to the Crimson alpha this way. However, we need an easy, graphical upgrade procedure, so everyone with a Librem 5 can get the improvements coming in Crimson.
The post PureOS Crimson Development Report: November 2025 appeared first on Purism.
15 December, 2025 08:59PM by Purism
The integration combines efficient open source virtualization with high-performance, enterprise-grade storage
We are pleased to announce a new, native integration between Canonical LXD and HPE Alletra. This integration brings together Canonical’s securely designed, open source virtualization platform with HPE’s enterprise-grade storage to deliver a simple, scalable, high-performance experience.
HPE Alletra is designed to deliver mission-critical storage at midrange economics, with a consistent data experience across various cloud environments. With this integration, Canonical LXD and MicroCloud users can now provision and manage Alletra block storage directly through the LXD interface, without the need for any third-party plugins or additional abstraction layers.
The integration enables users to seamlessly create, attach, snapshot, and manage thin-provisioned block volumes as easily as working with local storage, while retaining the full performance, resilience, and enterprise data services of HPE Alletra.
HPE Alletra's NVMe-based architecture ensures sub-millisecond latency for demanding workloads, with built-in services such as thin provisioning and data deduplication minimizing storage costs while maintaining consistent performance. Paired with LXD's lightweight control plane and streamlined UI, users can easily operate their environments, combining the best of open source with enterprise storage functionality.
As demands grow, new volumes and storage capacity can be allocated on the fly. Alletra’s scale-out, modular architecture ensures the platform can expand without disrupting running workloads.
LXD provides a unified open source virtualization platform to run system containers and virtual machines with a focus on efficiency, security, and ease of management. With native support for HPE Alletra, enterprises can now build, deploy, and manage their workloads with enterprise storage guarantees, whether in private clouds, on-premises data centers, or edge environments.
The combined solution empowers teams to deliver predictable performance for critical and data-intensive workloads while reducing complexity and ensuring agility.
The integration between LXD and HPE Alletra is available starting with the LXD 6.6 feature release, and requires HPE Alletra WSAPI version 1. LXD currently supports connecting to HPE Alletra storage through the NVMe/TCP or iSCSI protocols. For detailed information, visit Canonical's LXD documentation.
There are new SparkyLinux 2025.12 codenamed “Tiamat” ISO images available of the semi-rolling line. This new release is based on the Debian testing “Forky”. Main changes: – Packages updated from the Debian and Sparky testing repositories as of December 14, 2025. – Linux kernel 6.17.11 (6.18.1, 6.12.62-LTS & 6.6.119-LTS in Sparky repos) – Firefox 140.5.0esr (146.0 in Sparky repos) …
15 December, 2025 09:59AM by pavroo
15 December, 2025 06:38AM by xiaofei
We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.
Our network is not that complicated, but there is a dedicated VLAN for IOT devices.
Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily.
So far, this has never been a problem.
Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.
The API involves sending JSON over multicast, which the Govee device will answer to.
No devices found on the network
After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2
That's not the IP address in the IOT VLAN!
Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.
You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.
Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces and adding of Govee Lights Local will work.
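For reference, the debug logging used above can also be enabled persistently via Home Assistant's standard logger integration in configuration.yaml. A minimal sketch:

# configuration.yaml: keep general logging quiet, but trace the Govee integration
logger:
  default: warning
  logs:
    homeassistant.components.govee_light_local: debug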
When your organization first signed its Microsoft Azure Consumption Commitment (MACC), it was a strategic step to unlock better pricing and enable cloud growth. However, fulfilling that commitment efficiently requires planning. Organizations often look for ways to retire their MACC that drive strategic value, rather than simply increasing consumption to meet a deadline.
The goal is to meet your commitment while delivering long-term benefits to the business.
With Ubuntu Pro in the Azure Marketplace, you can retire your MACC at 100% of the pretax purchase amount. In practice, this allows you to meet consumption goals on your standard Azure invoice, while securing your open source supply chain and automating compliance.
Instead of simply increasing consumption to hit a target, effective IT and FinOps teams align their MACC with broader strategic goals. Open source support and security maintenance is a priority for enterprises, as a recent Linux Foundation report shows: 54% of enterprises want long-term guarantees, and 53% expect rapid security patching.
Ubuntu Pro offers both. By choosing software that strengthens your security and operations, you can retire your MACC while funding capabilities your organization prioritizes.
Allocating MACC to Ubuntu Pro is a direct investment in your open source estate.
By choosing Ubuntu Pro, you convert your MACC spend into a maintained open source foundation across the development lifecycle.
Retiring your commitment should be financially efficient and administratively simple. While standard Marketplace listings are MACC-eligible, many organizations use private offers to secure tailored commercial terms, like custom pricing or volume discounts, without sacrificing eligibility.
We support both standard private offers and multiparty private offers for rollouts involving resellers in the US/UK. In all cases, checking that your purchase counts toward your commitment is straightforward.
This approach guarantees that every dollar spent satisfies your financial goals while delivering the specific security coverage your organization needs.
Ready to align Ubuntu Pro with your MACC? Talk to our team.
We are proud to announce our new stable release 🚢 version 2025.12, code-named ‘Postwurfsendung’!
Grml is a bootable live system (Live CD) based on Debian. Grml 2025.12 brings you fresh software packages from Debian testing/forky, enhanced hardware support and addresses known bugs from previous releases.
Like in the previous release 2025.08, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).
Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️
12 December, 2025 03:10AM by xiaofei
A lot of people have asked us why Ubuntu Studio comes with a panel on top as the default. For that, it’s a simple answer: Legacy.
When Ubuntu Studio 12.04 LTS (Precise Pangolin) was released over 13 years ago, it shipped with a top panel by default, as that was the default for our desktop environment: Xfce.
Fast-forward eight years to 20.10 and Xfce was no longer our default desktop environment: we had switched to KDE’s Plasma Desktop. Plasma has a bottom panel by default, similar to Windows. However, to ease the transition for our long-time users, we kept the panel on top by default, resizing it to be similar to the default top panel of Xfce.

With 25.10's release, we included an additional layout: two panels. One panel is on top with a global menu, and the bottom contains some default applications, a trash can, and a full-screen application launcher. This layout is meant to feel familiar to users coming from another operating system for creativity: macOS.

Starting with 26.04 LTS, we'll also include one more layout: a bottom, Windows 10-like panel. This is to ease the transition for those coming from Windows, in response to popular requests and reports.

It has been 13 years since we defaulted to a top panel, but is that the right idea anymore?
Right now, on the Ubuntu Discourse, we have a poll to decide if we should change the default layout starting with 26.04 LTS. This will not affect layouts for anyone upgrading from a prior release, but only new installations or new users going forward.
If you would like to participate in the poll, head on over to the Ubuntu Discourse and cast a vote!
[December 11, 2025] Today Canonical, the publisher of Ubuntu, announced the immediate availability of Java 25 across Google Cloud’s serverless portfolio, including Cloud Run, App Engine, and Cloud Functions.
This release is the result of a collaboration between Google Cloud and Canonical, and it will allow developers to access the latest Java features the moment they are released publicly. All three serverless products use Ubuntu 24.04 as the base image, with Canonical actively maintaining the runtime and ensuring timely security patches.
Deploying Java 25 is easy and fast thanks to Google Cloud Buildpacks. You do not need to write Dockerfiles by hand or manage complex container configurations.
Buildpacks are designed to transform your source code into a production-ready container image automatically. When you deploy your application, the Buildpacks system detects your requested Java version and automatically provisions the Ubuntu-based Java 25 environment, which the Canonical team continuously updates with security fixes. This "source-to-deploy" workflow allows you to focus entirely on writing code while Google Cloud and Canonical handle the underlying OS and runtime security.
To get started, simply set the GOOGLE_RUNTIME_VERSION environment variable to 25 to select the JDK version:
pack build java-app --builder=gcr.io/buildpacks/builder --env GOOGLE_RUNTIME_VERSION=25
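If you prefer to keep the runtime pin in source control rather than on the command line, the Buildpacks project descriptor can carry the same variable. A minimal sketch, assuming the project.toml build.env table that pack reads from the project root:

# project.toml: equivalent to passing --env on the pack command line
[[build.env]]
name = "GOOGLE_RUNTIME_VERSION"
value = "25"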
To learn more about Canonical's support for Java, please read our reference documentation.
Setting up a local Deep Learning environment can be a headache. Between managing CUDA drivers, resolving Python library conflicts, and ensuring you have enough GPU power, you often spend more time configuring than coding.
Google Cloud and Canonical work together to solve this with Deep Learning VM Images, which use Ubuntu Accelerator Optimized OS as the base OS. These are pre-configured virtual machines optimized for data science and machine learning tasks. They come pre-installed with popular frameworks, such as PyTorch, and the necessary NVIDIA drivers.
In this guide, I’ll walk you through how to launch a Deep Learning VM on GCP using the Console, and how to verify your software stack so you can start training immediately.
First, log in to your Google Cloud Console. Instead of creating a generic Compute Engine instance, we want to use a specialized image from the Marketplace.

Once you are on the Marketplace Deep Learning VM listing page, click Launch. This will take you to the deployment configuration screen. This is where you define the power behind your model.
The key settings you need to pay attention to are shown in the configuration screen below:

Configuring the VM instance in the Google Cloud Console.
Once you have made your selections, click Deploy.
After a minute or two, your VM will be deployed. You can find it listed in your Compute Engine > VM Instances page.
To access the machine, click the SSH button next to your new instance. This opens a terminal window directly in your browser.

Now, let’s make sure everything is working under the hood.
If you have attached a GPU, the most important check is to ensure the drivers have loaded correctly. Run the following command in your SSH terminal:
nvidia-smi
You should see a table listing your GPU (e.g., A100) and the CUDA version.

Google’s Deep Learning VMs usually come with PyTorch pre-configured. You can check the installed packages to ensure your favorite libraries are there:
pip show torch
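To go a step further and confirm that PyTorch can actually reach the GPU, a one-line check is enough (a quick sanity test, not an official verification step):

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If the second value prints True, the framework sees the CUDA device and you are ready to train.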

And that’s it! In just a few minutes, you have built a fully configured Deep Learning environment. You can now start running training scripts directly from the terminal.
Don’t forget: Deep Learning VMs with GPUs can be expensive. Remember to stop your instance when you aren’t using it to avoid unexpected charges!
Learn more about Canonical’s offerings on GCP
Today, we are releasing 7.3.1 instead of 7.3 because a security vulnerability was fixed in a software library included in Tails while we were preparing 7.3. We started the release process again to include this fix.
For more details, read our changelog.
Automatic upgrades are available from Tails 7.0 or later to 7.3.1.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions.
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.3.1 directly.
This time, we kidnapped…errr…welcomed a very special guest: Artur Coelho. An ICT teacher, trainer, passionate about education and creativity with robotics, AI, 3D, and software programming, he maintains a critical, current, and informed view of the teaching of Information and Communication Technologies (ICT) in Portuguese schools, where he pioneered the introduction of several technologies and built an enviable track record, positively influencing a great number of students and teachers. The conversation was so long and interesting that we will try to kidna…invite him again in the future.
You know the drill: listen, subscribe, and share!
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is: "Won't see it comin' (Feat Aequality & N'sorte d'autruche)", by Alpha Hydrae, and is licensed under the CC0 1.0 Universal License. The terrible-quality jingles were played live and without a safety net by Miguel, so we apologize for any inconvenience caused. This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was created by Luís Louro - the mascot Faruk - third-place winner in the Software Freedom Day mascot contest.
Telecommunications networks are undergoing a cloud-native revolution. 5G promises ultra-fast connectivity and real-time services, but achieving those benefits requires an infrastructure that is agile, low-latency, and highly reliable. Kubernetes has emerged as a cornerstone for telecom operators to meet 5G demands. In 2025, Canonical Kubernetes delivers a single, production-grade Kubernetes platform with long-term support (LTS) and telco-specific optimizations, deployable across clouds, data centers, and the far edge.
This blog explores how Canonical Kubernetes empowers 5G and cloud-native telco workloads with high performance, enhanced platform awareness (EPA), and robust security, while offering flexible deployment via snaps, Juju, or Cluster API. We’ll also highlight its integration into industry initiatives like Sylva, support for GPU/DPU acceleration, and synergy with MicroCloud for scalable edge infrastructure.
Telecom decision-makers face immense pressure to evolve their networks rapidly and cost-effectively. Traditional, hardware-centric architectures struggle to keep pace with 5G’s requirements for low latency, high throughput, and dynamic scaling. This is where Kubernetes – the de facto platform for cloud-native applications – comes in. Kubernetes brings powerful automation, scalability, and resiliency that allow telcos to manage network functions like software across data centers, public clouds, and far-edge deployments. The result is a more agile operational model: services can be rolled out faster, resources automatically optimized to demand, and updates applied continuously without disrupting critical services. In the 5G era, such agility is essential for delivering innovations like network slicing, multi-access edge computing (MEC), and AI-driven services.
At the same time, Kubernetes opens the door for telcos to refactor their network functions into microservices. Instead of relying on monolithic appliances or heavy virtual machines, operators can deploy cloud-native network functions (CNFs) – essentially containerized network services – that are lighter and faster to roll out than traditional virtual network functions (VNFs). By shifting to CNFs, new network features (whether a 5G core component or a firewall) can be introduced or updated in a fraction of the time, using automated CI/CD pipelines instead of lengthy manual upgrades. This approach helps telcos simplify the migration from legacy systems to a more agile, software-driven network model.
However, adopting Kubernetes for telecom workloads also means meeting rigorous performance and reliability standards. Carrier-grade services like voice, video, and core network functions can’t tolerate unpredictable delays or downtime. Telco leaders need a Kubernetes platform that combines cloud-native flexibility with telco-grade performance, security, and support. Canonical Kubernetes answers that call, providing a Kubernetes distribution specifically tuned for telecommunications needs.
Canonical’s Kubernetes distribution has been engineered from the ground up to address the unique challenges of 5G and cloud-native telco cloud deployments. It is a single, unified Kubernetes offering that blends the ease of use of lightweight deployments with the robustness of an enterprise-grade platform. Importantly, Canonical Kubernetes can be deployed and managed in whatever way best fits a telco’s environment – whether installed as a secure snap package or integrated with full automation tooling like Juju (model-driven operations) or Kubernetes Cluster API (CAPI). This flexibility means operators can start small at the network edge or scale up to carrier-core clusters, all using the same consistent platform. Notably, Canonical Kubernetes brings cloud-native telco-friendly capabilities in the areas of performance, networking, operations, and support:
Real-time Linux kernel support ensures that high-priority network workloads execute with predictable, ultra-low latency, a critical requirement for functions like the 5G user plane function (UPF). In parallel, built-in support for advanced networking (including SR-IOV and DPDK) enables fast packet processing by giving containerized network functions direct access to hardware, dramatically reducing network I/O latency for high-bandwidth 5G applications. Together, these features allow cloud-native network functions to meet the stringent performance and determinism requirements once achievable only on specialized telecom hardware.
Canonical Kubernetes integrates seamlessly with acceleration technologies to support emerging cloud-native telco workloads. It works with NVIDIA's GPU and networking operators to leverage hardware accelerators (GPUs, SmartNICs, DPUs) for intensive tasks. It supports NVIDIA's Multi-Instance GPU (MIG), which expands the performance and value of NVIDIA's data center GPUs, such as the latest GB200 and RTX PRO 6000 Blackwell Server Edition, by partitioning the GPU into up to seven instances, each fully hardware-isolated with its own high-bandwidth memory, cache, and streaming multiprocessors. The partitioned instances are transparent to workloads, which greatly optimizes the use of resources and allows for serving workloads with guaranteed QoS.
This means telecom operators can run AI/ML analytics, media processing, or virtual RAN computations that take advantage of GPUs and DPU offloading within their Kubernetes clusters – all managed under the same platform. By tapping into hardware acceleration, telcos can deliver advanced services (like AI-driven network optimization or AR/VR streaming) with high performance, without needing separate siloed infrastructure.
Day-0 to Day-2 operations are streamlined through automation in Canonical’s stack. The distribution supports full lifecycle management – clusters can be deployed, scaled, and updated via one-step commands or integrated CI/CD pipelines, reducing manual effort and errors. Using Juju charms, Canonical’s model-driven operations further simplify complex orchestration, enabling teams to configure and update Kubernetes and related services in a repeatable, declarative way. Built-in self-healing and high availability features ensure that the platform can recover from failures automatically, keeping services running without intervention.
This high degree of automation translates into faster rollout of new network functions and updates (with minimal downtime), allowing telco teams to focus on innovation rather than routine ops tasks.
Canonical Kubernetes is designed to run from the core to the far edge with equal ease. Its lightweight, efficient design (delivered as a single snap package) results in a low resource footprint, making it viable even on a one- or two-node edge cluster in a remote site. At the same time, it scales up to multi-node deployments for central networks. The platform supports a variety of configurations – from a single node for an ultra-compact edge appliance, to a dual-node high-availability cluster, to large multi-node clusters for data centers – all with the same tooling and consistent experience.
This flexibility allows operators to extend cloud capabilities to edge locations (for ultra-low latency processing) while managing everything in a unified way. In practice, Canonical’s solution can power cloud-native telco IT workloads, 5G core functions, and edge applications under one umbrella, meeting the specific performance and latency needs of each environment.
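As a rough illustration of the snap-based workflow described above, bringing up a single-node cluster is a three-command affair. This is a sketch under stated assumptions: the channel name is illustrative, so check the Canonical Kubernetes documentation for current tracks.

sudo snap install k8s --classic --channel=1.32-classic/stable   # channel is an assumed example
sudo k8s bootstrap    # initialize the control plane on this node
sudo k8s status       # confirm the cluster is ready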
Canonical backs its Kubernetes with long-term support options far exceeding the typical open-source release cycle. Each Canonical Kubernetes LTS version can receive security patches and maintenance for up to 15 years, ensuring a stable foundation for cloud-native telco services over the entire 5G rollout and beyond. (For comparison, upstream Kubernetes offers roughly 1 year of support per release).
This extended support window means carriers can avoid frequent, disruptive upgrades and rest assured that their infrastructure remains compliant over the long term. Such a commitment to stability is a key reason telecom operators choose Canonical – long-term maintenance provides confidence that critical network workloads will run on a hardened, well-maintained platform for many years.
As an open-source, upstream-aligned distribution, Canonical Kubernetes has no licensing costs and prevents vendor lock-in. Telcos are free to deploy it on their preferred hardware or cloud, and they benefit from a large ecosystem of Kubernetes-compatible tools and operators. The platform’s efficient resource usage and automation also help drive down operating costs – by improving hardware utilization and simplifying management, it enables operators to serve growing traffic loads without linear cost increases. In short, Canonical’s Kubernetes offers carrier-grade performance and features at a fraction of the cost of proprietary alternatives, all while keeping the operator in control of their technology roadmap.
Using Canonical Kubernetes, cloud-native telcos can position themselves to innovate faster and operate more efficiently in the 5G era. They can readily stand up cloud-native 5G Core functions, scale out Open RAN deployments, and push applications to the network edge – all on a consistent Kubernetes foundation. In fact, Kubernetes makes it feasible for telcos to transition from traditional VNFs on virtual machines to containerized CNFs, reducing resource overhead and speeding up deployment of network features. This means legacy network applications can be modernized step-by-step and run alongside new microservices on the same platform, avoiding risky “big bang” overhauls.
The result is not only technical efficiency but business agility: operators can launch new services (from enhanced mobile broadband to IoT analytics) in weeks instead of months, respond quickly to customer demand spikes, and streamline the integration of new network functions or vendors.
Early adopters in the industry are already seeing the benefits. For example, Canonical’s Kubernetes has been embraced in initiatives like the European Sylva open telco cloud project, in part due to its security, flexibility and long-term support advantages. This momentum underscores that a performant, open Kubernetes platform is becoming a strategic asset for telcos aiming to stay ahead in a competitive landscape. Perhaps most importantly, Canonical Kubernetes lets telcos focus on delivering value to subscribers – ultra-reliable connectivity, rich digital services, tailored enterprise solutions – rather than getting bogged down in infrastructure complexity. It abstracts away much of the heavy lifting of deploying and upgrading distributed systems, while providing the controls needed to meet strict cloud-native telco requirements. The combination of automation, performance tuning, and openness creates a powerful engine for telecom innovation.
At the edge, complexity is the enemy. That’s why Canonical Kubernetes pairs naturally with MicroCloud, our lightweight production-grade cloud infrastructure for distributed environments. MicroCloud fits the edge use case extremely well: it is easy to deploy, fully automated, and optimized for bare-metal and low-power sites. Drop it into a telco cabinet, regional hub, or remote data center, and you get a resilient control plane for running Kubernetes, virtualization, and storage with zero overhead.
In such deployments, MicroCloud and Canonical Kubernetes form a tightly integrated stack that brings cloud-native operations to the far edge. Need to orchestrate CNFs next to VMs? Spin up a single-node cluster with high availability? Scale to dozens of locations without rearchitecting? This combo makes it possible, with snaps for simple updates, Juju for full automation, and long-term support built in.
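For a sense of how lightweight this is, initializing a MicroCloud site is a short session. A sketch, assuming the component snaps described in the MicroCloud documentation; exact snap names and channels may vary by release:

sudo snap install lxd microceph microovn microcloud   # core components
sudo microcloud init                                  # interactive setup; discovers the other nodes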
5G and edge computing are reshaping telecom networks, and Kubernetes has proven to be an essential technology powering this evolution. Industrial IoT, automotive applications, smart cities, robotics, remote health care, and the gaming industry rely on high data transfer, close-to-real-time latency, and very high availability and reliability. Canonical Kubernetes brings the best of cloud-native innovation to the telecom domain in a form that aligns with carriers' operational realities and performance needs. It delivers a rare mix of benefits – agility and efficiency from automation, high performance for demanding workloads, freedom from lock-in, and assured long-term support – making it a compelling choice for any telco modernizing its infrastructure.
Telecommunications leaders looking to become cloud-native telcos should consider how an open-source platform like Canonical Kubernetes can serve as a foundation for growth. Whether the goal is to reduce operating costs in the core network, roll out programmable 5G services at the edge, or simply break free from proprietary constraints, Canonical’s Kubernetes distribution provides a proven path forward.
To dive deeper into how Canonical Kubernetes meets telco performance and reliability requirements, we invite you to read our detailed white paper: Addressing telco performance requirements with Canonical Kubernetes. It offers in-depth insights and benchmark results from real-world cloud-native telco scenarios. Additionally, visit our blogs on Ubuntu.com and Canonical.com for more success stories and technical guides, from 5G network modernization strategies to edge computing.
Visiting MWC 2026? Book a meeting with Canonical to find out more.
NBC News reports that Trump Mobile customers have been waiting months for a promised ‘Made in the USA’ smartphone, originally announced for August delivery. The T1 phone was marketed as domestically produced, but delays and vague updates have raised skepticism. References to ‘Made in the USA’ have been removed from the company’s site, and leaked images suggest the device resembles existing Chinese-made models. This situation underscores the complexity of building smartphones in America without established infrastructure.
The post Purism Liberty Phone Exists vs. Delayed T1 Phone appeared first on Purism.
10 December, 2025 03:36PM by Purism
In software engineering, we often talk about the “iron triangle” of constraints: time, resources, and features. You can rarely fix all three. At many companies, when scope creeps or resources get tight, the timeline is often the first element of the triangle to slip.
At Canonical, we take a different approach. For us, time is the fixed constraint.
This isn’t just about strict project management. It is a mechanism of trust. Our users, customers, and the open source community need to know exactly when the next Ubuntu release is coming. To deliver that reliability externally, we need a rigorous operational rhythm internally, and for over 20 years, we have honored this commitment.
Here is how we orchestrate the business of building software, from our six-month cycles to the daily pulse of engineering:

Fig. 1 Canonical’s Operating Cycle
Our entire engineering organization operates on a six-month cycle that aligns with the Ubuntu release cadence. This cycle is our heartbeat. It drives accountability and ensures we ship features on time.
To make this work, we rely on three critical control points.
This discipline ensures we stay agile, and that we can adjust our trajectory halfway through without derailing the entire delivery.
While the six-month cycle sets the destination, the “pulse” gets us there. A pulse is our version of a two-week agile sprint.
Crucially, these pulses are synchronized across the entire company, on a cross-functional basis. Marketing, Sales, and Support all operate on this same frequency. When a team member says, “we will do it next pulse,” everyone, regardless of department, knows exactly what that means. This creates a shared expectation of delivery that keeps the whole organization moving in lockstep.
We distinguish between a “pulse” (our virtual, two-week work iteration) and a “sprint.” For us, a sprint is a physical, one-week event where teams meet face-to-face.
We are a remote-first company, which makes these moments invaluable. Sprints provide the high-bandwidth communication and human connection needed to sustain us through months of remote execution.
We also stagger these sprints to separate context. Our Engineering Sprints happen in May and November (immediately after an Ubuntu release) so teams can focus purely on technical roadmapping. Commercial Sprints happen in January and July, aligning with our fiscal half-years to focus on business value. This “dual-clock” system ensures that commercial goals and technical realities are synchronized without overwhelming the teams.
Of course, market reality doesn’t always adhere to a six-month schedule. Customers have urgent needs, and high-value opportunities appear unexpectedly. To handle this without breaking our rhythm, we use the Commercial Review (CR) process.
The CR process protects our engineering teams from chaos while giving us the agility to say “yes” to the right opportunities.
This ensures that when we do deviate from the plan, it is a strategic choice, not an accident.
Underpinning this entire rhythm is a commitment to quality standards. We follow the Plan, Do, Check, Act (PDCA) cycle, a concept rooted in ISO 9001. While we align with these formal frameworks, the cycle has become a natural habit for us at Canonical.
This operational discipline is what enables up to 15 years of LTS commitment on a vast portfolio of open source components: Long-Term Support for the entire, integrated collection of application software, libraries, and toolchains. Offering security maintenance at that scale is only possible because we are operationally consistent. Long-term stability is the direct result of short-term discipline.
By sticking to this rhythm, we ensure that Canonical remains not just a source of great technology, but a reliable partner for the long haul.