(Chinese) The Linglong Store Community Edition 2.0 era begins! Automatic installation of the Linglong environment is supported on more than ten distributions
13 February, 2026 02:23AM by xiaofei
Debian - on which BunsenLabs is based - has dropped 32-bit kernels, installers and iso images from the current stable Trixie release.
BunsenLabs will be forced to do likewise, and the upcoming Carbon release will have no 32-bit iso images or 32-bit package repositories.
Users with 32-bit machines can continue to use BunsenLabs Boron for as long as Debian Long Term Support for Bookworm continues, which is expected to be until June 30, 2028:
https://wiki.debian.org/LTS
Previous discussion:
https://forums.bunsenlabs.org/viewtopic … 48#p140748
11 February, 2026 10:20AM by Joseph Lee
11 February, 2026 02:34AM by xiaofei
This release is an emergency release to fix critical security vulnerabilities in the Linux kernel.
Update the Linux kernel to 6.12.69, which fixes DSA 6126-1, multiple security vulnerabilities that could allow an application in Tails to gain administration privileges.
For example, if an attacker was able to exploit other unknown security vulnerabilities in an application included in Tails, they might then use DSA 6126-1 to take full control of your Tails and deanonymize you.
This attack is very unlikely, but could be performed by a strong attacker, such as a government or a hacking firm. We are not aware of this attack being used in practice.
Update Thunderbird to 140.7.1.
Fix opening the Wi-Fi settings from the Tor Connection assistant. (#18587)
Fix reopening Electrum when it was not closed cleanly. (#21390)
Fix applying the language saved to the USB stick in the Welcome Screen. (#21383)
For more details, read our changelog.
Automatic upgrades are available from Tails 7.0 or later to 7.4.2.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions:
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.4.2 directly:
The following new Fedora 43 templates are now available for Qubes OS 4.3:
fedora-43-xfce (default Fedora template with the Xfce desktop environment)
fedora-43 (alternative Fedora template with the GNOME desktop environment)
fedora-43-minimal (minimal template for advanced users)
Note: Fedora 43 template availability for Qubes OS 4.2 will be announced separately.
There are two ways to upgrade a template to a new Fedora release:
Recommended: Install a fresh template to replace an existing one. This option is simpler for less experienced users, but it won’t preserve any modifications you’ve made to your template. After you install the new template, you’ll have to redo your desired template modifications (if any) and switch everything that was set to the old template to the new template. If you choose to modify your template, you may wish to write those modifications down so that you remember what to redo on each fresh install. To see a log of package manager actions, open a terminal in the template and use the dnf history command.
Advanced: Perform an in-place upgrade of an existing Fedora template. This option will preserve any modifications you’ve made to the template, but it may be more complicated for less experienced users.
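For the recommended option, a minimal dom0 sketch could look like the following (the qube name "work" is an example; qvm-template, qvm-prefs and qubes-prefs are the standard Qubes management tools):

# In dom0: install the new template (here, the default Xfce variant)
qvm-template install fedora-43-xfce
# Point an existing qube (example name "work") at the new template
qvm-prefs work template fedora-43-xfce
# Optionally make the new template the system-wide default
qubes-prefs default_template fedora-43-xfce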
Note: No user action is required regarding the OS version in dom0 (see our note on dom0 and EOL).
05 February, 2026 10:14AM by xiaofei
05 February, 2026 07:04AM by Joseph Lee
Open Source infrastructure is often a deliberate and well-reasoned choice. It offers transparency, control and a level of flexibility that fits well with how many engineering teams like to build and operate systems. Deploying an open source load balancer or reverse proxy is usually a conscious decision, backed by solid documentation, community knowledge and proven behavior in production.
In most cases, it performs exactly as expected. Configuration is understandable, behavior is predictable and the system feels under control.
The challenge does not appear at deployment time. It emerges later, as traffic increases, environments expand and the same platform has to support more services, more changes and more operators. Configuration grows, operational tasks multiply and the margin for error narrows. Changes that were once straightforward start requiring coordination, validation and caution.
At that stage, the problem is not the software itself. The difficulty lies in operating open source infrastructure reliably as the system grows and operational demands increase.
At this stage, most teams know the technology well. They trust Open Source and often run mature projects like HAProxy, NGINX, Apache, or even the SKUDONET Community Edition. These tools are proven, fast and predictable, and they give administrators full control over how traffic is handled.
As the environment grows, friction starts to appear:
Security adds more pressure. Rules, ACLs or WAF logic exist, but tuning them safely takes effort. When something goes wrong, it is not always clear whether the issue comes from configuration, traffic patterns or the infrastructure itself.
None of this breaks the system. But it slows it down operationally. The load balancer still works, yet running it demands more time, more care and more experience than before. This is usually when teams start questioning whether relying only on community tooling is still the right model for their current scale.
When this point is reached, teams know what is not working and they start by looking around the ecosystem they already trust. Users of HAProxy, NGINX or Apache usually do not want to replace their stack. Instead, they evaluate the commercial or enterprise options built around the same technologies, expecting easier operation, better visibility and safer upgrades.
These editions typically promise:
The problem is that this promise does not always translate into simpler operations. Some enterprise versions keep much of the same operational complexity as the community tools, with configuration-heavy workflows and limited abstraction. Others introduce pricing models that grow quickly with traffic and environments, or platforms that are technically powerful but harder to operate on a daily basis.
SKUDONET Enterprise is designed to remove the operational friction that appears when Open Source infrastructure grows.
Configuration, traffic control and visibility are handled from a single plane, instead of being spread across files, nodes and environments. This reduces the effort required to introduce changes and lowers the operational risk.
In practice, this translates into:
High availability, updates and maintenance are treated as part of the platform, not as separate projects that require careful coordination. Routine tasks no longer depend on manual processes or deep system-specific knowledge to be executed safely.
Integration remains straightforward. Existing architectures and deployment models stay in place, allowing teams to add Enterprise capabilities without redesigning their stack or introducing heavy control layers.
Pricing stays predictable as environments scale, avoiding the cost escalation and licensing complexity commonly associated with traditional commercial editions.
The result is a platform that preserves the technical foundations teams trust, while making infrastructure easier to operate, easier to maintain and easier to scale.
If you want to evaluate how this approach works in practice, you can try SKUDONET Enterprise with a 30-day demo and validate the fit in your own environment.
04 February, 2026 01:18PM by Nieves Álvarez
04 February, 2026 10:05AM by xiaofei
BunsenLabs Carbon Release Candidate 3 iso is available here:
https://sourceforge.net/projects/bunsen … hybrid.iso
https://sourceforge.net/projects/bunsen … iso.sha256
sha256 sum: 47de769531fc0c99d9e0fa4b095ff280919684e5baae29fe264b9970e962a45f
Unless unexpected bugs come up, this should be the same as the Official Release of BunsenLabs Carbon.
If you do find a new bug related to the Carbon RC3 iso, please post it in the Bug Reports section, adding a tag [Carbon RC3].
The 1st monthly Sparky project and donate report of 2026:
– Linux kernel updated up to 6.18.8, 6.12.68-LTS, 6.6.122-LTS
– Added new desktop to Sparky testing (9): Labwc
– Sparky 2026.01~dev2 Labwc released
– Changed the ‘firefox-sparky’ package name to ‘firefox-latest’
Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive.
01 February, 2026 07:26PM by pavroo
You might have seen Policy will reject signature within a year warnings in apt(-get) update runs like this:
root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit: http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
No binding signature at time 2024-06-19T10:33:47Z
because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.
If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you’re reading this because you’re already affected).
Let’s simulate the future:
root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11"
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026
root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Err:1 http://foo.example.org/debian demo InRelease
Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
No binding signature at time 2024-06-19T10:33:47Z
because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
No binding signature at time 2024-06-19T10:33:47Z
because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100
Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVAT_KEY_FILE > $PRIVAT_KEY_FILE-fixed).
If you don’t have access to the according private key (e.g. when using an upstream repository that has been ignoring this issue), you’re out of luck for a proper fix.
But there’s a workaround for the apt situation (related see apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):
root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048 = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01 # Extend the expiry for legacy repositories
sha224 = 2026-02-01
[packets]
signature.v3 = 2026-02-01 # Extend the expiry
Adjust this according to your needs:
root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/
root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048 = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01 # Extend the expiry for legacy repositories
sha224 = 2026-09-01
[packets]
signature.v3 = 2026-02-01 # Extend the expiry
Then we’re back into the original situation, being a warning instead of an error:
root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]
Please note that this is a workaround, and not a proper solution.
30 January, 2026 10:05AM by xiaofei
Hello, Community! The belated development update for December 2025 and January 2026 is finally here.
We are getting closer to the 1.5 release but there's also quite a bit of work towards the future. In particular, there's good progress towards replacing the old configuration command completion mechanism with a VyConf-based equivalent, which will allow us to get rid of legacy command definition files eventually.
More immediate improvements include certificate-based authentication for OpenConnect, new operational commands for VPP, support for configuring watchdog timers, and multiple bug fixes.
30 January, 2026 09:00AM by Daniil Baturin (daniil@sentrium.io)
This release is an emergency release to fix critical security vulnerabilities in OpenSSL, a network encryption library used by Tor.
Update the OpenSSL library to 3.5.4, which fixes DSA 6113-1, a set of vulnerabilities that could be critical. Using this set of vulnerabilities, a malicious Tor relay might be able to deanonymize a Tails user.
We are not aware of these vulnerabilities being exploited in practice.
Update the Tor client to 0.4.8.22.
Update Thunderbird to 140.7.0.
Fix Gmail authentication in Thunderbird. (#21384)
Add a spinner when opening the Wi-Fi settings from the Tor Connection assistant. (#18594)
For more details, read our changelog.
The homepage of Tor Browser incorrectly says you are still using Tails 7.4, even after you have upgraded to 7.4.1. It also links to the release notes for that older version.
If in doubt, to verify that you are using Tails 7.4.1, choose Apps ▸ Tails ▸ About Tails.
Automatic upgrades are available from Tails 7.0 or later to 7.4.1.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions:
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.4.1 directly:
In recent weeks, several incidents surfaced where content providers blocked traffic coming from multi-tenant proxies to stop automated attacks or illegal rebroadcasting. The countermeasure reduced the offensive surface, but also denied access to legitimate users travelling through the same channel. It illustrates a common issue: upstream security — security applied at proxies, CDNs or scrubbing centers before traffic reaches the application — does not always retain the context required to make good decisions.
The relevant point is not the individual incident, but what it exposes: when security runs upstream and multi-tenant, the backend loses semantics, session state and part of the operational timeline. This alters how attacks are detected, how they are mitigated, and how user continuity is preserved.
The issue is not that these proxies “fail”, but that their efficiency relies on sharing channel, capacity and enforcement across thousands of customers. The model optimizes cost and scale, but erodes signals that were historically essential for security and operations: origin, semantics, persistence and temporal correlation. Once those signals disappear, security stops being a purely defensive problem and becomes an operational decision problem.
Multi-tenant proxies — Cloudflare being the most visible reference — terminate TLS, filter bots, apply WAF rules, absorb DDoS and optimize latency before forwarding requests to the backend. Operationally, the model offers:
The problem emerges in the least visible layer: traffic identity. When thousands of customers share the same defensive channel, the IP address no longer represents a user, it represents the proxy. For the backend, origin stops being an identity signal and becomes a collective. Attackers, legitimate users and corporate SSO traffic exit through the same door.
Traditional web security largely assumed origin was enough to make decisions. In a multi-tenant model, that signal degrades and the system no longer separates legitimate from abusive behavior with the same clarity.
At that point the decision collapses to two choices:
The difficulty is not having two options, but having to choose with incomplete information. That is where the multi-tenant model shows its real cost: it gains efficiency but loses context.
Context loss is not just about hiding origin or masking IP. In production it appears across multiple planes, and — importantly — not in the same place nor at the same time. This fragments the operational timeline, weakens signals and complicates defensive decision-making.
When TLS negotiation and establishment happen before reaching the application, the backend stops seeing signals that do not indicate attack but do indicate degradation of legitimate clients, such as:
During brownouts or incident response, these signals matter because they describe the real client, not the attacker. In a multi-tenant proxy, that degradation disappears and the application only sees “apparently normal” HTTP. For continuity and SLO compliance, that information is lost in the wrong plane.
When filtering occurs before the application — at a proxy or intermediary — another effect appears: the backend sees the symptom but not the cause.
The real circuit is:
Request → WAF/Proxy → Block → END
but for the backend it becomes simply: less traffic
Without correlation between planes, root-cause analysis becomes unreliable. A drop in requests may look like failure, user abandonment or load pressure when it is in fact defensive blocking.
In modern architectures, user state does not live in the connection but in the session: identity, role, flow position and transactional continuity. When session lives in a proxy or intermediary layer, the backend loses persistence and affinity. In applications driven by login, payment or transactional actions, this is critical.
The symptoms do not resemble an attack; they resemble broken UX:
A typical case where infrastructure “works”, but the user churns because the flow cannot complete.
The quietest plane concerns who sees what and when. If logs, metrics and traces stay at the proxy or upstream service, the downstream side — the one closer to application and backend — becomes partial or blind.
Without temporal continuity across planes, the following increase:
And, more importantly, real-time defensive decisions degrade — precisely where continuity matters.
In recent years, defensive analysis has shifted toward behavior. Where the client comes from matters less than what the client is trying to do. Regular timings, repeated attempts, invalid sequences, actions that violate flow logic, or discrepancies between what the client requests and what the application expects are more stable signals than an aggregated IP.
In short:
Interpreting intent requires three planes that upstream proxies lose by design:
Without those planes, defensive decisions simplify. With them, they can be made precise.
If context disappears upstream, the question is not “remove the proxy”, but locating where the information lives that distinguishes abuse from legitimate use. That information only exists where three things converge:
That point is usually the application or the component immediately before it (typically an ADC or integrated WAF), where session, semantics, protocol, results and transactional continuity coexist.
A practical example:
login() → login_failed() → login_failed() → login_failed()
vs:
login() → 2FA() → checkout() → pay()
For the upstream proxy, both are valid HTTP. For the application, they are different intentions: abuse vs legitimate flow.
What matters here is not “blocking more”, but blocking with context — which in operations becomes the difference between:
and, in service terms, between losing legitimate users or preserving continuity.
SKUDONET operates in that plane closer to the application, without the constraints of the multi-tenant model. The approach is mono-tenant and unified: TLS, session, WAF, load-balancing and observability coexist in the same plane without fragmenting across layers or externalizing identity and semantics.
This has three operational consequences:
No aggregation or masking. IP becomes useful again when combined with behavior.
Login, payment, checkout, reservation or any stateful action survives even during active/passive failover.
Errors, attempts and results occur in the same place, enabling precise decisions instead of global blocking.
Schematically:
From this plane, security stops being “block proxy yes/no” and focuses on blocking abuse while preserving legitimate users.
Multi-tenant proxies solve scale, cost and distribution. But continuity, semantics and intent still live near the application — because it is the only plane where full context exists.
If continuity and application-level context matter to your stack, you can evaluate SKUDONET Enterprise Edition with a 30-day trial.
29 January, 2026 08:23AM by Nieves Álvarez
28 January, 2026 03:02AM by xiaofei
The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.
The following XSAs do affect the security of Qubes OS:
The following XSAs do not affect the security of Qubes OS, and no user action is necessary:
Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.
26 January, 2026 10:24AM by xiaofei
Now that 2025 is over, it’s time to look back and feel proud of the path we’ve walked. Last year has been really exciting in terms of contributions to GStreamer and WebKit for the Igalia Multimedia team.
With more than 459 contributions along the year, we’ve been one of the top contributors to the GStreamer project, in areas like Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC or H.266 support.
Igalia’s contributions to the GStreamer project
In Vulkan Video we’ve worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There’s now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.
GStreamer Editing Services, the framework to build video editing applications, has gained time remapping support, which now makes it possible to include fast/slow motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer pool tuning and pipeline improvements have helped to optimize memory usage and performance, enabling the editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements in the GstValidate testing framework, which will be useful for other parts of GStreamer.
Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia’s work. This allows user applications such as the WebKitGTK web browser to leverage the hardware accelerated decoding provided by VAAPI to play H.266 video using GStreamer.
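As a rough illustration (not taken from the original post; the file path is hypothetical and element availability depends on the build), H.266 playback can be exercised with playbin, which lets decodebin pick a suitable decoder such as avdec_h266 or vvdec when present:

# Check which H.266-related elements this GStreamer build provides
gst-inspect-1.0 | grep -iE 'h266|vvdec'
# Let playbin/decodebin select an available H.266 decoder automatically
gst-launch-1.0 playbin uri=file:///path/to/sample-h266.mp4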
Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.
Igalia’s contributions to the GStreamer Rust projectIn addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many features of the web engine related to multimedia. From the 1739 contributions to the WebKit project done last year by Igalia, the Multimedia team has made 323 of them. Nearly one third of those have been related to generic multimedia playback, and the rest have been on areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system to provide adaptations for specific hardware multimedia platforms at runtime, WebCodecs or MediaRecorder.
Igalia Multimedia Team’s contributions to different areas of the WebKit project
We’re happy about what we’ve achieved along the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.
26 January, 2026 09:34AM by Enrique Ocaña González (eocanha@igalia.com)
This post is a placeholder so that links can be added to BL live config popups.
The data will be added at release time.
For years, high availability (HA) was treated as a redundancy problem: duplicate servers, replicate databases, maintain a secondary site and ensure that if something failed, there was a plan B waiting. That model worked when applications were monolithic, topologies were simple, and traffic variability was low. Today the environment looks different: applications are split into services, traffic is irregular, encryption is the norm, and infrastructure is distributed. Availability is no longer decided at the machine level, but at the operational plane.
The first relevant distinction appears when we separate binary failures from degradations. Most HA architectures are designed to detect obvious “crashes,” yet in production the meaningful incidents are rarely crashes—they are partial degradations (brownouts): the database responds, but slowly; a backend accepts connections but does not process; the Web Application Firewall (WAF) blocks legitimate traffic; intermittent timeouts create queues. For a basic health-check everything is “up”; for the user, it isn’t.
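A minimal sketch of the difference (not SKUDONET configuration; the backend host and the 0.5 s threshold are made-up values): a simple port probe keeps reporting "up" during a brownout, while a latency-aware HTTP check can flag the degradation.

# Basic reachability check: passes even when the backend is slow or misbehaving
nc -z backend.example.internal 443 && echo "port open: looks healthy"
# Latency-aware check: fail if /health errors out or takes longer than 0.5 s
t=$(curl -fsS -m 2 -o /dev/null -w '%{time_total}' https://backend.example.internal/health) \
  && awk -v t="$t" 'BEGIN { exit (t > 0.5) }' \
  && echo "healthy (${t}s)" \
  || echo "degraded or failing"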
Operational degradations in production are not homogeneous. In general, we can distinguish at least six categories:
The component that arbitrates this ambiguity is the load balancer. Not because it is the most critical part of the system, but because it is the only one observing real-time traffic and responsible for deciding when a service is “healthy,” when it is degraded, and when failover should be triggered. That decision becomes complex when factors like TLS encryption, session handling, inspection, security controls or latency decoupled from load interact. The load balancer does not merely route traffic—it determines continuity.
In real incidents, operational ambiguity surfaces like this:
There is also a persistent misconception between availability and scaling. Scaling answers the question “how much load can I absorb?” High availability answers a completely different one: “what happens when something fails?” An application can scale flawlessly and still suffer a major incident because failover triggered too late, sessions failed to survive backend changes, or the control plane took too long to propagate state.
Encrypted traffic inspection adds another layer. In many environments, TLS inspection and the Web Application Firewall sit on a different plane than the load balancer. In theory this is modular; in practice it introduces coordination. If the firewall blocks part of legitimate traffic, the load balancer sees fewer errors than the system actually produces. If the backend degrades but the firewall masks the problem upstream, there is no clear signal. Availability becomes a question of coupling between planes.
The final problem is often epistemological: who owns the truth of the incident? During an outage, observability depends on who retains context. If the balancing plane, the inspection plane, the security plane and the monitoring plane are separate tools, the post-mortem becomes archaeology: fragmented logs, incomplete metrics, sampling, misaligned timestamps, and three contradictory narratives of the same event.
For operational teams, the definition that best fits reality is this: High availability is the ability to maintain continuity under non-binary failures.
This implies:
SKUDONET Enterprise Edition is built around that premise: availability does not depend solely on having an extra node, but on coordinating, in a single operational plane, load balancing at layers 4 and 7, TLS termination and inspection, security policies, certificate management, and traffic observability. The goal is not to abstract complexity, but to place decision-making and understanding in the same context.
In environments where failover is exceptional, this coupling may go unnoticed. But in environments where degradation is intermittent and traffic is non-linear, high availability stops being a passive mechanism and becomes a process. What SKUDONET provides is not a guarantee that nothing will fail—such a guarantee does not exist—but an architecture where continuity depends less on assumptions and more on signals.
A 30-day evaluation of SKUDONET Enterprise Edition is available for teams who want to validate behavior under real workloads.
23 January, 2026 10:39AM by Nieves Álvarez
23 January, 2026 09:54AM by xiaofei
22 January, 2026 08:27AM by Nieves Álvarez
Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.
And today we're gonna generate YAML from ERB, what could possibly go wrong?!
Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.
The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.
Enter cloud-init schema, or so I thought.
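The intended invocation, as a sketch (cloud-config.yaml stands in for whatever file the ERB template rendered):

# Ask cloud-init to validate the rendered file against its cloud-config schema
cloud-init schema --config-file cloud-config.yaml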
Turns out running cloud-init schema is rather broken without root privileges,
as it tries to load a ton of information from the running system.
This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself.
I've not found a way to disable that behavior.
Luckily, I know Python.
Enter evgeni-knows-better-and-can-write-python:
#!/usr/bin/env python3

import sys

from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)
The canonical1 version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.
The hardest part was to understand the validate_cloudconfig_file API,
as it will sometimes raise a SchemaValidationError,
sometimes a RuntimeError and sometimes just return False.
No idea why.
But the above just turns it into a couple of printed lines and a non-zero exit code,
unless of course there are no problems, then you get peaceful silence.
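Usage is then just a matter of pointing it at the rendered file; a sketch, assuming the script above is saved as validate-cloudconfig.py (a name made up for this example):

# Exit code 0 and no output means the cloud-config passed schema validation
python3 validate-cloudconfig.py rendered-cloud-config.yaml && echo "cloud-config OK"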
"canonical", not "Canonical" ↩
21 January, 2026 08:05AM by xiaofei
20 January, 2026 07:53AM by Joseph Lee
19 January, 2026 05:41AM by xiaofei
You can now save your language and keyboard layout from the Welcome Screen to the USB stick. These settings will be applied automatically when restarting Tails.
If you turn on this option, your language and keyboard layout are saved unencrypted on the USB stick to help you type the passphrase of your Persistent Storage more easily.
Update Tor Browser to 15.0.4.
Update Thunderbird to 140.6.0.
Update the Linux kernel to 6.12.63.
Drop support for BitTorrent download.
With the ongoing transition from BitTorrent v1 to v2, the BitTorrent v1 files that we provided until now can become a security concern. We don't think that updating to BitTorrent v2 is worth the extra migration and maintenance cost for our team.
Direct download from one of our mirrors is usually faster.
Fix opening .gpg encrypted files in Kleopatra when double-clicking or selecting Open with Kleopatra from the shortcut menu. (#21281)
Fix the desktop crashing when unlocking VeraCrypt volumes with a wrong password. (#21286)
Use 24-hour time format consistently in the top navigation bar and the lock screen. (#21310)
For more details, read our changelog.
Automatic upgrades are available from Tails 7.0 or later to 7.4.
If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Follow our installation instructions:
The Persistent Storage on the USB stick will be lost if you install instead of upgrading.
If you don't need installation or upgrade instructions, you can download Tails 7.4 directly:
Talking about cloud today is no longer about a technological trend, but about a central piece of the business. More and more companies are moving their infrastructure to cloud providers under the promise of less hardware, less maintenance, fewer licenses and less time spent on activities that do not generate value.
Much of that promise has been fulfilled. Cloud has democratized capabilities that only large organizations could access a few years ago. Launching a service, increasing capacity or deploying a new region is now easier, faster and more accessible.
However, as often happens with technology, the story changes when we zoom into operations. Cloud simplifies infrastructure, but it does not always simplify how that infrastructure is operated. And that nuance affects not only technical teams, but also the business itself.
The first point of friction does not appear in compute or storage, but in the services that accompany the infrastructure. This includes security, load balancing, TLS certificates, application firewalls, monitoring and observability.
In the cloud provider’s catalog, the technology is there, but it is sold as separate components. Security on one side, certificates on another, observability on another, and advanced capabilities billed as add-ons. The customer does not go without service, but is left with a recurring question: what exactly must be purchased to remain protected and operate reliably?
A less visible aspect also emerges: security is billed per event, per inspection or per volume of traffic. What used to be a hardware expense becomes a bill based on requests, analysis and certificates. Cloud solved hardware, but externalized the operational complexity of security.
Metrics and logs exist, but they are often fragmented, sampled and weakly correlated. Understanding what happened during an incident may require navigating multiple services and data models. Cloud promises security, but it rarely promises explanations.
And at its core this is not a technical problem, but a model problem. Cloud security is commercialized as a product but consumed as a service. And when there is a mismatch between how something is purchased and how it is used, friction eventually appears.
This is the context in which SkudoCloud emerges — not to replace the cloud provider or compete as infrastructure, but to resolve the operational coherence between load balancing, security and visibility.
SkudoCloud is a SaaS platform that enables companies to deploy advanced load balancing and application protection without assembling separate modules, tools or services. From a single interface, organizations can:
The most evident difference appears in security. In the modular cloud model, the customer must decide what to purchase, which rules to enable, how to correlate logs and how to keep everything updated. In a managed model like SkudoCloud, certificates, WAF, TLS inspection and load balancing behave as one coherent system.
This has direct consequences for the business:
Instead of acquiring security, companies acquire operability. Instead of assembling components, they obtain an outcome. That is the difference of a managed approach.
Cloud adoption is already a given. The real question now is how to operate it sustainably. Fragmentation was a natural side effect of the migration phase. Unification will likely be the central theme of the operational phase.
Cloud simplified servers. Now it is time to simplify operations.
14 January, 2026 10:25AM by Nieves Álvarez
14 January, 2026 07:59AM by xiaofei
"Fit and finish" appears in many industries. For much of the software industry, it refers to features that complete a fit for a target audience, ensuring that audience can use the product for their needs. At a frame shop, it means literally fitting the mounted artwork into a frame, then finishing the back of the frame.
At Purism, fit takes on another meaning - making apps fit on screens the size of the Librem 5.
The post PureOS Crimson Development Report: December 2025 appeared first on Purism.
13 January, 2026 10:11PM by Purism
Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.
RC3 is now available, so please test that one - links here: https://forums.bunsenlabs.org/viewtopic.php?id=9682
---
As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon RC2 candidate iso file available for download here:
https://sourceforge.net/projects/bunsen … hybrid.iso
sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d
If you have a free machine or VM to install it on, please give it some testing!
And please post any bugs here:
https://forums.bunsenlabs.org/viewtopic.php?id=9656
That thread is now closed because having multiple bug reports mixed up together was too confusing. Please post any new bugs related to the Carbon RC2 iso in individual threads in the Bug Reports section, adding a tag [Carbon RC2].
When it seems as if there aren't any bugs left to squash, we can do an Official Release.
There is a new desktop available for Sparkers: Labwc, as well as a Sparky 2026.01~dev Labwc ISO image. What is Labwc?
Installation on Sparky testing (9):
– packages installation only (requires your own set up); or
– with Sparky settings: via APTus (>= 20260108) -> Desktops -> Labwc; or
– with Sparky settings: via the Sparky testing (9) MinimalGUI/ISO image.
Then reboot to take effect…
11 January, 2026 11:42AM by pavroo
My 2025 reading year, averaging a bit more than one book per week, was comparable to 2024. Here is my best-of of the books I finished in 2025 (the ones I found particularly worth reading or would like to recommend; the order matches the photo and does not imply any ranking):
09 January, 2026 05:43PM by t.lamprecht (invalid@example.com)
08 January, 2026 01:05PM by Joseph Lee
07 January, 2026 10:15AM by xiaofei
We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.
This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.
06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)
06 January, 2026 09:26AM by xiaofei
In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079
It turned out that while xfconf-query doesn't output the type of settings entries, it needs to know the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough for backing up an xfce app which stores its settings in the xfconf database - which most of them do these days. We need to store the type too. That data is luckily stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/, so to back it up, all we need to do is save that file.
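Backing up xfce4-terminal's settings, types included, then just means saving its per-channel file (the destination directory below is only an example):

# The per-channel XML stores both values and their types, unlike 'xfconf-query -lv'
mkdir -p ~/xfconf-backup
cp ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-terminal.xml ~/xfconf-backup/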
In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/in. There's a better way - if you know the missing type then you can run xfconf-query commands to restore the settings.
So, here is a script called xml2xfconf. Passed an xfconf xml file - eg a backed-up copy of one of those in xfce-perchannel-xml/ - it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately.
Example usage:
restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"
Here's what got written into $restore:
xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000
xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf and with luck the bug which @Dave75 found will go away.
And then the Carbon release can get rolling again.
It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings.
Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks has increased, driven by automation, accessible tooling, and cloud-specific attack vectors.
Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.
As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.
Traditional evaluations focused heavily on rule coverage or whether a solution “covers OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.
A modern evaluation must answer practical, operational questions:
In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.
Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:
In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.
A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.
Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.
A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.
Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not just a usability issue—especially in API-driven systems, where payload structure, schemas, and request volume amplify their cost.
At scale, even a low False Positive Rate (FPR) can result in:
This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.
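To make the scale concrete, a quick back-of-the-envelope calculation (the traffic volume and false positive rate are illustrative assumptions, not measurements):

# 10 million legitimate requests per day at a 0.1% false positive rate
awk 'BEGIN { legit = 10000000; fpr = 0.001; printf "%d legitimate requests blocked per day\n", legit * fpr }'
# prints: 10000 legitimate requests blocked per day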
A realistic PoC should include scenarios like:
In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.
SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.
That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.
To support that evaluation, we have documented the full methodology in our technical guide:
Download the full guide:
02 January, 2026 11:03AM by Nieves Álvarez
The 12th monthly Sparky project and donate report of 2025:
– Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS
– Added to “dev” repos: COSMIC desktop
– Sparky 2025.12 & 2025.12 Special Editions released
Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.
01 January, 2026 08:08PM by pavroo
30 December, 2025 02:13AM by xiaofei
Modern application delivery architectures are built with the right goals in mind. Load balancers distribute traffic, Web Application Firewalls enforce security policies, TLS protects data in transit, and monitoring systems promise observability across the stack. On paper, everything seems covered.
In real production environments, however, many of these architectures operate with critical blind spots. Especially when security components start making decisions that engineers cannot fully see, trace, or explain. This is rarely caused by a lack of tools. More often, it is the result of how security is embedded into the delivery path.
As security becomes more deeply integrated into application delivery, visibility does not automatically follow.
In most production environments, security is no longer a separate layer. WAFs sit directly in the traffic path, inspecting requests, evaluating rules, applying reputation checks and deciding — in real time — whether traffic is allowed or blocked. TLS inspection happens inline, and policies are often updated automatically.
The problem is not that these decisions exist. The problem is that, very often, they cannot be clearly explained after the fact.
In many deployments, teams quickly run into the same limitations:
The result is a paradox that experienced teams recognize immediately: security coverage increases, while operational visibility decreases.
These blind spots rarely appear during normal operation. They tend to surface under pressure: traffic spikes, false positives, performance degradation or partial outages. When they do, troubleshooting becomes significantly more complex, because the information engineers need is often incomplete or fragmented.
TLS encryption is essential, but it fundamentally changes how visibility works. In many application delivery stacks, traffic is decrypted at some point, inspected, and then re-encrypted. Security decisions are made, but the path between request, rule and outcome is not always traceable.
When something breaks, engineers are often left with little more than a generic message: “Request blocked by WAF.”
What is missing is the ability to correlate:
Without that correlation, root cause analysis turns into guesswork rather than engineering.
Many WAF platforms expose protection logic through simplified models such as risk scores, rule categories or predefined profiles. While these abstractions make dashboards easier to read, they remove critical detail from day-to-day operations.
When rule logic cannot be inspected directly:
Over time, this erodes trust in automated protection. Teams stop understanding why something happens and start compensating by weakening policies instead of fixing the underlying issue.
Security controls do more than allow or block requests. They influence how connections are handled, how retries behave, how sessions persist, and how backend health is perceived by the delivery layer.
When these effects are not visible, delivery problems are often misdiagnosed:
Engineers end up debugging load balancing logic or application behaviour, while the real cause sits silently inside the security layer.
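A small triage script along these lines can help separate the two layers, assuming the delivery log records, for each entry, the upstream that answered and whether the security layer generated the response (the log name and fields are hypothetical):

# Count responses per source so that a spike of security-layer blocks is not
# misread as backend failure by whoever is debugging the delivery path.
import collections
import json

by_source = collections.Counter()
with open("delivery.log.json") as fh:  # assumed JSON-lines delivery log
    for line in fh:
        entry = json.loads(line)
        if entry.get("waf_action") == "block":
            source = "security-layer"
        else:
            source = entry.get("upstream", "unknown")
        by_source[(source, entry["status"])] += 1

for (source, status), count in sorted(by_source.items()):
    print(f"{source:>16} {status}: {count}")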
Logs are often treated as a substitute for visibility. In practice, they are frequently:
A log entry that explains what happened, but not why, is not observability; it is a post-mortem artifact. In critical environments, teams need actionable insight while an incident is unfolding, not hours later.
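As a sketch of what insight during an incident might mean in practice, the loop below follows a decision log and warns when the share of blocked requests in a sliding window crosses a threshold. The file name, window size and threshold are illustrative assumptions, not recommended values.

# Follow a WAF decision log (like `tail -f`) and warn when the block rate
# over the last minute exceeds a threshold, instead of discovering it later.
import collections
import json
import time

WINDOW_SECONDS = 60
THRESHOLD = 0.2                # warn if more than 20% of recent requests are blocked
recent = collections.deque()   # (timestamp, was_blocked)

with open("waf_decisions.json") as fh:  # assumed JSON-lines decision log
    fh.seek(0, 2)                       # start at the end of the file
    while True:
        line = fh.readline()
        if not line:
            time.sleep(0.5)
            continue
        entry = json.loads(line)
        now = time.time()
        recent.append((now, entry.get("action") == "block"))
        while recent and now - recent[0][0] > WINDOW_SECONDS:
            recent.popleft()
        blocked = sum(1 for _, was_blocked in recent if was_blocked)
        if recent and blocked / len(recent) > THRESHOLD:
            print(f"WARNING: {blocked}/{len(recent)} requests blocked in the last minute")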
A WAF integrated into the application delivery path should not act as an opaque enforcement layer. Instead, it should provide visibility at each critical stage of the decision process.
In practical terms, this means enabling teams to:
Trace each security decision end to end, from the incoming request to the final action applied.
Without these capabilities, security controls may protect applications, but they also introduce operational blind spots that slow down troubleshooting and increase risk.
SKUDONET Enterprise Edition is designed around a simple principle: security must protect traffic without breaking visibility.
Instead of treating security as a separate black box, SKUDONET integrates WAF and traffic management into a single, observable application delivery platform. This approach ensures that security decisions remain transparent, traceable and actionable for engineers working in real production conditions.
Key aspects of this design include:
By removing opacity from security enforcement, SKUDONET helps teams retain control over both protection and performance—especially in high-traffic or business-critical environments where visibility is essential.
A 30-day, fully functional evaluation of SKUDONET Enterprise Edition is available for teams who want to validate this level of visibility and control under real workloads.
29 December, 2025 11:31AM by Nieves Álvarez
29 December, 2025 02:17AM by xiaofei
There is a new desktop available for Sparkers: COSMIC. What is COSMIC? The COSMIC desktop is available via the Sparky ‘dev’ repositories, so that repo has to be enabled to install COSMIC on top of Sparky 8 stable or testing (9). It uses the Wayland session by default. The Sparky meta package installs the ‘xwayland’ package by default, but some applications cannot work or launch. That’s why I…
26 December, 2025 04:21PM by pavroo
25 December, 2025 02:04AM by xiaofei
BLOB, the utility that lets people try different desktop theming sets (eg go back to the Boron look, or even Crunchbang) has been upgraded to 13.1-1 on the Carbon repository. This brings a lot of improvements, like support for xfce4-panel profiles, xfconf settings for xfce4-terminal, flexibility over wallpaper settings with a switch to feh by default, and more.
See the changelog:
https://github.com/BunsenLabs/bunsen-bl … /changelog
or all the commits:
https://github.com/BunsenLabs/bunsen-bl … ts/carbon/
Right now "BLOB Themes Manager" is commented out of jgmenu's prepend.csv, but if you're on a Carbon system you can install the package 'bunsen-blob' and uncomment that menu line to use it. Please check it out.
Soon, an upgraded bunsen-configs will include it in the menu by default, and it will be added to the meta - and iso - package lists. A Release Candidate Carbon iso is not far away...
23 December, 2025 02:03AM by xiaofei
Hello everyone,
this short post is to let anyone interested know that our presentation at SFSCON, held in Bolzano/Bozen at the NOI Techpark, has been published online. SFSCON stands for South Tyrol Free Software Conference and is one of Europe's most established annual conferences on Free Software.
This year we at Arc-Team decided to participate with a talk that summarized our approximately 20 years of experience in applying the Open Source philosophy to archaeology (both in the software and hardware fields).
The presentation was titled "Arc-Team and the OpArc Project" and can be viewed both on the conference's official website (where you can also download a PDF version) and on the conference's YouTube channel.
I hope the presentation can be interesting for someone. Have a nice day!
21 December, 2025 02:47PM by Luca Bezzi (noreply@blogger.com)
We’re pleased to announce the stable release of Qubes OS 4.3.0! This minor release includes a host of new features, improvements, and bug fixes. The ISO and associated verification files are available on the downloads page.
These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.
If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.3.0 ISO and follow our installation guide.
If you’re currently using Qubes 4.2, learn how to upgrade to Qubes 4.3.
If you’re currently using a Qubes 4.3 release candidate (RC), update normally (which includes upgrading any EOL templates and standalones you might have) in order to make your system effectively equivalent to the stable Qubes 4.3.0 release. No reinstallation or other special action is required.
In all cases, we strongly recommend making a full backup beforehand.
Templates restored in 4.3.0 from a pre-4.3 backup may continue to target their original Qubes OS release repos (#8701). After restoring such templates in 4.3.0, you must enter the following additional commands in a dom0 terminal:
sudo qubes-dom0-update -y qubes-dist-upgrade
sudo qubes-dist-upgrade --releasever=4.3 --template-standalone-upgrade -y
This will automatically choose the templates that need to be updated. The templates will be shut down during this process.
Fresh templates on a clean 4.3.0 installation are not affected. Users who perform an in-place upgrade from 4.2 to 4.3 (instead of restoring templates from a backup) are also not affected, since the in-place upgrade process already includes the above fix in stage 4. For more information, see issue #8701.
View the full list of known bugs affecting Qubes 4.3 in our issue tracker.
In accordance with our release support policy, Qubes 4.2 will remain supported for six months after the release of Qubes 4.3, until 2026-06-21. After that, Qubes 4.2 will no longer receive security updates or bug fixes.
Whonix templates are created and supported by our partner, the Whonix Project. The Whonix Project has set its own support policy for Whonix templates in Qubes. For more information, see the Whonix Support Schedule.
This release would not be possible without generous support from our partners and donors, as well as contributions from our active community members, especially bug reports from our testers. We are eternally grateful to our excellent community for making the Qubes OS Project a great example of open-source collaboration.
There are new iso images of Sparky 2025.12 Special Editions out there: GameOver, Multimedia and Rescue. This release is based on Debian testing “Forky”. The December update of Sparky Special Edition iso images features Linux kernel 6.17, updated packages from Debian and Sparky testing repos as of December 20, 2025, and most changes introduced in the 2025.12 release. The Linux kernels 6.18.2, 6.
20 December, 2025 10:10PM by pavroo
In her book Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of Silicon Valley, published in 2000, Borsook, who is based in Palo Alto, California, and has previously written for Wired and a host of other industry publications, took aim at what she saw as disturbing trends in the tech industry.
The post A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right appeared first on Purism.
20 December, 2025 05:10AM by Purism
19 December, 2025 11:54AM by Joseph Lee
Hello, Community!
Customers and holders of contributor subscriptions can now download VyOS 1.4.4 release images and the corresponding source tarball. This release adds TLS support for syslog, support for the AWS gateway load balancer tunnel handler (on AWS only), an option to match BGP prefix origin validation extended communities in route maps, and more. It also fixes over fifty bugs. Additionally, there is now proper validation to prevent manually assigned multicast addresses, which may break some old malformed configs, so pay attention to it. Last but not least, there is a deprecation warning for SSH DSA keys, which will stop working in VyOS releases after 1.5 due to changes in OpenSSH, so make sure to update your user accounts to keys that use a more secure algorithm while you still have time.
19 December, 2025 09:22AM by Daniil Baturin (daniil@sentrium.io)
2025 has been a defining year for SKUDONET — not because of a single announcement or isolated launch, but due to sustained progress across product development, security reinforcement and strategic expansion.
Throughout the year, our focus has remained consistent: strengthening the core platform, improving the operational experience for administrators, and ensuring that security and reliability evolve in line with real-world infrastructure demands.
This approach has translated into continuous, incremental improvements rather than disruptive changes, allowing teams to adopt new capabilities without compromising stability in production environments.
Over the course of eight product releases, SKUDONET continued to mature as an application delivery and security platform designed for critical environments.
Across these updates, we introduced:
Rather than isolated enhancements, these updates reflect a continuous effort to simplify daily operations while reinforcing security and performance at scale.
Key areas of evolution included a renewed Web GUI, designed to be faster, more consistent and easier to navigate in complex environments, as well as meaningful progress in RBAC, enabling more precise and adaptable access control models.
Certificate management also saw significant improvements, with extended Let’s Encrypt automation, broader DNS provider support and fully automated renewal workflows. Alongside this, we reduced execution times for critical operations such as farm start/stop actions and API metric retrieval.
During the year, 31 CVEs were resolved, steadily hardening the platform and reducing its attack surface. Beyond vulnerability remediation, SKUDONET focused on reinforcing internal consistency and predictability under load.
Key improvements were made across:
Several updates also introduced additional hardening measures, including:
Together, these enhancements contribute to a platform that behaves more predictably under pressure and is easier to audit and troubleshoot in production.
Reducing operational overhead for administrators was another consistent theme throughout 2025.
Several improvements were introduced to simplify day-to-day operations and reduce manual intervention, including:
Together, these enhancements allow teams to spend less time on routine maintenance tasks and more time on capacity planning, optimization and higher-level architectural decisions.
One of the most significant milestones of the year was the launch of SkudoCloud, SKUDONET’s fully managed SaaS platform for application delivery and security.
SkudoCloud introduces a new operational model in which teams can deploy secure application delivery infrastructure in minutes, without managing the underlying system lifecycle. From the first deployment, users benefit from:
This launch represents a strategic expansion of the SKUDONET ecosystem, complementing on-premise and self-managed deployments with a cloud-native option designed for teams that prioritize simplicity, speed and operational focus.
Alongside product evolution, SKUDONET continued to expand its international presence.
During 2025, seven new partners joined our ecosystem across Europe, Asia and Latin America, strengthening our ability to support customers globally while maintaining close technical collaboration at a regional level:
Virtual Cable (Spain)
Fortiva (Turkey)
SINUX (Indonesia)
Secra Solutions (Spain)
Bluella (India)
Global OMC TECH Inc. (Taiwan)
BCloud Services SAC (Peru)
This growth reflects increasing demand for open, transparent and flexible application delivery solutions across diverse markets.
2026 will begin with an important milestone: the launch of the SkudoManager, the SKUDONET Central Console.
This unified interface will enable teams to manage multiple nodes, services and products from a single control plane, providing global infrastructure visibility, centralized user and policy management, and integrated monitoring of farms, certificates, security and performance.
Alongside this, we will continue expanding SkudoCloud and reinforcing the Enterprise Edition’s core architecture, staying aligned with our principles of transparency, performance and security.
The progress achieved in 2025 has been possible thanks to our customers, partners and community. We look forward to continuing this journey together in 2026, building an application delivery and security platform that evolves with real operational needs.
19 December, 2025 09:13AM by Nieves Álvarez
19 December, 2025 03:27AM by xiaofei