February 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

BunsenLabs Carbon Release Candidate 3 iso available

BunsenLabs Carbon Release Candidate 3 iso is available here: https://sourceforge.net/projects/bunsen … hybrid.iso https://sourceforge.net/projects/bunsen … iso.sha256

sha256 sum: 47de769531fc0c99d9e0fa4b095ff280919684e5baae29fe264b9970e962a45f
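To verify a downloaded image against this sum, you can feed it to sha256sum (the iso filename below is a placeholder; substitute the name of the file you actually downloaded):

```shell
# Verify the download against the published checksum.
# The filename is illustrative; use the real iso name.
echo "47de769531fc0c99d9e0fa4b095ff280919684e5baae29fe264b9970e962a45f  bunsen-carbon-rc3.hybrid.iso" | sha256sum -c -
```

sha256sum prints `OK` when the file matches and exits non-zero otherwise.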

Unless unexpected bugs come up, this should be the same as the official release of BunsenLabs Carbon.

If you do find a new bug related to the Carbon RC3 iso, please post it in the Bug Reports section, adding a tag [Carbon RC3].

03 February, 2026 12:00AM

February 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2026/01

The first monthly Sparky project and donation report of 2026:

  • Linux kernel updated up to 6.18.8, 6.12.68-LTS, 6.6.122-LTS
  • Added a new desktop to Sparky testing (9): Labwc
  • Sparky 2026.01~dev2 Labwc released
  • Changed the ‘firefox-sparky’ package name to ‘firefox-latest’

Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive.

Source

01 February, 2026 07:26PM by pavroo

January 31, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: apt, SHA-1 keys + 2026-02-01

You might have seen “Policy will reject signature within a year” warnings in apt(-get) update runs like this:

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details

root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.    
Warning:  http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit:  http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
   Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
              No binding signature at time 2024-06-19T10:33:47Z
     because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
     because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.

If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you’re reading this because you’re already affected).

Let’s simulate the future:

root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11" 
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease                                 
Err:1 http://foo.example.org/debian demo InRelease
  Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100

Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVATE_KEY_FILE > $PRIVATE_KEY_FILE-fixed).

If you don’t have access to the according private key (e.g. when using an upstream repository that has been ignoring this issue), you’re out of luck for a proper fix.
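If you are unsure whether a keyring you rely on still carries SHA-1 self-signatures, one way to check (a sketch; the keyring path is a placeholder) is to dump its OpenPGP packets with GnuPG and inspect the digest algorithm of each signature packet:

```shell
# Dump the OpenPGP packets of a keyring (path is a placeholder) and show
# the digest algorithm of each signature packet.
# "digest algo 2" means SHA-1; "digest algo 8" means SHA-256.
gpg --list-packets /usr/share/keyrings/example-archive-keyring.gpg | grep 'digest algo'
```

Any binding signature reported with `digest algo 2` will run into the policy rejection described above.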

But there’s a workaround for the apt situation (see the related apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):

root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01    # Extend the expiry for legacy repositories
sha224 = 2026-02-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Adjust this according to your needs:

root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/

root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01    # Extend the expiry for legacy repositories
sha224 = 2026-09-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Then we’re back into the original situation, being a warning instead of an error:

root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]

Please note that this is a workaround, and not a proper solution.

31 January, 2026 01:57PM

January 30, 2026

hackergotchi for Deepin

Deepin

Urgent Security Update | OpenSSL Multiple Vulnerabilities Fixed, Please Upgrade ASAP!

🔔 Dear deepin users and community members,

Recently, OpenSSL released multiple security advisories covering 13 vulnerabilities, including 2 high/medium-risk ones. To ensure the security of your system, we strongly recommend that all users upgrade the relevant packages as soon as possible.

I. Vulnerability Information

The CVE identifiers involved in this fix are as follows: CVE-2025-9230, CVE-2025-9231, CVE-2025-9232, CVE-2025-15467, CVE-2025-15468, CVE-2025-66199, CVE-2025-68160, CVE-2025-69418, CVE-2025-69419, CVE-2025-69420, CVE-2025-69421, CVE-2026-22795, CVE-2026-22796

Key high/medium-risk vulnerability fix: CVE-2025-15467 (High), a stack buffer overflow in CMS AuthEnvelopedData parsing. This vulnerability could lead to remote code execution (RCE) under specific conditions, so immediate updating is recommended.

30 January, 2026 10:05AM by xiaofei

hackergotchi for VyOS

VyOS

VyOS Project January 2026 Update

Hello, Community! The belated development update for December 2025 and January 2026 is finally here.

We are getting closer to the 1.5 release but there's also quite a bit of work towards the future. In particular, there's good progress towards replacing the old configuration command completion mechanism with a VyConf-based equivalent, which will allow us to get rid of legacy command definition files eventually.

More immediate improvements include certificate-based authentication for OpenConnect, new operational commands for VPP, support for configuring watchdog timers, and multiple bug fixes.

30 January, 2026 09:00AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Tails

Tails

Tails 7.4.1

This release is an emergency release to fix critical security vulnerabilities in OpenSSL, a network encryption library used by Tor.

Changes and updates

Included software

  • Update the OpenSSL library to 3.5.4, which fixes DSA-6113-1, a set of vulnerabilities that could be critical. Using these vulnerabilities, a malicious Tor relay might be able to deanonymize a Tails user.

    We are not aware of these vulnerabilities being exploited in practice.

  • Update the Tor client to 0.4.8.22.

  • Update Thunderbird to 140.7.0.

Fixed problems

  • Fix Gmail authentication in Thunderbird. (#21384)

  • Add a spinner when opening the Wi-Fi settings from the Tor Connection assistant. (#18594)

For more details, read our changelog.

Known issues

The homepage of Tor Browser incorrectly says you are still using Tails 7.4, even after you have upgraded to 7.4.1. It also links to the release notes for that older version.

If in doubt, to verify that you are using Tails 7.4.1, choose Apps ▸ Tails ▸ About Tails.

Get Tails 7.4.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4.1 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4.1 directly:

30 January, 2026 12:00AM

January 29, 2026

hackergotchi for ZEVENET

ZEVENET

Why multi-tenant proxies make security decisions harder for applications

In recent weeks, several incidents surfaced where content providers blocked traffic coming from multi-tenant proxies to stop automated attacks or illegal rebroadcasting. The countermeasure reduced the offensive surface, but also denied access to legitimate users travelling through the same channel. It illustrates a common issue: upstream security — security applied at proxies, CDNs or scrubbing centers before traffic reaches the application — does not always retain the context required to make good decisions.

The relevant point is not the individual incident, but what it exposes: when security runs upstream and multi-tenant, the backend loses semantics, session state and part of the operational timeline. This alters how attacks are detected, how they are mitigated, and how user continuity is preserved.

The issue is not that these proxies “fail”, but that their efficiency relies on sharing channel, capacity and enforcement across thousands of customers. The model optimizes cost and scale, but erodes signals that were historically essential for security and operations: origin, semantics, persistence and temporal correlation. Once those signals disappear, security stops being a purely defensive problem and becomes an operational decision problem.

Shared-proxy architectures and their operational trade-offs

Multi-tenant proxies — Cloudflare being the most visible reference — terminate TLS, filter bots, apply WAF rules, absorb DDoS and optimize latency before forwarding requests to the backend. Operationally, the model offers:

  • shared scale
  • economic amortization
  • simplified management

The problem emerges in the least visible layer: traffic identity. When thousands of customers share the same defensive channel, the IP address no longer represents a user, it represents the proxy. For the backend, origin stops being an identity signal and becomes a collective. Attackers, legitimate users and corporate SSO traffic exit through the same door.

Traditional web security largely assumed origin was enough to make decisions. In a multi-tenant model, that signal degrades and the system no longer separates legitimate from abusive behavior with the same clarity.

At that point the decision collapses to two choices:

  • block the channel → stops the attack but penalizes legitimate users
  • allow the channel → preserves continuity but lets part of the attack through

The difficulty is not having two options, but having to choose with incomplete information. That is where the multi-tenant model shows its real cost: it gains efficiency but loses context.

How upstream filtering fragments application context

Context loss is not just about hiding origin or masking IP. In production it appears across multiple planes, and — importantly — not in the same place nor at the same time. This fragments the operational timeline, weakens signals and complicates defensive decision-making.

TLS plane

When TLS negotiation and establishment happen before reaching the application, the backend stops seeing signals that do not indicate attack but do indicate degradation of legitimate clients, such as:

  • renegotiation attempts
  • handshake failures
  • client-side timeouts
  • cipher downgrades
  • inconsistent SNI

During brownouts or incident response, these signals matter because they describe the real client, not the attacker. In a multi-tenant proxy, that degradation disappears and the application only sees “apparently normal” HTTP. For continuity and SLO compliance, that information is lost in the wrong plane.

WAF plane

When filtering occurs before the application — at a proxy or intermediary — another effect appears: the backend sees the symptom but not the cause.
The real circuit is:

Request → WAF/Proxy → Block → END

but for the backend it becomes simply: less traffic

Without correlation between planes, root-cause analysis becomes unreliable. A drop in requests may look like failure, user abandonment or load pressure when it is in fact defensive blocking.

Session plane

In modern architectures, user state does not live in the connection but in the session: identity, role, flow position and transactional continuity. When session lives in a proxy or intermediary layer, the backend loses persistence and affinity. In applications driven by login, payment or transactional actions, this is critical.

The symptoms do not resemble an attack; they resemble broken UX:

  • unexpected logouts
  • interrupted payments
  • inconsistent login flows
  • failover correct from infrastructure perspective but wrong from user perspective

This is the typical case where infrastructure “works” but the user churns because the flow cannot complete.

Observability plane

The quietest plane concerns who sees what and when. If logs, metrics and traces stay at the proxy or upstream service, the downstream side — the one closer to application and backend — becomes partial or blind.

Without temporal continuity across planes, the following increase:

  • time-to-detect
  • time-to-mitigate
  • internal noise
  • post-mortem cost

And, more importantly, real-time defensive decisions degrade — precisely where continuity matters.

From origin-based filtering to behavior-based decisions

In recent years, defensive analysis has shifted toward behavior. Where the client comes from matters less than what the client is trying to do. Regular timings, repeated attempts, invalid sequences, actions that violate flow logic, or discrepancies between what the client requests and what the application expects are more stable signals than an aggregated IP.

In short:

Question                   Signal                                Defensive value
Where does it come from?   IP / ASN / reputation (traditional)   Low (ambiguous in multi-tenant)
What is it trying to do?   Behavior / semantics (relevant)       High (context + intent)

Interpreting intent requires three planes that upstream proxies lose by design:

  • session (who and where in the flow)
  • semantics (what action is being attempted)
  • timeline (in what order things occur)

Without those planes, defensive decisions simplify. With them, they can be made precise.

The application-side plane where context actually exists

If context disappears upstream, the question is not “remove the proxy”, but locating where the information lives that distinguishes abuse from legitimate use. That information only exists where three things converge:

  • what the user does
  • what the application expects
  • what the system allows

That point is usually the application or the component immediately before it (typically an ADC or integrated WAF), where session, semantics, protocol, results and transactional continuity coexist.

A practical example:

login() → login_failed() → login_failed() → login_failed()

vs:

login() → 2FA() → checkout() → pay()

For the upstream proxy, both are valid HTTP. For the application, they are different intentions: abuse vs legitimate flow.
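The distinction can be sketched in a few lines. The event names and thresholds below are hypothetical illustrations, not any product's API: a classifier that looks at what a session does rather than where it comes from.

```python
from collections import Counter

def classify(session_events, fail_threshold=3):
    """Classify a session's intent from its ordered event names.

    Event names ("login_failed", "2fa", ...) are illustrative placeholders.
    """
    counts = Counter(session_events)
    # Repeated failed logins with no progression through 2FA: likely abuse.
    if counts["login_failed"] >= fail_threshold and counts["2fa"] == 0:
        return "abuse"
    # A flow that reaches expected transactional steps: legitimate use.
    if counts["checkout"] or counts["pay"]:
        return "legitimate"
    return "undetermined"

print(classify(["login", "login_failed", "login_failed", "login_failed"]))  # abuse
print(classify(["login", "2fa", "checkout", "pay"]))                        # legitimate
```

Both sessions could arrive from the same multi-tenant proxy IP; only the session-level view separates them.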

What matters here is not “blocking more”, but blocking with context — which in operations becomes the difference between:

  • blocking the channel
  • blocking the behavior

and, in service terms, between losing legitimate users or preserving continuity.

Where SKUDONET fits

SKUDONET operates in that plane closer to the application, without the constraints of the multi-tenant model. The approach is mono-tenant and unified: TLS, session, WAF, load-balancing and observability coexist in the same plane without fragmenting across layers or externalizing identity and semantics.

This has three operational consequences:

1. Origin retains meaning

No aggregation or masking. IP becomes useful again when combined with behavior.

2. Transactional flows maintain continuity

Login, payment, checkout, reservation or any stateful action survives even during active/passive failover.

3. Timeline and semantics correlate

Errors, attempts and results occur in the same place, enabling precise decisions instead of global blocking.

Schematically:

Plane           Upstream multi-tenant   SKUDONET
Identity        Aggregated              Individual
Session         External                Local
Semantics       Partial                 Complete
Observability   Fragmented              Correlated
Defense         Binary                  Contextual
Continuity      Fragile                 Transactional

From this plane, security stops being “block proxy yes/no” and focuses on blocking abuse while preserving legitimate users.

Conclusion

Multi-tenant proxies solve scale, cost and distribution. But continuity, semantics and intent still live near the application — because it is the only plane where full context exists.

If continuity and application-level context matter to your stack, you can evaluate SKUDONET Enterprise Edition with a 30-day trial.

29 January, 2026 08:23AM by Nieves Álvarez

January 28, 2026


January 27, 2026

hackergotchi for Qubes

Qubes

XSAs released on 2026-01-27

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-477
    • This XSA affects only HVMs with shadow paging and tracing enabled. In Qubes OS, shadow paging and tracing are disabled at build time.
  • XSA-478
    • This XSA affects only XAPI, which is an alternative toolstack. Qubes OS uses libxl instead of XAPI.
  • XSA-479
    • This XSA affects only in-VM isolation, which Qubes OS does not rely on for security. We will still provide the fix for this issue at a later date, but it will not be accompanied by a Qubes security bulletin (QSB).

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

27 January, 2026 12:00AM

January 26, 2026


hackergotchi for Maemo developers

Maemo developers

Igalia Multimedia contributions in 2025

Now that 2025 is over, it’s time to look back and feel proud of the path we’ve walked. Last year has been really exciting in terms of contributions to GStreamer and WebKit for the Igalia Multimedia team.

With more than 459 contributions along the year, we’ve been one of the top contributors to the GStreamer project, in areas like Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC or H.266 support.

[Pie chart of Igalia's contributions to different areas of the GStreamer project: other (30%), vulkan (24%), validate (7%), va (6%), ges (4%), webrtc (3%), h266parse (3%), python (3%), dots-viewer (3%), tests (2%), docs (2%), devtools (2%), webrtcbin (1%), tracers (1%), qtdemux (1%), gst (1%), ci (1%), y4menc (1%), videorate (1%), gl (1%), alsa (1%)]

In Vulkan Video we’ve worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There’s now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.

GStreamer Editing Services, the framework to build video editing applications, has gained time remapping support, which allows including fast/slow-motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer-pool tuning and pipeline improvements have helped optimize memory usage and performance, enabling the editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements to the GstValidate testing framework, which will be useful for other parts of GStreamer.

Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia’s work. This allows user applications such as the WebKitGTK web browser to leverage the hardware accelerated decoding provided by VAAPI to play H.266 video using GStreamer.

Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.

[Pie chart of Igalia's contributions to different areas of the GStreamer Rust project: vulkan (28%), other (26%), gstreamer (12%), ci (12%), tracer (7%), ges (7%), validate (5%), examples (5%)]

In addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many multimedia-related features of the web engine. Of the 1739 contributions to the WebKit project made last year by Igalia, the Multimedia team authored 323. Nearly one third of those were related to generic multimedia playback, and the rest were in areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system to provide adaptations for specific hardware multimedia platforms at runtime, WebCodecs or MediaRecorder.

[Pie chart of the Igalia Multimedia Team's contributions to different areas of the WebKit project: Generic GStreamer work (33%), WebRTC (20%), Regression bugfixing (9%), Other (7%), MSE (6%), BuildStream SDK (4%), MediaStream (3%), WPE platform (3%), WebAudio (3%), WebKitGTK platform (2%), Quirks (2%), MediaRecorder (2%), EME (2%), Glib (1%), WTF (1%), WebCodecs (1%), GPUProcess (1%), Streams (1%)]

We’re happy about what we’ve achieved along the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.


26 January, 2026 09:34AM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

[PLACEHOLDER] BL Carbon Release Notes

This post is a placeholder so that links can be added to BL live config popups.

The data will be added at release time.

26 January, 2026 12:00AM

January 23, 2026

hackergotchi for ZEVENET

ZEVENET

High availability is not redundancy — it’s operational decision-making

For years, high availability (HA) was treated as a redundancy problem: duplicate servers, replicate databases, maintain a secondary site and ensure that if something failed, there was a plan B waiting. That model worked when applications were monolithic, topologies were simple, and traffic variability was low. Today the environment looks different: applications are split into services, traffic is irregular, encryption is the norm, and infrastructure is distributed. Availability is no longer decided at the machine level, but at the operational plane.

The first relevant distinction appears when we separate binary failures from degradations. Most HA architectures are designed to detect obvious “crashes,” yet in production the meaningful incidents are rarely crashes—they are partial degradations (brownouts): the database responds, but slowly; a backend accepts connections but does not process; the Web Application Firewall (WAF) blocks legitimate traffic; intermittent timeouts create queues. For a basic health-check everything is “up”; for the user, it isn’t.

From redundancy to operational continuity

Operational degradations in production are not homogeneous. In general, we can distinguish at least six categories:

  • Failure (binary crash)
  • Partial failure (works, but incompletely)
  • Brownout (responds, but not on time)
  • Silent drop (no error, but traffic is lost)
  • Control-plane stall (decisions arrive too late)
  • Data-plane stall (traffic is blocked in-path)

The component that arbitrates this ambiguity is the load balancer. Not because it is the most critical part of the system, but because it is the only one observing real-time traffic and responsible for deciding when a service is “healthy,” when it is degraded, and when failover should be triggered. That decision becomes complex when factors like TLS encryption, session handling, inspection, security controls or latency decoupled from load interact. The load balancer does not merely route traffic—it determines continuity.

In real incidents, operational ambiguity surfaces like this:

Phenomenon              Failure type   Detected by health-check   User impact   LB decision          Real complexity
Backend down            Binary         Yes                        High          Immediate failover   Low
Backend slow            Brownout       Partial                    High          Late / None          High
Intermittent timeouts   Brownout       Not always                 Medium/High   Ambiguous            High
WAF blocking            Security       No                         High          None                 High
Slow TLS handshake      TLS layer      Partial                    Medium        N/A                  Medium
Session saturation      Stateful       No                         High          Unknown              High
Session transfer        Operational    No                         Medium       	Late                 Medium
DB degradation          Backend        Partial                    High          Not correlated       High
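The brownout cases come down to latency relative to an SLO rather than mere liveness. A minimal sketch of a latency-aware check (the thresholds are illustrative assumptions, not any product's defaults):

```python
def probe_status(latencies_ms, slo_ms=250, brownout_ratio=0.5):
    """Classify a backend from its recent response latencies (in ms).

    Returns "healthy", "brownout" (responds, but not on time), or "down".
    Thresholds are illustrative placeholders.
    """
    if not latencies_ms:
        return "down"          # no responses at all: binary failure
    slow = sum(1 for lat in latencies_ms if lat > slo_ms)
    if slow / len(latencies_ms) >= brownout_ratio:
        return "brownout"      # alive for a basic check, failing the SLO
    return "healthy"

print(probe_status([40, 55, 60, 48]))      # healthy
print(probe_status([300, 420, 80, 510]))   # brownout
print(probe_status([]))                    # down
```

A TCP-level check would report the second backend as healthy; comparing response times against the SLO is what surfaces the brownout.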

There is also a persistent misconception between availability and scaling. Scaling answers the question “how much load can I absorb?” High availability answers a completely different one: “what happens when something fails?” An application can scale flawlessly and still suffer a major incident because failover triggered too late, sessions failed to survive backend changes, or the control plane took too long to propagate state.

Encrypted traffic inspection adds another layer. In many environments, TLS inspection and the Web Application Firewall sit on a different plane than the load balancer. In theory this is modular; in practice it introduces coordination. If the firewall blocks part of legitimate traffic, the load balancer sees fewer errors than the system actually produces. If the backend degrades but the firewall masks the problem upstream, there is no clear signal. Availability becomes a question of coupling between planes.

The final problem is often epistemological: who owns the truth of the incident? During an outage, observability depends on who retains context. If the balancing plane, the inspection plane, the security plane and the monitoring plane are separate tools, the post-mortem becomes archaeology: fragmented logs, incomplete metrics, sampling, misaligned timestamps, and three contradictory narratives of the same event.

So what does high availability actually mean in 2026?

For operational teams, the definition that best fits reality is this: High availability is the ability to maintain continuity under non-binary failures.
This implies:

  1. understanding degradation vs true unavailability
  2. basing decisions on traffic and context, not just checks
  3. coordinating security, inspection and session
  4. having observability at the same plane that decides failover
  5. treating availability as an operational problem, not as hardware redundancy

Where does SKUDONET fit in this model?

SKUDONET Enterprise Edition is built around that premise: availability does not depend solely on having an extra node, but on coordinating, in a single operational plane, load balancing at layers 4 and 7, TLS termination and inspection, security policies, certificate management, and traffic observability. The goal is not to abstract complexity, but to place decision-making and understanding in the same context.

In environments where failover is exceptional, this coupling may go unnoticed. But in environments where degradation is intermittent and traffic is non-linear, high availability stops being a passive mechanism and becomes a process. What SKUDONET provides is not a guarantee that nothing will fail—such a guarantee does not exist—but an architecture where continuity depends less on assumptions and more on signals.

A 30-day evaluation of SKUDONET Enterprise Edition is available for teams who want to validate behavior under real workloads.

23 January, 2026 10:39AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

Breaking: XDG Adds Native Support for Linyaps

In the world of Linux desktop computing, there exists a foundational "common language" that underpins all interoperability—the XDG specifications, developed and maintained by the freedesktop.org organization. XDG is the critical standard for solving Linux's ecosystem fragmentation and establishing unified resource access protocols. Whether you are an application developer or a distribution maintainer, ensuring your product runs well on a modern Linux desktop necessitates adherence to the XDG standard. It is the key cornerstone enabling the Linux desktop to evolve from "working in silos" to "unified collaboration." From desktop icons and application menus to system notifications and file dialogs, XDG specifications permeate every facet ...Read more

23 January, 2026 09:54AM by xiaofei

January 21, 2026

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3
# Validate a cloud-init cloud-config file against cloud-init's bundled schema,
# without root and without touching the running system.

import sys
from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was to understand the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError, and sometimes just return False. No idea why. But the above just turns it into a couple of printed lines and a non-zero exit code. Unless of course there are no problems, then you get peaceful silence.

21 January, 2026 07:42PM

hackergotchi for Deepin

Deepin

January 20, 2026

hackergotchi for GreenboneOS

GreenboneOS

CVE-2025-64155: In the Wild Exploitation of FortiSIEM for Unauthenticated Root-Level RCE

On January 13th, 2026, Fortinet publicly disclosed and patched CVE-2025-64155 (CVSS 9.8) affecting FortiSIEM along with five additional vulnerabilities across its product line [1][2][3][4][5]. In particular, CVE-2025-64155 represents high-risk exposure; immediately after its release, active exploitation was reported. The flaw was responsibly disclosed to Fortinet almost six months ago (August 2025), by Horizon3.ai. Greenbone includes […]

20 January, 2026 07:53AM by Joseph Lee

January 19, 2026

hackergotchi for Deepin

Deepin

deepin 25.0.10 Release Note

In order to further optimize the deepin 25 system update experience and enhance stability, the deepin 25.0.10 image is now officially released. This update focuses on system installation experience, file management, system interaction, and stability, optimizing multiple high-frequency usage scenarios, fixing a large number of known issues, and improving system smoothness and reliability.   Key Updates in This Release System Installer: Optimized the prompt text for data formatting during full-disk installation, now supporting the option to retain user data and reuse the original account data, configurations, and files. Comprehensive Upgrade of File Manager: Added practical features such as automatic scrolling during file ...Read more

19 January, 2026 05:41AM by xiaofei

January 15, 2026

hackergotchi for Tails

Tails

Tails 7.4

New feature

Persistent language and keyboard layout

You can now save your language and keyboard layout from the Welcome Screen to the USB stick. These settings will be applied automatically when restarting Tails.

If you turn on this option, your language and keyboard layout are saved unencrypted on the USB stick to help you type the passphrase of your Persistent Storage more easily.

Changes and updates

  • Update Tor Browser to 15.0.4.

  • Update Thunderbird to 140.6.0.

  • Update the Linux kernel to 6.12.63.

  • Drop support for BitTorrent download.

    With the ongoing transition from BitTorrent v1 to v2, the BitTorrent v1 files that we provided until now can become a security concern. We don't think that updating to BitTorrent v2 is worth the extra migration and maintenance cost for our team.

    Direct download from one of our mirrors is usually faster.

Fixed problems

  • Fix opening .gpg encrypted files in Kleopatra when double-clicking or selecting Open with Kleopatra from the shortcut menu. (#21281)

  • Fix the desktop crashing when unlocking VeraCrypt volumes with a wrong password. (#21286)

  • Use 24-hour time format consistently in the top navigation bar and the lock screen. (#21310)

For more details, read our changelog.

Get Tails 7.4

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 7.0 or later to 7.4.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 7.4 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 7.4 directly:

15 January, 2026 12:00AM

January 14, 2026

hackergotchi for ZEVENET

ZEVENET

Cloud security works, but not as a unified system

Talking about the cloud today no longer means discussing a technological trend, but a central piece of the business. More and more companies are moving their infrastructure to cloud providers under the promise of less hardware, less maintenance, fewer licenses and less time spent on activities that do not generate value.

Much of that promise has been fulfilled. Cloud has democratized capabilities that only large organizations could access a few years ago. Launching a service, increasing capacity or deploying a new region is now easier, faster and more accessible.

However, as often happens with technology, the story changes when we zoom into operations. Cloud simplifies infrastructure, but it does not always simplify how that infrastructure is operated. And that nuance affects not only technical teams, but also the business itself.

Cloud providers don’t sell “solutions” — they sell components

The first point of friction does not appear in compute or storage, but in the services that accompany the infrastructure. This includes security, load balancing, TLS certificates, application firewalls, monitoring and observability.

In the cloud provider’s catalog, the technology is there, but it is sold as separate components. Security on one side, certificates on another, observability on another, and advanced capabilities billed as add-ons. The customer is not left without service, but is left with a recurring question: what exactly must be purchased to remain protected and operate reliably?

A less visible aspect also emerges: security is billed per event, per inspection or per volume of traffic. What used to be a hardware expense becomes a bill based on requests, analysis and certificates. Cloud solved hardware, but externalized the operational complexity of security.

Metrics and logs exist, but they are often fragmented, sampled and weakly correlated. Understanding what happened during an incident may require navigating multiple services and data models. Cloud promises security, but it rarely promises explanations.

And at its core this is not a technical problem, but a model problem. Cloud security is commercialized as a product but consumed as a service. And when there is a mismatch between how something is purchased and how it is used, friction eventually appears.

SkudoCloud as an example of the managed approach

This is the context in which SkudoCloud emerges — not to replace the cloud provider or compete as infrastructure, but to resolve the operational coherence between load balancing, security and visibility.

SkudoCloud is a SaaS platform that enables companies to deploy advanced load balancing and application protection without assembling separate modules, tools or services. From a single interface, organizations can:

  • manage SSL/TLS certificates
  • inspect encrypted traffic
  • apply WAF rules
  • distribute load across backends
  • and monitor application behavior

The most evident difference appears in security. In the modular cloud model, the customer must decide what to purchase, which rules to enable, how to correlate logs and how to keep everything updated. In a managed model like SkudoCloud, certificates, WAF, TLS inspection and load balancing behave as one coherent system.

This has direct consequences for the business:

  • it reduces operational uncertainty
  • it improves visibility during incidents
  • and it avoids billing models tied to traffic volume or number of inspections

Instead of acquiring security, companies acquire operability. Instead of assembling components, they obtain an outcome. That is the difference of a managed approach.

Conclusion

Cloud adoption is already a given. The real question now is how to operate it sustainably. Fragmentation was a natural side effect of the migration phase. Unification will likely be the central theme of the operational phase.

Cloud simplified servers. Now it is time to simplify operations.

14 January, 2026 10:25AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

January 13, 2026

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: December 2025

"Fit and finish" appears in many industries. For much of the software industry, it refers to features that complete a fit for a target audience, ensuring that audience can use the product for their needs. At a frame shop, it means literally fitting the mounted artwork into a frame, then finishing the back of the frame.

At Purism, fit takes on another meaning - making apps fit on screens the size of the Librem 5.

The post PureOS Crimson Development Report: December 2025 appeared first on Purism.

13 January, 2026 10:11PM by Purism

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Forum downtime.

Apologies to users who were hit by forum downtime from ~9:00 to 16:30 Japan time. An upstream server crash combined with an unplanned package upgrade meant some configurations had to be edited. I think all is well now.

13 January, 2026 12:00AM

[DONE] BunsenLabs Carbon Release Candidate 2 iso available for testing

RC3 is now available, so please test that one - links here: https://forums.bunsenlabs.org/viewtopic.php?id=9682

---
As usual it was a longer road than planned, with some unexpected tasks, but there is now a Carbon RC2 candidate iso file available for download here:
https://sourceforge.net/projects/bunsen … hybrid.iso
sha256 checksum: d0beb580ba500e2b562e1f39aa6ec02d03597d8f95d73fd86c1755e3fee1ef7d

If you have a free machine or VM to install it on, please give it some testing!

And please post any bugs here: https://forums.bunsenlabs.org/viewtopic.php?id=9656
That thread is now closed because having multiple bug reports mixed up together was too confusing. Please post any new bugs related to the Carbon RC2 iso in individual threads in the Bug Reports section, adding a tag [Carbon RC2].

When it seems as if there aren't any bugs left to squash, we can do an Official Release.

13 January, 2026 12:00AM

January 11, 2026

hackergotchi for SparkyLinux

SparkyLinux

Labwc

There is a new desktop available for Sparkers: Labwc, as well as a Sparky 2026.01~dev Labwc ISO image. What is Labwc? Installation on Sparky testing (9): (packages installation only, requires your own set up): or (with Sparky settings): via APTus (>= 20260108)-> Desktops-> Labwc or (with Sparky settings): via the Sparky testing (9) MinimalGUI/ISO image. Then reboot to take effect…

Source

11 January, 2026 11:42AM by pavroo

January 10, 2026

hackergotchi for Grml developers

Grml developers

Michael Prokop: Bookdump 2025

Photo of the books presented here

With an average of slightly more than one book per week, my 2025 reading year was comparable to 2024. My best-of of the books I finished reading in 2025 (those I found particularly worth reading or would recommend; the order matches the photo and implies no ranking):

  • Russische Spezialitäten, Dmitrij Kapitelman. What a firework of a book: linguistically powerful, sad, amusing.
  • Die Jungfrau, Monika Helfer. After Helfer's "Die Bagage", "Löwenherz" and "Vati", this book was of course required reading for me.
  • Das Buch zum Film, Clemens J. Setz. Wonderful everyday observations and bons mots; really my only criticism: at 192 pages, it is too short.
  • Wackelkontakt, Wolf Haas. Yes, yes, a well-known bestseller and so on. But he is and remains one of my favourite authors. I attended his reading in Graz and afterwards even read the book a second time, without regretting a second of it. A language artist, and that's putting it mildly!
  • Fleisch ist mein Gemüse, Heinz Strunk. I love background stories, especially about music and the musician's life, and that is the case here with its excursion into the dance-music business. Apart from a few exceptions, it reads like a breeze.
  • Wut und Wertung: Warum wir über Geschmack streiten, Johannes Franzen. Why do conflicts about taste, art and canon escalate? Why is arguing about taste an important cultural technique? Franzen works through this using real controversies and scandals; instructive and stimulating.
  • Klapper, Kurt Prödel. Fans of Clemens J. Setz naturally know Prödel, and since I also like coming-of-age novels, this was a double hit. I am already looking forward to his new book "Salto"!
  • Hier treibt mein Kartoffelherz, Anna Weidenholzer. I can say absolutely nothing more about this book, but I really enjoyed reading it.
  • Die Infantin trägt den Scheitel links, Helena Adler. The book had an interesting pull on me; I simply wanted to keep reading. The playful language and wordplay made it even finer.
  • Das schöne Leben, Christiane Rösinger. Rösinger's books were recommended to me by Kathrin Passig (a direct hit, thanks!). I also got hold of all her other books ("Berlin – Baku. Meine Reise zum Eurovision Song Contest", "Zukunft machen wir später: Meine Deutschstunden mit Geflüchteten", "Liebe wird oft überbewertet") and enjoyed reading them very much.

10 January, 2026 05:29PM

January 09, 2026

hackergotchi for Proxmox VE

Proxmox VE

New Archive CDN for End-of-Life (EOL) Releases

Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older.

The archive is reachable via the following URLs:

To use the archive for an EOL release, you will need to change the domain in the apt repository configuration...

Read more

09 January, 2026 05:43PM by t.lamprecht (invalid@example.com)

January 08, 2026

hackergotchi for GreenboneOS

GreenboneOS

December 2025 Threat Report: Emergency End-of-Year Patches and New Exploit Campaigns

In 2025, Greenbone increased the total number of vulnerability tests in the OPENVAS ENTERPRISE FEED to over 227,000, adding almost 40,000 vulnerability checks. Since the first CVE was published in 1999, over 300,000 software vulnerabilities have been added to MITRE’s CVE repository. CVE disclosures continued to rocket upward, increasing roughly 21% compared to 2024. CISA […]

08 January, 2026 01:05PM by Joseph Lee

January 07, 2026

hackergotchi for Deepin

Deepin

January 06, 2026

hackergotchi for VyOS

VyOS

VyOS 1.4.4 LTS Achieves Nutanix Ready Validation for AOS 7.3

We’re excited to announce that VyOS 1.4.4 LTS has officially achieved Nutanix Ready validation for Nutanix Acropolis Operating System (AOS) 7.3 and AHV Hypervisor 10.3.

This milestone strengthens our collaboration with Nutanix and ensures full interoperability for customers deploying VyOS Universal Router within the Nutanix Cloud Infrastructure solution.

06 January, 2026 02:30PM by Santiago Blanquet (yago.blanquet@vyos.io)

hackergotchi for Deepin

Deepin

January 03, 2026

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

New utility: xml2xfconf

In the process of getting Blob - and Carbon - ready for release, a bug with blob's handling of xfconf settings came up: https://forums.bunsenlabs.org/viewtopic … 79#p148079

It turned out that while xfconf-query doesn't output the type of settings entries, it requires knowing the type when adding a new entry. So running 'xfconf-query -c "<channel>" -lv' is not enough for backing up an xfce app which stores its settings in the xfconf database - which most of them do these days. We need to store the type too. That data is luckily stored in the app's xml file in ~/.config/xfce4/xfconf/xfce-perchannel-xml/ so to back it up, all we need to do is save that file.

In principle it might be possible to restore the settings by copying the xml file back into place, overwriting whatever's there, but the apps don't always respond right away, often needing a logout/in. There's a better way - if you know the missing type then you can run xfconf-query commands to restore the settings.

So, this script called xml2xfconf. Passed an xfconf xml file - eg a backed-up copy of one of those in xfce-perchannel-xml/ - it will print out a list of xfconf-query commands to apply those settings to the xfconf database, and they'll take effect immediately.

Example usage:

restore=$(mktemp)
xml2xfconf -x /path/to/xfce4-terminal.xml -c xfce4-terminal > "$restore"
bash "$restore"

Here's what got written into $restore:

xfconf-query -c xfce4-terminal -p /font-name -n -t string -s Monospace\ 10
xfconf-query -c xfce4-terminal -p /color-use-theme -n -t bool -s false
xfconf-query -c xfce4-terminal -p /font-allow-bold -n -t bool -s true
xfconf-query -c xfce4-terminal -p /title-mode -n -t string -s TERMINAL_TITLE_REPLACE
xfconf-query -c xfce4-terminal -p /scrolling-lines -n -t uint -s 50000
xfconf-query -c xfce4-terminal -p /font-use-system -n -t bool -s false
xfconf-query -c xfce4-terminal -p /background-mode -n -t string -s TERMINAL_BACKGROUND_TRANSPARENT
xfconf-query -c xfce4-terminal -p /background-darkness -n -t double -s 0.94999999999999996
xfconf-query -c xfce4-terminal -p /color-bold-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold-is-bright -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-background-vary -n -t bool -s false
xfconf-query -c xfce4-terminal -p /color-foreground -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-background -n -t string -s \#2c2c2c
xfconf-query -c xfce4-terminal -p /color-cursor-foreground -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-cursor -n -t string -s \#dcdcdc
xfconf-query -c xfce4-terminal -p /color-cursor-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-selection -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-background -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-selection-use-default -n -t bool -s true
xfconf-query -c xfce4-terminal -p /color-bold -n -t string -s ''
xfconf-query -c xfce4-terminal -p /color-palette -n -t string -s \#3f3f3f\;#705050\;#60b48a\;#dfaf8f\;#9ab8d7\;#dc8cc3\;#8cd0d3\;#dcdcdc\;#709080\;#dca3a3\;#72d5a3\;#f0dfaf\;#94bff3\;#ec93d3\;#93e0e3\;#ffffff
xfconf-query -c xfce4-terminal -p /tab-activity-color -n -t string -s \#aa0000
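The transformation the script performs can be sketched roughly like this (a simplified illustration in Python, not the actual bunsen-utilities implementation; it handles scalar properties only and ignores array values and locked settings):

```python
# Simplified sketch of the xml2xfconf idea: read an xfconf per-channel XML
# document and emit xfconf-query commands that recreate each scalar property.
# Not the real bunsen-utilities script.
import shlex
import xml.etree.ElementTree as ET

def xml_to_xfconf(xml_text, channel):
    """Return a list of xfconf-query commands recreating each property."""
    commands = []

    def walk(elem, prefix):
        for prop in elem.findall("property"):
            path = f"{prefix}/{prop.get('name')}"
            ptype = prop.get("type")
            if ptype == "empty":
                walk(prop, path)  # container node: recurse into children
            else:
                value = prop.get("value", "")
                commands.append(
                    f"xfconf-query -c {shlex.quote(channel)}"
                    f" -p {shlex.quote(path)} -n -t {shlex.quote(ptype)}"
                    f" -s {shlex.quote(value)}"
                )

    walk(ET.fromstring(xml_text), "")
    return commands

demo = ('<channel name="demo" version="1.0">'
        '<property name="font-name" type="string" value="Monospace 10"/>'
        '</channel>')
for cmd in xml_to_xfconf(demo, "demo"):
    print(cmd)  # xfconf-query -c demo -p /font-name -n -t string -s 'Monospace 10'
```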

xml2xfconf has been uploaded in the latest version of bunsen-utilities, so now I'm going to rewrite the bits of BLOB which use xfconf (only a couple of apps actually) to use xml2xfconf and with luck the bug which @Dave75 found will go away.

And then the Carbon release can get rolling again.

It wasn't a welcome interruption, but this new utility might be useful outside Blob for people who want to back up and restore xfce app settings.

03 January, 2026 12:00AM

January 02, 2026

hackergotchi for ZEVENET

ZEVENET

How to Evaluate a WAF in 2026 for SaaS Environments

Web applications and APIs are now the operational core of most digital services. They process transactions, expose business logic, manage identities, and connect distributed systems that evolve continuously. In parallel, the volume and sophistication of attacks have increased, driven by automation, accessible tooling, and cloud-specific attack vectors.

Web Application Firewalls remain a critical part of the security stack—but in 2026, the challenge is no longer whether a WAF is deployed. The real question is whether it can be evaluated, measured, and trusted under real operating conditions, especially when consumed as a service.

As WAFs move to SaaS models, teams delegate infrastructure, scaling, and maintenance to the provider. This simplifies operations, but it also changes the evaluation criteria. When you no longer control the underlying system, visibility, isolation, and predictable behavior become non-negotiable technical requirements.

Evaluating a WAF in 2026 is fundamentally different

Traditional evaluations focused heavily on rule coverage or whether a solution “covers OWASP Top 10.” Those checks still matter—but they no longer reflect production reality.

A modern evaluation must answer practical, operational questions:

  • Can the WAF block malicious traffic without breaking legitimate flows?
  • Does it behave consistently in prevention mode and under load?
  • Can its decisions be observed, explained, and audited?

In SaaS environments, this becomes even more critical. When a false positive blocks production traffic or latency spikes unexpectedly, there is often no lower layer to compensate. The WAF’s behavior is the system’s behavior. If that behavior cannot be measured and understood, the evaluation is incomplete.

Why most SaaS WAF evaluations fall short

Many WAF evaluations fail not due to lack of expertise, but because the process itself is incomplete.
Common pitfalls include:

  • Testing in monitor-only mode instead of prevention
  • Relying on default configurations with no real traffic
  • Ignoring operational limits until production
  • Inability to trace why a request was blocked

In SaaS models, additional constraints often surface late: payload size limits, rule caps, log retention, export restrictions, or rate limits in the control plane. These are not secondary details—they directly affect detection quality and incident response.

A meaningful evaluation must be observable and reproducible. If you cannot trace decisions through logs, correlate them with metrics, and explain them after the fact, the WAF becomes a black box.

Detection quality is defined by false positives, not demos

Detection capability is often summarized by a single number, usually the True Positive Rate (TPR). While important, this metric alone is misleading.

A WAF that aggressively blocks everything will score well in detection tests—and fail catastrophically in production.

Real-world evaluation must consider both sides of the equation: blocking malicious traffic and allowing legitimate traffic to pass. False positives are not a mere usability issue; in API-driven systems especially, payload structure, schemas, and request volume amplify their cost.

At scale, even a low False Positive Rate (FPR) can result in:

  • Broken user flows
  • Failed API calls
  • Increased operational load
  • Pressure to weaken or disable protections

This is where most evaluations break down in practice: not on attack detection, but on how much legitimate traffic is disrupted.
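Some illustrative arithmetic (the counts below are invented for the example, not taken from any benchmark): even a false-positive rate of 0.05% translates into thousands of blocked legitimate requests per day at typical SaaS volumes.

```python
# Illustrative only: why a "low" FPR still hurts at scale.
# Counts below are invented for the example.

def rates(tp, fn, fp, tn):
    """True/False Positive Rates from a confusion matrix of request counts."""
    tpr = tp / (tp + fn)  # share of malicious requests actually blocked
    fpr = fp / (fp + tn)  # share of legitimate requests wrongly blocked
    return tpr, fpr

def blocked_legit_per_day(fpr, legit_requests_per_day):
    """Absolute number of legitimate requests blocked per day."""
    return fpr * legit_requests_per_day

tpr, fpr = rates(tp=990, fn=10, fp=50, tn=99950)
print(f"TPR={tpr:.3f} FPR={fpr:.4f}")          # TPR=0.990 FPR=0.0005
print(blocked_legit_per_day(fpr, 10_000_000))  # 5000.0
```

The detection score looks excellent in isolation; the absolute false-positive count is what the operations team actually experiences.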

A realistic PoC should include scenarios like:

Source of false positives   | Real-world example            | What to test
Complex request bodies      | Deep JSON, multipart forms    | Recorded API and UI traffic
Business logic flows        | Search, filtering, checkout   | End-to-end navigation
Uploads                     | PDFs, images, metadata        | Real upload paths
Atypical headers            | Large cookies, custom headers | Reverse proxy captures

In SaaS environments, false positives are even more costly, as tuning depends on provider capabilities, change latency, and visibility into decisions.

SKUDONET Cloud Solution

SkudoCloud was designed to deliver application delivery and WAF capabilities as a SaaS service while preserving the technical properties advanced teams need to operate safely in production: transparent inspection, predictable isolation, and full visibility into traffic and security decisions. The goal is to remove infrastructure overhead without turning operations into a black box.

That same philosophy shapes how WAFs should be evaluated in 2026. Teams should assess real behavior: prevention mode, realistic traffic patterns, false positives, API payloads, and performance under load—especially when the service is managed and the underlying system is not directly accessible.

To support that evaluation, we have documented the full methodology in our technical guide:

👉 Download the full guide:

02 January, 2026 11:03AM by Nieves Álvarez

January 01, 2026

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/12

The 12th monthly Sparky project and donate report of 2025: – Linux kernel updated up to 6.18.2, 6.12.63-LTS, 6.6.119-LTS – Added to “dev” repos: COSMIC desktop – Sparky 2025.12 & 2025.12 Special Editions released Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in January too, please.

Source

01 January, 2026 08:08PM by pavroo

December 30, 2025

hackergotchi for Deepin

Deepin

December 29, 2025

hackergotchi for ZEVENET

ZEVENET

Why Application Delivery Visibility Breaks in Secure Architectures

Modern application delivery architectures are built with the right goals in mind. Load balancers distribute traffic, Web Application Firewalls enforce security policies, TLS protects data in transit, and monitoring systems promise observability across the stack. On paper, everything seems covered.

In real production environments, however, many of these architectures operate with critical blind spots, especially when security components start making decisions that engineers cannot fully see, trace, or explain. This is rarely caused by a lack of tools; more often, it is the result of how security is embedded into the delivery path.

As security becomes more deeply integrated into application delivery, visibility does not automatically follow.

When security turns into a black box

In most production environments, security is no longer a separate layer. WAFs sit directly in the traffic path, inspecting requests, evaluating rules, applying reputation checks and deciding — in real time — whether traffic is allowed or blocked. TLS inspection happens inline, and policies are often updated automatically.

The problem is not that these decisions exist. The problem is that, very often, they cannot be clearly explained after the fact.

In many deployments, teams quickly run into the same limitations:

  • Engineers cannot determine which specific rule caused a request to be blocked
  • WAF logic is exposed only through high-level categories or abstract scores
  • Encrypted traffic is inspected, but the inspection process itself remains invisible
  • Logs are available, but without enough context to correlate decisions with behaviour

The result is a paradox that experienced teams recognize immediately: security coverage increases, while operational visibility decreases.

Common Security Blind Spots in Application Delivery Architectures

These blind spots rarely appear during normal operation. They tend to surface under pressure: traffic spikes, false positives, performance degradation or partial outages. When they do, troubleshooting becomes significantly more complex, because the information engineers need is often incomplete or fragmented.

1. Encrypted traffic without explainability

TLS encryption is essential, but it fundamentally changes how visibility works. In many application delivery stacks, traffic is decrypted at some point, inspected, and then re-encrypted. Security decisions are made, but the path between request, rule and outcome is not always traceable.

When something breaks, engineers are often left with little more than a generic message: “Request blocked by WAF.”

What is missing is the ability to correlate:

  • the original request,
  • the specific rule or condition involved,
  • the security decision that was applied,
  • and the downstream impact on the application.

Without that correlation, root cause analysis turns into guesswork rather than engineering.
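The correlation described above amounts to joining security-decision records and backend access records on a shared request identifier. The sketch below illustrates the idea; the log field names are hypothetical and stand in for whatever IDs a given proxy or WAF actually propagates.

```python
# Sketch: joining WAF decision logs with backend access logs on a shared
# request ID, so a blocked request can be traced end to end.
# Field names ("request_id", "rule_id", ...) are illustrative, not any
# specific WAF's schema.

waf_log = [
    {"request_id": "r-101", "rule_id": "sqli-942100", "action": "block"},
    {"request_id": "r-102", "rule_id": None, "action": "allow"},
]
backend_log = [
    {"request_id": "r-102", "status": 200, "latency_ms": 18},
    # r-101 never reached the backend: it was blocked upstream.
]

def trace(request_id, waf_entries, backend_entries):
    """Return the full decision path for one request."""
    decision = next((e for e in waf_entries if e["request_id"] == request_id), None)
    outcome = next((e for e in backend_entries if e["request_id"] == request_id), None)
    return {"request_id": request_id, "waf": decision, "backend": outcome}

print(trace("r-101", waf_log, backend_log))
# The blocked request carries a rule ID and has no backend entry, which is
# exactly the request -> rule -> decision -> downstream-impact chain the
# text asks for.
```

When a WAF only emits "request blocked" without a stable request ID or rule ID, this join is impossible, which is the operational gap the section describes.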

2. Abstracted or hidden WAF rule logic

Many WAF platforms expose protection logic through simplified models such as risk scores, rule categories or predefined profiles. While these abstractions make dashboards easier to read, they remove critical detail from day-to-day operations.

When rule logic cannot be inspected directly:

  • false positives are harder to tune with precision,
  • rule conflicts remain invisible,
  • behaviour changes appear without an obvious trigger.

Over time, this erodes trust in automated protection. Teams stop understanding why something happens and start compensating by weakening policies instead of fixing the underlying issue.

3. Security decisions that impact delivery

Security controls do more than allow or block requests. They influence how connections are handled, how retries behave, how sessions persist, and how backend health is perceived by the delivery layer.

When these effects are not visible, delivery problems are often misdiagnosed:

  • backend instability may actually be selective blocking,
  • uneven load distribution may come from upstream filtering,
  • timeouts may be caused by inspection delays under load.

Engineers end up debugging load balancing logic or application behaviour, while the real cause sits silently inside the security layer.

4. Logs without operational context

Logs are often treated as a substitute for visibility. In practice, they are frequently:

  • sampled or rate-limited,
  • delayed,
  • detached from real-time behaviour,
  • owned or processed externally.

A log entry that explains what happened, but not why, is not observability; it is a post-mortem artifact. In critical environments, teams need actionable insight while an incident is unfolding, not hours later.

What Modern WAF Architectures Must Provide

A WAF integrated into the application delivery path should not act as an opaque enforcement layer. Instead, it should provide visibility at each critical stage of the decision process.

In practical terms, this means enabling teams to:

  • Trace each security decision end to end, from the incoming request to the final action applied.
  • Inspect WAF rule logic directly, without relying on abstract categories or risk scores.
  • Correlate blocked or modified requests with delivery behaviour, such as backend health, session persistence or retries.
  • Analyze encrypted traffic transparently, without losing context once TLS inspection is performed.
  • Maintain consistent visibility under load, during traffic peaks or active incidents.

Without these capabilities, security controls may protect applications, but they also introduce operational blind spots that slow down troubleshooting and increase risk.

Visibility in Secure Application Delivery with SKUDONET

SKUDONET Enterprise Edition is designed around a simple principle: security must protect traffic without breaking visibility.

Instead of treating security as a separate black box, SKUDONET integrates WAF and traffic management into a single, observable application delivery platform. This approach ensures that security decisions remain transparent, traceable and actionable for engineers working in real production conditions.

SKUDONET Application Delivery Visibility

Key aspects of this design include:

  • Full visibility and control over WAF rules and behaviour, allowing administrators to inspect rule logic, modify or disable existing rules, and define new ones based on real traffic patterns and application requirements.
  • Clear correlation between security decisions and application delivery impact, making it possible to understand exactly where a request failed, why it was blocked or modified, and how that decision affected backend behaviour.
  • Transparent inspection of encrypted traffic, preserving full request context throughout the entire lifecycle, from decryption to enforcement and delivery.
  • Actionable logging and diagnostics, designed to explain not only what happened, but why it happened, enabling effective tuning, troubleshooting and auditing.

By removing opacity from security enforcement, SKUDONET helps teams retain control over both protection and performance—especially in high-traffic or business-critical environments where visibility is essential.

A 30-day, fully functional evaluation of SKUDONET Enterprise Edition is available for teams who want to validate this level of visibility and control under real workloads.

29 December, 2025 11:31AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

December 26, 2025

hackergotchi for SparkyLinux

SparkyLinux

COSMIC

There is a new desktop available for Sparkers: COSMIC. What is COSMIC? The COSMIC desktop is available via the Sparky ‘dev’ repositories, so the repo has to be enabled to install the COSMIC desktop on top of Sparky 8 stable or testing (9). It uses the Wayland session as default. The Sparky meta package installs the ‘xwayland’ package by default, but some applications cannot work or launch. That’s why I…

Source

26 December, 2025 04:21PM by pavroo

December 25, 2025

hackergotchi for Deepin

Deepin

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Carbon version of bunsen-blob is now available

BLOB, the utility that lets people try different desktop theming sets (e.g. go back to the Boron look, or even Crunchbang) has been upgraded to 13.1-1 on the Carbon repository. This brings a lot of improvements, like support for xfce4-panel profiles, xfconf settings for xfce4-terminal, flexibility over wallpaper settings with a switch to feh by default, and more.

See the changelog: https://github.com/BunsenLabs/bunsen-bl … /changelog
or all the commits: https://github.com/BunsenLabs/bunsen-bl … ts/carbon/

Right now "BLOB Themes Manager" is commented out of jgmenu's prepend.csv, but if you're on a Carbon system you can install the package 'bunsen-blob' and uncomment that menu line to use it. Please check it out.

Soon, an upgraded bunsen-configs will include it in the menu by default, and it will be added to the meta- and iso-package lists. A Release Candidate Carbon iso is not far away...

25 December, 2025 12:00AM

December 23, 2025

hackergotchi for Deepin

Deepin

December 21, 2025

hackergotchi for ArcheOS

ArcheOS

The OpArc Project at the SFSCON

Hello everyone, 

this short post is to let anyone interested know that our presentation at SFSCON, held in Bolzano/Bozen at the NOI Techpark, has been published online. SFSCON stands for South Tyrol Free Software Conference and is one of Europe's most established annual conferences on Free Software.

This year we at Arc-Team decided to participate with a talk that summarized our approximately 20 years of experience in applying the Open Source philosophy to archaeology (both in the software and hardware fields).

The presentation was titled "Arc-Team and the OpArc Project" and can be viewed both on the conference's official website (where you can also download a PDF version) and on the conference's YouTube channel

I hope the presentation can be interesting for someone. Have a nice day! 

21 December, 2025 02:47PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for Qubes

Qubes

Qubes OS 4.3.0 has been released!

We’re pleased to announce the stable release of Qubes OS 4.3.0! This minor release includes a host of new features, improvements, and bug fixes. The ISO and associated verification files are available on the downloads page.

What’s new in Qubes 4.3?

  • Dom0 upgraded to Fedora 41 (#9402).
  • Xen upgraded to version 4.19 (#9420).
  • Default Fedora template upgraded to Fedora 42 (older versions not supported).
  • Default Debian template upgraded to Debian 13 (versions older than 12 not supported).
  • Default Whonix templates upgraded to Whonix 18 (older versions not supported).
  • Preloaded disposables (#1512)
  • Device “self-identity oriented” assignment (a.k.a. New Devices API) (#9325)
  • Qubes Windows Tools reintroduced with improved features (#1861).

These are just a few highlights from the many changes included in this release. For a more comprehensive list of changes, see the Qubes OS 4.3 release notes.

How to get Qubes OS 4.3.0

  • If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.3.0 ISO and follow our installation guide.

  • If you’re currently using Qubes 4.2, learn how to upgrade to Qubes 4.3.

  • If you’re currently using a Qubes 4.3 release candidate (RC), update normally (which includes upgrading any EOL templates and standalones you might have) in order to make your system effectively equivalent to the stable Qubes 4.3.0 release. No reinstallation or other special action is required.

In all cases, we strongly recommend making a full backup beforehand.

Known issues in Qubes OS 4.3.0

Templates restored in 4.3.0 from a pre-4.3 backup may continue to target their original Qubes OS release repos (#8701). After restoring such templates in 4.3.0, you must enter the following additional commands in a dom0 terminal:

sudo qubes-dom0-update -y qubes-dist-upgrade
sudo qubes-dist-upgrade --releasever=4.3 --template-standalone-upgrade -y

This will automatically choose the templates that need to be updated. The templates will be shut down during this process.

Fresh templates on a clean 4.3.0 installation are not affected. Users who perform an in-place upgrade from 4.2 to 4.3 (instead of restoring templates from a backup) are also not affected, since the in-place upgrade process already includes the above fix in stage 4. For more information, see issue #8701.

View the full list of known bugs affecting Qubes 4.3 in our issue tracker.

Support for older releases

In accordance with our release support policy, Qubes 4.2 will remain supported for six months after the release of Qubes 4.3, until 2026-06-21. After that, Qubes 4.2 will no longer receive security updates or bug fixes.

Whonix templates are created and supported by our partner, the Whonix Project. The Whonix Project has set its own support policy for Whonix templates in Qubes. For more information, see the Whonix Support Schedule.

Thank you to our partners, donors, contributors, and testers!

This release would not be possible without generous support from our partners and donors, as well as contributions from our active community members, especially bug reports from our testers. We are eternally grateful to our excellent community for making the Qubes OS Project a great example of open-source collaboration.

21 December, 2025 12:00AM

December 20, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12 Special Editions

There are new iso images of Sparky 2025.12 Special Editions out there: GameOver, Multimedia and Rescue. This release is based on Debian testing “Forky”. The December update of Sparky Special Edition iso images feature Linux kernel 6.17, updated packages from Debian and Sparky testing repos as of December 20, 2025, and most changes introduced at the 2025.12 release. The Linux kernels 6.18.2, 6.

Source

20 December, 2025 10:10PM by pavroo

hackergotchi for Purism PureOS

Purism PureOS

A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right

In her book Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of Silicon Valley, published in 2000, Borsook who is based in Palo Alto, California and has previously written for Wired and a host of other industry publications, took aim at what she saw as disturbing trends among the tech industry.

The post A Quarter Century After Cyberselfish, Big Tech Proves Borsook Right appeared first on Purism.

20 December, 2025 05:10AM by Purism

December 19, 2025

hackergotchi for GreenboneOS

GreenboneOS

New Actively Exploited CVSS 10 Flaw in Cisco AsyncOS Spam Quarantine Remote Access

Update January 26, 2026: On January 15th, 2026, Cisco released patches for CVE-2025-20393 (CVSS 10). Cisco recommends upgrading to a fixed release. The patches are intended to remove the persistence mechanisms observed in the campaign. There are no workarounds; patching is required for complete mitigation. Fixed versions are: • Cisco Secure Email Gateway (SEG) […]

19 December, 2025 11:54AM by Joseph Lee

hackergotchi for VyOS

VyOS

VyOS 1.4.4 released: syslog over TLS, AWS GLB support, and 50+ bug fixes

Hello, Community!

Customers and holders of contributor subscriptions can now download VyOS 1.4.4 release images and the corresponding source tarball. This release adds TLS support for syslog, support for the AWS gateway load balancer tunnel handler (on AWS only), an option to match BGP prefix origin validation extended communities in route maps, and more. It also fixes over fifty bugs. Additionally, there is now proper validation to prevent manually assigned multicast addresses, which may break some old malformed configs, so pay attention to it. Last but not least, there is a deprecation warning for SSH DSA keys, which will stop working in VyOS releases after 1.5 due to changes in OpenSSH, so make sure to update your user accounts to keys using a more secure algorithm while you still have time.
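For the DSA deprecation note, migration simply means generating a modern key pair and re-adding the public key to the user account. The filenames below are illustrative; the VyOS configuration commands shown in the comments follow the `set system login` syntax documented for VyOS 1.4, but verify them against your release.

```shell
# Generate a replacement Ed25519 key pair (illustrative path, empty passphrase).
ssh-keygen -t ed25519 -N '' -q -f ./admin_ed25519 -C 'admin@example'

# The base64 blob (second field of the .pub file) is what the VyOS config expects:
KEY=$(awk '{print $2}' ./admin_ed25519.pub)
echo "generated key type: $(awk '{print $1}' ./admin_ed25519.pub)"

# On the VyOS side (not run here), the key would be added with, e.g.:
#   set system login user admin authentication public-keys admin@example key $KEY
#   set system login user admin authentication public-keys admin@example type ssh-ed25519
#   commit; save
# and any old DSA public-keys entries deleted before upgrading past 1.5.
```

After committing, confirm you can log in with the new key before removing the old one, so you are never locked out mid-migration.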

19 December, 2025 09:22AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for ZEVENET

ZEVENET

SKUDONET 2025: A Technical Recap of Product, Security and Platform Growth

2025 has been a defining year for SKUDONET — not because of a single announcement or isolated launch, but due to sustained progress across product development, security reinforcement and strategic expansion.

Throughout the year, our focus has remained consistent: strengthening the core platform, improving the operational experience for administrators, and ensuring that security and reliability evolve in line with real-world infrastructure demands.

This approach has translated into continuous, incremental improvements rather than disruptive changes, allowing teams to adopt new capabilities without compromising stability in production environments.

Product Evolution Throughout 2025

Over the course of eight product releases, SKUDONET continued to mature as an application delivery and security platform designed for critical environments.

Across these updates, we introduced:

  • 11 new features
  • 31 functional improvements
  • 23 stability fixes
  • 31 resolved security vulnerabilities (CVEs)

Rather than isolated enhancements, these updates reflect a continuous effort to simplify daily operations while reinforcing security and performance at scale.

Key areas of evolution included a renewed Web GUI, designed to be faster, more consistent and easier to navigate in complex environments, as well as meaningful progress in RBAC, enabling more precise and adaptable access control models.

Certificate management also saw significant improvements, with extended Let’s Encrypt automation, broader DNS provider support and fully automated renewal workflows. Alongside this, we reduced execution times for critical operations such as farm start/stop actions and API metric retrieval.

Security and Reliability Reinforcement

During the year, 31 CVEs were resolved, continuously hardening the platform’s attack surface. Beyond vulnerability remediation, SKUDONET focused on reinforcing internal consistency and predictability under load.

Key improvements were made across:

  • Traffic inspection and validation pipelines, improving consistency and traceability when processing and filtering requests
  • Logging and remote log forwarding, ensuring more reliable log handling and easier integration with external logging systems
  • Internal module stability, particularly within IPDS and RBAC, reducing edge-case behaviour under load

Several updates also introduced additional hardening measures, including:

  • Improved handling of client headers, mitigating spoofing and trust issues in proxied environments
  • More secure cookie insertion in HTTP/S services, with stronger defaults to reduce exposure to common web vulnerabilities
  • Stricter security defaults in the management interface, reinforcing protection of administrative access

Together, these enhancements contribute to a platform that behaves more predictably under pressure and is easier to audit and troubleshoot in production.

Automation and Operational Efficiency

Reducing operational overhead for administrators was another consistent theme throughout 2025.

Several improvements were introduced to simplify day-to-day operations and reduce manual intervention, including:

  • AutoUpdate, enabling systems to automatically check, download and install updates, helping teams stay current with security patches and platform improvements while minimizing maintenance windows
  • End-to-end SSL/TLS automation, covering the full certificate lifecycle from creation to renewal and notification, reducing manual certificate management effort
  • Performance optimizations in the Stats API and backend metrics collection, making integrations with monitoring and automation tools faster and more efficient

Together, these enhancements allow teams to spend less time on routine maintenance tasks and more time on capacity planning, optimization and higher-level architectural decisions.

SkudoCloud: A Strategic Step Forward

One of the most significant milestones of the year was the launch of SkudoCloud, SKUDONET’s fully managed SaaS platform for application delivery and security.

SkudoCloud introduces a new operational model in which teams can deploy secure application delivery infrastructure in minutes, without managing the underlying system lifecycle. From the first deployment, users benefit from:

  • Fully managed application delivery and security, removing the need to operate and maintain the platform
  • Integrated traffic management and protection, aligned with the same delivery and security principles as SKUDONET Enterprise Edition
  • Immediate availability of advanced security controls, applied from the initial deployment
  • A simplified operational model, focused on usage rather than infrastructure management

This launch represents a strategic expansion of the SKUDONET ecosystem, complementing on-premise and self-managed deployments with a cloud-native option designed for teams that prioritize simplicity, speed and operational focus.

Expanding Our Global Partner Ecosystem

Alongside product evolution, SKUDONET continued to expand its international presence.

During 2025, seven new partners joined our ecosystem across Europe, Asia and Latin America, strengthening our ability to support customers globally while maintaining close technical collaboration at a regional level:

  • 🇪🇸 Virtual Cable (Spain)
  • 🇹🇷 Fortiva (Turkey)
  • 🇮🇩 SINUX (Indonesia)
  • 🇪🇸 Secra Solutions (Spain)
  • 🇮🇳 Bluella (India)
  • 🇹🇼 Global OMC TECH Inc. (Taiwan)
  • 🇵🇪 BCloud Services SAC (Peru)

This growth reflects increasing demand for open, transparent and flexible application delivery solutions across diverse markets.

Looking Ahead to 2026

2026 will begin with an important milestone: the launch of the SkudoManager, the SKUDONET Central Console.

This unified interface will enable teams to manage multiple nodes, services and products from a single control plane, providing global infrastructure visibility, centralized user and policy management, and integrated monitoring of farms, certificates, security and performance.

Alongside this, we will continue expanding SkudoCloud and reinforcing the Enterprise Edition’s core architecture, staying aligned with our principles of transparency, performance and security.

Thank You for Being Part of the Journey

The progress achieved in 2025 has been possible thanks to our customers, partners and community. We look forward to continuing this journey together in 2026, building an application delivery and security platform that evolves with real operational needs.

19 December, 2025 09:13AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

December 16, 2025

hackergotchi for Purism PureOS

Purism PureOS

2025 Year-End Sale

Announcing Purism's Year End Sale. Offering 15% off your purchases through the end of the year. Just use YEAREND as your coupon code for hardware purchases through December 31, 2025!
Please note that orders placed after December 17th will not ship until January.

The post 2025 Year-End Sale appeared first on Purism.

16 December, 2025 09:17PM by Purism

hackergotchi for GreenboneOS

GreenboneOS

Greenbone’s OPENVAS SCAN Now Supports the Proxmox VE Hypervisor

Users appreciate when software can easily integrate into their existing IT environment. For vendors, this means supporting a cross-platform mix of operating systems and infrastructure. We’re excited to expand our virtualization platform support, bringing Proxmox VE into our family of supported hypervisors. This addition enables more flexibility for deploying OPENVAS SCAN in diverse IT environments. […]

16 December, 2025 09:11AM by Greenbone AG

hackergotchi for ZEVENET

ZEVENET

Open-Source Software Licensing in the SaaS Era

Open-source software has been one of the most transformative forces in the technology sector. Operating systems, databases, web servers, and encryption libraries that we now consider essential exist thanks to thousands of developers who chose to release their code so that anyone could study it, modify it, and improve it.

This model has enabled companies and organizations to build advanced solutions without relying exclusively on proprietary software. However, this openness also introduces a recurring challenge: how to ensure the sustainability of open-source software in a world where software is no longer distributed, but consumed as a service.

In this context, recent discussions around well-known projects have brought renewed attention to licenses such as AGPL (Affero General Public License), specifically designed to respond to this shift in how software is delivered and consumed. Beyond individual cases, the underlying message is clear: open-source software requires a balance between those who contribute and those who use it.

What Do We Mean by “Open Source” in 2025?

Open-source software is often confused with “free software” in the sense of cost. In reality, the term refers to a set of fundamental freedoms:

  • The freedom to run the program for any purpose
  • The freedom to study how it works
  • The freedom to modify it
  • The freedom to redistribute it, with or without changes

These freedoms are defined and enforced through licenses. Some licenses prioritize maximum adoption, while others aim to ensure that the ecosystem remains collaborative and sustainable.

During the 1990s and early 2000s, the dominant model was traditional software distribution: installers, CDs, and packaged binaries. In that context, licenses such as GPL ensured that if you redistributed the software, you were required to release your modifications.

Today, this landscape has changed completely: most software is delivered as a service. Companies can benefit from open-source software without ever distributing it; they simply run it on their own infrastructure. This shift is precisely where modern licensing models come into play.

MIT, Apache, GPL, AGPL: What Actually Sets Them Apart

When discussing open-source licenses, we are not just talking about legal text, but about different collaboration models.
Broadly speaking, there are two major families of licenses:

A) Permissive Software licenses (MIT, BSD, Apache 2.0)

Permissive licenses such as MIT, BSD, or Apache 2.0 allow organizations to take the code, modify it, integrate it into proprietary products, and redistribute it without any obligation to return improvements to the community. They are attractive to companies that want minimal legal constraints and maximum flexibility.

Their primary goal is typically to encourage widespread adoption, leaving the decision to contribute entirely up to each organization.

B) Copyleft Software licenses (GPL, LGPL, AGPL)

Copyleft licenses such as GPL, LGPL, and AGPL follow a different logic. If a project benefits from community-driven development, the improvements made on top of that code should remain accessible to the community. The intent is not to restrict commercial use, but to prevent open-source code from being absorbed into closed solutions without any return to the original project.

AGPL (Affero GPL) emerged to address a specific change in context: the transition from distributed software to software offered as a service. Traditional GPL licenses focused on redistribution—if you shipped a binary to a third party, you were required to provide the source code and your modifications.

Permissive vs Copyleft software license

What was the problem?

In the SaaS model, many companies began using open-source software internally or as part of web services without ever “distributing” it. They simply ran it on their own servers and exposed functionality through APIs or web interfaces. In these cases, modifications could be made without being shared.

AGPL extends the scope of reciprocity: if you offer the software to users over a network, your modifications must be made accessible.

When Open Source Forces a Change in Licensing Models

As open-source software has become deeply embedded in enterprise infrastructures, many projects have reached a point where usage grows faster than contributions. This is a natural outcome of how technology is consumed today, particularly in SaaS and cloud environments.

In recent years, several mature projects have adopted AGPL or dual-licensing models after observing recurring patterns:

  • The software becomes a critical component in enterprise environments
  • Internal or commercial forks evolve independently
  • Improvements remain isolated and never reach the core project
  • The cost of ongoing development falls almost entirely on the original maintainers

The result is that, despite widespread adoption, the project lacks the resources required for long-term sustainability.

In this context, adopting reciprocal licenses such as AGPL is not a restrictive move, but a mechanism to preserve the continuity of the project.

The Role of AGPL in Cloud and SaaS Ecosystems

The shift toward cloud and SaaS models has fundamentally transformed the software lifecycle. Many historical licenses were not designed for environments where software operates exclusively as a remote service.

Licenses such as AGPL introduce mechanisms intended to protect the integrity of open-source ecosystems in this new context. Their purpose is not to limit commercial use, but to ensure that significant improvements do not remain locked inside private implementations—especially when the software underpins critical infrastructure.

From a technical and organizational perspective, this approach provides several benefits:

  • Project cohesion, by preventing parallel versions from diverging into incompatible branches
  • Greater transparency and security, as shared improvements can be audited and reviewed collectively
  • Reduced duplication of effort, allowing innovation to build on a common foundation
  • A more balanced ecosystem, where the cost of evolution is not borne by a single actor

As a result, more projects are adopting hybrid models that combine open foundations, reciprocity mechanisms, and commercial offerings that fund ongoing development. This is not an exception, but a natural response to how software is built and maintained today.

Open Source and Long-Term Sustainability

Open-source software remains a fundamental driver of innovation. Its sustainability, however, depends on maintaining a fair balance between those who create and those who rely on it.

As SaaS models continue to redefine software consumption, licensing frameworks evolve alongside them. AGPL and other reciprocal licenses do not aim to restrict adoption, but to ensure that critical projects can continue to grow, improve, and remain viable over time.

Ultimately, the goal is to protect the continuity of the open-source ecosystem in a technical landscape that is changing rapidly.

At SKUDONET, we work closely with open-source technologies in security and application delivery. Understanding how licensing models evolve is a key part of building sustainable infrastructure.

16 December, 2025 07:13AM by Nieves Álvarez

December 15, 2025

hackergotchi for Purism PureOS

Purism PureOS

PureOS Crimson Development Report: November 2025

With our sights set on the beta release milestone, one key component still remains: a way to upgrade from Byzantium to Crimson.

If you're a Linux expert, you might already know how Debian handles release upgrades. Some eager individuals have already upgraded from Byzantium to the Crimson alpha this way. However, we need an easy, graphical upgrade procedure, so everyone with a Librem 5 can get the improvements coming in Crimson.

The post PureOS Crimson Development Report: November 2025 appeared first on Purism.

15 December, 2025 08:59PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Native integration available between Canonical LXD and HPE Alletra MP B10000

The integration combines efficient open source virtualization with high performance, enterprise-grade storage

We are pleased to announce a new, native integration between Canonical LXD and HPE Alletra. This integration brings together Canonical’s securely designed, open source virtualization platform with HPE’s enterprise-grade storage to deliver a simple, scalable, high-performance experience.


Enterprise-grade storage meets efficient open source virtualization

HPE Alletra is designed to deliver mission-critical storage at midrange economics, with a consistent data experience across various cloud environments. With this integration, Canonical LXD and MicroCloud users can now provision and manage Alletra block storage directly through the LXD interface, without the need for any third-party plugins or additional abstraction layers.

The integration enables users to seamlessly create, attach, snapshot, and manage thin-provisioned block volumes as easily as working with local storage, while retaining the full performance, resilience, and enterprise data services of HPE Alletra. 
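As a rough illustration of that workflow, the sketch below uses LXD's generic storage commands. The driver keyword, gateway address, credentials, and volume names are placeholders, not taken from this announcement; consult Canonical's LXD documentation for the exact option names.

```shell
# Create a storage pool backed by the array (driver keyword and
# connection settings below are illustrative placeholders)
lxc storage create alletra-pool <driver> \
    <driver>.gateway=https://array.example.com \
    <driver>.user.name=admin \
    <driver>.user.password=secret

# Create a thin-provisioned block volume and attach it to an instance
lxc storage volume create alletra-pool vol1 --type=block size=20GiB
lxc storage volume attach alletra-pool vol1 my-vm

# Snapshot the volume using the array's data services
lxc storage volume snapshot alletra-pool vol1 snap0
```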

Simplified operations and scalable performance

HPE Alletra’s NVMe-based architecture ensures sub-millisecond latency for demanding workloads, with built-in services such as thin provisioning and data deduplication minimizing storage costs while maintaining consistent performance. Paired with LXD’s lightweight control plane and streamlined UI, users can easily operate their environments, combining the best of open source with enterprise storage functionality.

As demands grow, new volumes and storage capacity can be allocated on the fly. Alletra’s scale-out, modular architecture ensures the platform can expand without disrupting running workloads. 

A foundation for modern infrastructure deployments 

LXD provides a unified open source virtualization platform to run system containers and virtual machines with a focus on efficiency, security, and ease of management. With native support for HPE Alletra, enterprises can now build, deploy, and manage their workloads with enterprise storage guarantees, whether in private clouds, on-premises data centers, or edge environments. 

The combined solution empowers teams to deliver predictable performance for critical and data-intensive workloads while reducing complexity and ensuring agility.

Availability

The integration between LXD and HPE Alletra is available starting with the LXD 6.6 feature release, and requires HPE Alletra WSAPI version 1. LXD currently supports connecting to HPE Alletra storage through the NVMe/TCP or iSCSI protocols. For detailed information, visit Canonical’s LXD documentation.

15 December, 2025 03:57PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2025.12

New SparkyLinux 2025.12 ISO images, codenamed “Tiamat”, are available in the semi-rolling line. This release is based on Debian testing “Forky”. Main changes: – Packages updated from the Debian and Sparky testing repositories as of December 14, 2025. – Linux kernel 6.17.11 (6.18.1, 6.12.62-LTS & 6.6.119-LTS in Sparky repos) – Firefox 140.5.0esr (146.0 in Sparky repos) …

Source

15 December, 2025 09:59AM by pavroo

December 14, 2025

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, to which the Govee device replies.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces and adding of Govee Lights Local will work.
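For the curious, the discovery handshake behind those log lines can be sketched in a few lines of Python. The multicast group and ports follow Govee's published LAN API guide; binding the socket to a specific local IP is exactly what the interface selection fixes on a multi-homed host. Treat this as a sketch, not the integration's actual code.

```python
import json
import socket

# Constants from Govee's public LAN API documentation
SCAN_GROUP = "239.255.255.250"
SCAN_PORT = 4001      # devices listen here for multicast scan requests
REPLY_PORT = 4002     # devices answer to this port on the sender

def scan_payload() -> bytes:
    """JSON scan request defined by the Govee LAN API."""
    return json.dumps(
        {"msg": {"cmd": "scan", "data": {"account_topic": "reserve"}}}
    ).encode()

def discover(bind_ip: str, timeout: float = 2.0) -> list[dict]:
    """Send a scan request from a specific local IP and collect replies.

    On a multi-homed host the request must leave via the NIC that sits
    in the IoT VLAN, hence the explicit IP_MULTICAST_IF and bind().
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(bind_ip))
    sock.bind((bind_ip, REPLY_PORT))
    sock.settimeout(timeout)
    sock.sendto(scan_payload(), (SCAN_GROUP, SCAN_PORT))
    devices = []
    try:
        while True:
            data, _addr = sock.recvfrom(4096)
            devices.append(json.loads(data))
    except socket.timeout:
        pass  # no more replies within the timeout window
    finally:
        sock.close()
    return devices
```

Calling `discover("192.168.107.2")` with the IoT-VLAN address should return the same device announcements the integration logs at debug level.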

14 December, 2025 03:48PM

December 13, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Why you should retire your Microsoft Azure Consumption Commitment (MACC) with Ubuntu Pro

When your organization first signed its Microsoft Azure Consumption Commitment (MACC), it was a strategic step to unlock better pricing and enable cloud growth. However, fulfilling that commitment efficiently requires planning. Organizations often look for ways to retire their MACC that drive strategic value, rather than simply increasing consumption to meet a deadline.

The goal is to meet your commitment while delivering long-term benefits to the business.

With Ubuntu Pro in the Azure Marketplace, you can retire your MACC at 100% of the pretax purchase amount. In practice, this allows you to meet consumption goals on your standard Azure invoice, while securing your open source supply chain and automating compliance.

Turn a spend target into an open source security strategy

Instead of simply increasing consumption to hit a target, effective IT and FinOps teams align their MACC with broader strategic goals. Open source support and security maintenance are a priority for enterprises, as a recent Linux Foundation report shows: 54% of enterprises want long-term guarantees, and 53% expect rapid security patching.

Ubuntu Pro offers both. By choosing software that strengthens your security and operations, you can retire your MACC while funding capabilities your organization prioritizes.

Allocating MACC to Ubuntu Pro is a direct investment in your open source estate:

  • Expanded Security Maintenance (ESM): extend security coverage to the critical open source applications running above the operating system layer. ESM provides up to 15 years of security updates for the OS, plus tens of thousands of packages. You might already see alerts for these missing updates in your Azure portal – learn how to check your exposure in our blog post “A complete security view for every Ubuntu LTS VM on Azure”.
  • Kernel Livepatch: reduce maintenance windows by applying critical kernel patches without requiring a reboot for most workloads.
  • Compliance tooling: access options for CIS hardening and FIPS 140-3 validated cryptographic modules to support meeting compliance and regulatory needs.
  • Optional enterprise support: add enterprise SLAs, direct access to Canonical engineers for break-fix and bug-fix, and guidance on operating Ubuntu and ESM-covered packages on Azure.

By choosing Ubuntu Pro, you convert your MACC spend into a maintained open source foundation across the development lifecycle.

Maximize value and streamline procurement

Retiring your commitment should be financially efficient and administratively simple. While standard Marketplace listings are MACC-eligible, many organizations use private offers to secure tailored commercial terms, like custom pricing or volume discounts, without sacrificing eligibility.

We support both standard private offers and multiparty private offers for rollouts involving resellers in the US/UK. In all cases, checking that your purchase counts toward your commitment is straightforward:

  • Confirm Eligibility: verify the listing or private offer is marked as “Azure benefit-eligible.”
  • Purchase Correctly: execute the transaction in the Azure portal under the tenant and subscription tied to your MACC agreement.

This approach guarantees that every dollar spent satisfies your financial goals while delivering the specific security coverage your organization needs.

Ready to align Ubuntu Pro with your MACC? Talk to our team.

13 December, 2025 12:07AM

December 12, 2025

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2025.12 available

We are proud to announce our new stable release 🚢 version 2025.12, code-named ‘Postwurfsendung’!

Grml is a bootable live system (Live CD) based on Debian. Grml 2025.12 brings you fresh software packages from Debian testing/forky, enhanced hardware support and addresses known bugs from previous releases.

Like in the previous release 2025.08, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️

12 December, 2025 11:12AM