
  • ✇SOC Prime Blog
  • CVE-2026-23918: Critical Apache HTTP/2 Flaw Can Trigger DoS and Possible RCE SOC Prime Team

CVE-2026-23918: Critical Apache HTTP/2 Flaw Can Trigger DoS and Possible RCE

May 6, 2026, 11:13

Apache has patched CVE-2026-23918, a critical flaw in Apache HTTP Server’s HTTP/2 handling that Apache describes as a “double free and possible RCE.” The issue affects Apache HTTP Server 2.4.66 and was fixed in 2.4.67, released on May 4, 2026.

The CVE-2026-23918 vulnerability matters because it can be abused remotely and without authentication. Public reporting says the bug can cause a denial-of-service condition and, under certain conditions, may also open a path to remote code execution, making it one of the most serious issues addressed in Apache’s latest security release.

Apache credits Bartlomiej Dmitruk of striga.ai and Stanislaw Strzalkowski of isec.pl with reporting the flaw. Apache’s own vulnerability page shows it was reported to the security team on December 10, 2025, fixed in source on December 11, 2025, and shipped to users in the 2.4.67 release months later.

CVE-2026-23918 analysis

According to Apache and researcher commentary cited by The Hacker News, the bug is a double-free in mod_http2, specifically in the stream cleanup path. It can be triggered when a client sends an HTTP/2 HEADERS frame and then immediately sends RST_STREAM with a non-zero error code before the stream is fully registered by the multiplexer.

That sequence can cause two callbacks to run in a way that pushes the same stream object into the cleanup array twice. When Apache later destroys the stream entries, memory that has already been freed gets released again. In practical terms, the vulnerability in CVE-2026-23918 is a memory-management flaw that can crash worker processes and, in the right environment, be shaped into code execution.
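The frame sequence described above can be sketched at the byte level. This is a minimal illustration of the HTTP/2 framing layout (per RFC 9113), not an exploit; the helper name and HPACK byte are mine:

```python
import struct

def h2_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Serialize one HTTP/2 frame: 3-byte length, 1-byte type, 1-byte flags, 31-bit stream id."""
    length = struct.pack(">I", len(payload))[1:]              # 24-bit length field
    rest = struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return length + rest + payload

# HEADERS (type 0x1) with END_HEADERS (0x4); a real payload would be an HPACK block.
headers = h2_frame(0x1, 0x4, 1, b"\x82")                      # 0x82 = indexed ":method: GET"
# RST_STREAM (type 0x3) carrying a non-zero error code (0x8 = CANCEL).
rst = h2_frame(0x3, 0x0, 1, struct.pack(">I", 0x8))

print(len(headers), len(rst))  # 10 13 (9-byte frame header plus payload)
```

The reported trigger is sending these two frames back-to-back on one connection, before the multiplexer has fully registered the stream.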

The denial-of-service path appears to be the easiest outcome. The researchers told The Hacker News that one TCP connection and two HTTP/2 frames are enough to crash a worker in default deployments that use mod_http2 with a multi-threaded MPM. They also noted that MPM prefork is not affected, while the possible RCE path depends on an APR configuration using the mmap allocator, which is said to be the default on Debian-derived systems and in the official httpd Docker image.

As for exploitation maturity, public reporting says the researchers built a working CVE-2026-23918 PoC for x86_64 in lab conditions. They also said practical exploitation still needs helpful conditions such as an information leak and favorable memory reuse, so code execution is more demanding than simple service disruption.

At this stage, public details for CVE-2026-23918 point much more clearly to process crashes and worker instability than to widely reproducible RCE in the field. There are also no vendor-published CVE-2026-23918 IOCs, so defenders should focus on version exposure, unexpected worker crashes, and suspicious HTTP/2 reset patterns rather than on a stable signature set.

Explore Detections

CVE-2026-23918 Mitigation

The core fix is to upgrade Apache HTTP Server from 2.4.66 to 2.4.67. Apache’s security advisory explicitly recommends moving to the patched version, and SecurityWeek notes that the release fixes 11 vulnerabilities, including this critical HTTP/2 issue.

For immediate triage, defenders should identify internet-facing systems where mod_http2 is enabled and where threaded MPMs are in use. That is the most practical way to detect CVE-2026-23918 exposure because the attack hinges on HTTP/2 request handling, not on a dropped malware artifact or traditional post-exploitation beacon.
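That triage can be partly automated. The sketch below (the function name is mine) parses `httpd -V` / `httpd -M` style output and flags the publicly described risk profile: Apache 2.4.66 with mod_http2 loaded under a threaded MPM:

```python
import re

def at_risk(version_output: str, modules_output: str) -> bool:
    """Flag hosts matching the reported risk profile for CVE-2026-23918."""
    m = re.search(r"Apache/(\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        return False
    vulnerable_version = tuple(map(int, m.groups())) == (2, 4, 66)
    threaded_mpm = bool(re.search(r"mpm_(event|worker)_module", modules_output))
    http2_loaded = "http2_module" in modules_output
    return vulnerable_version and threaded_mpm and http2_loaded

print(at_risk("Server version: Apache/2.4.66 (Unix)",
              "mpm_event_module (shared)\n http2_module (shared)"))    # True
print(at_risk("Server version: Apache/2.4.66 (Unix)",
              "mpm_prefork_module (shared)\n http2_module (shared)"))  # False
```

The prefork case returns False, consistent with the researchers' note that MPM prefork is not affected.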

If emergency patching is delayed, reducing exposure to HTTP/2 traffic may help shrink the attack surface until updates are applied. The CVE-2026-23918 payload described publicly is not a conventional file or binary but a crafted sequence of HTTP/2 frames designed to force the faulty cleanup path, so network-facing Apache instances should be prioritized first.

From a risk perspective, CVE-2026-23918 affects organizations that rely on Apache HTTP Server 2.4.66 for public web workloads, especially where HTTP/2 is enabled by default or broadly deployed for performance reasons. That includes standard Linux-based web servers as well as containerized deployments using the official Apache image.

FAQ

What is CVE-2026-23918 and how does it work?

It is a critical double-free flaw in Apache HTTP Server’s HTTP/2 handling. A specially timed sequence of HTTP/2 frames can push the same stream object into cleanup twice, leading to worker crashes and potentially enabling remote code execution under favorable conditions.

When was CVE-2026-23918 first discovered?

Apache’s vulnerability page says the issue was reported to the security team on December 10, 2025. The fix landed in source on December 11, 2025, and the patched 2.4.67 release was published on May 4, 2026.

What is the impact of CVE-2026-23918 on systems?

The most immediate impact is denial of service through crashed Apache workers. Public reporting also says the flaw may allow remote code execution, although that path appears more complex and environment-dependent than the crash scenario.

Can CVE-2026-23918 still affect me in 2026?

Yes. Systems can still be exposed in 2026 if they are running Apache HTTP Server 2.4.66 with mod_http2 enabled and have not yet been updated to 2.4.67. The risk is especially relevant for deployments using threaded MPMs.

How can I protect myself from CVE-2026-23918?

Upgrade to Apache HTTP Server 2.4.67 as soon as possible, identify exposed HTTP/2-enabled deployments, and prioritize externally reachable servers for remediation. Where patching cannot happen immediately, reducing HTTP/2 exposure can help lower short-term risk.



The post CVE-2026-23918: Critical Apache HTTP/2 Flaw Can Trigger DoS and Possible RCE appeared first on SOC Prime.

  • ✇SOC Prime Blog
  • CVE-2026-0300: Palo Alto PAN-OS Zero-Day Enables Root RCE on Exposed Firewalls SOC Prime Team

CVE-2026-0300: Palo Alto PAN-OS Zero-Day Enables Root RCE on Exposed Firewalls

May 6, 2026, 09:12

Edge security appliances remain high-value targets, especially when a flaw can be exploited before a patch is widely available. The CVE-2026-0300 vulnerability is a critical buffer overflow in the User-ID Authentication Portal, also known as Captive Portal, in Palo Alto Networks PAN-OS. Palo Alto rates it 9.3/10 when the portal is exposed to the internet or other untrusted networks, and says an unauthenticated attacker can execute arbitrary code with root privileges on affected PA-Series and VM-Series firewalls by sending specially crafted packets.

For teams beginning CVE-2026-0300 analysis, the most important details are the exposure conditions: the issue applies only when the User-ID Authentication Portal is enabled, and Palo Alto says risk is greatly reduced when access is limited to trusted internal IP addresses. The company also says limited exploitation has already been observed against portals exposed to untrusted IP space or the public internet.

In practice, CVE-2026-0300 affects only PA-Series and VM-Series firewalls configured to use the User-ID Authentication Portal. Prisma Access, Cloud NGFW, and Panorama are not impacted, which makes configuration review as important as version review when triaging exposure.

CVE-2026-0300 analysis

The vulnerability in CVE-2026-0300 is a buffer overflow in PAN-OS’s User-ID Authentication Portal service. According to Palo Alto, exploitation does not require credentials or user interaction, and the attacker’s goal is remote code execution as root through specially crafted network packets. SecurityWeek likewise describes the flaw as a zero-day used to hack some firewall models, underscoring that this is not a theoretical issue.

The publicly described CVE-2026-0300 payload is not a malware file dropped to disk but a malicious packet sequence sent to the Captive Portal component. Neither the vendor advisory nor the cited media reports include a public CVE-2026-0300 PoC, but the confirmed in-the-wild exploitation means defenders should assume capable threat actors already understand the triggering conditions well enough to weaponize them.

From a risk standpoint, CVE-2026-0300 detection should focus on externally reachable Authentication Portal instances and signs of attempted access to that service from untrusted networks. Palo Alto’s advisory does not publish packet-level CVE-2026-0300 IOCs, so defenders are better served by identifying exposed portal configurations, narrowing allowed source IP ranges, and prioritizing internet-facing firewalls for remediation.

Explore Detections

CVE-2026-0300 Mitigation

Effective CVE-2026-0300 mitigation starts with reducing exposure before fixes land. Palo Alto recommends either restricting User-ID Authentication Portal access to trusted zones/internal IP addresses or disabling the portal entirely if it is not required. That advice is especially important because, at disclosure, the flaw was still unpatched, with the first wave of fixes expected on May 13, 2026 and additional releases on May 28, 2026 across supported 12.1, 11.2, 11.1, and 10.2 trains.
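The vendor's hardening advice amounts to an allow-list on the portal's reachable sources. A minimal sketch of that check, using example internal CIDRs (the ranges and function name are mine):

```python
import ipaddress

# Example trusted internal ranges; real deployments would use their own CIDRs.
TRUSTED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def portal_access_allowed(client_ip: str) -> bool:
    """Accept Authentication Portal connections only from trusted internal ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETS)

print(portal_access_allowed("10.1.2.3"))     # True
print(portal_access_allowed("203.0.113.5"))  # False
```

In practice this restriction is enforced in the firewall policy itself rather than in application code, but the logic is the same: anything outside the trusted ranges should never reach the portal service.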

To detect CVE-2026-0300 exposure in your environment, verify whether Device > User Identification > Authentication Portal Settings has the portal enabled and determine whether it is reachable from the internet or any untrusted network segment. Palo Alto’s advisory makes clear that customers following this hardening model are at greatly reduced risk compared with deployments that leave the service publicly accessible.

Organizations should also map affected firewalls to Palo Alto’s target fixed versions and prepare an upgrade plan as soon as the relevant release becomes available. Because limited exploitation is already underway, this is a case where configuration hardening and emergency change control should happen in parallel rather than waiting for normal maintenance windows.

FAQ

What is CVE-2026-0300 and how does it work?

CVE-2026-0300 is a critical PAN-OS buffer overflow in the User-ID Authentication Portal (Captive Portal). Palo Alto says an unauthenticated attacker can send specially crafted packets to the service and achieve arbitrary code execution with root privileges on affected PA-Series and VM-Series firewalls.

When was CVE-2026-0300 first discovered?

Palo Alto’s advisory says the issue was discovered in production use and was published on May 5, 2026. The public coverage from The Hacker News and SecurityWeek followed on May 6, 2026.

What is the impact of CVE-2026-0300 on systems?

The impact is severe: unauthenticated remote code execution as root on exposed firewalls. Because the flaw affects security infrastructure at the network edge, successful exploitation could give an attacker privileged control over a highly sensitive enforcement point.

Can CVE-2026-0300 still affect me in 2026?

Yes. Any affected PA-Series or VM-Series firewall can still be at risk in 2026 if it has User-ID Authentication Portal enabled and exposed to untrusted IP addresses or the public internet, especially until the relevant patched PAN-OS release is installed.

How can I protect myself from CVE-2026-0300?

Restrict User-ID Authentication Portal access to trusted internal IPs, disable it if it is unnecessary, and move to Palo Alto’s fixed PAN-OS builds as soon as they are available for your release train. The vendor explicitly says these steps materially reduce risk while active exploitation continues.



The post CVE-2026-0300: Palo Alto PAN-OS Zero-Day Enables Root RCE on Exposed Firewalls appeared first on SOC Prime.

  • ✇SOC Prime Blog
  • CVE-2026-41940: Critical cPanel & WHM Authentication Bypass Exposes Hosting Servers to Admin Takeover SOC Prime Team

CVE-2026-41940: Critical cPanel & WHM Authentication Bypass Exposes Hosting Servers to Admin Takeover

April 30, 2026, 09:47

A newly disclosed CVE-2026-41940 vulnerability in cPanel & WHM has put internet-facing hosting infrastructure under urgent scrutiny. The flaw carries a CVSS score of 9.8 and can let an unauthenticated remote attacker bypass authentication and gain administrative access, while cPanel’s advisory says the issue affects cPanel software, including DNSOnly, across all versions after 11.40.

For defenders, CVE-2026-41940 detection should focus on exposed control panel instances, emergency patch validation, and session-file triage rather than malware hunting. Hosting provider KnownHost said the flaw was being actively exploited in the wild, and that a public technical analysis plus exploit code had already been released by watchTowr, raising the likelihood of broader opportunistic abuse.

The business risk is substantial because successful exploitation can give attackers control over the cPanel host, its configurations and databases, and the websites it manages. A simple Shodan query returned roughly 1.5 million exposed cPanel instances, underscoring how much attack surface may be available to both targeted and mass scanning activity.

CVE-2026-41940 analysis

The bug is described as an authentication bypass rooted in CRLF injection during the login and session-loading process in cPanel & WHM. According to its technical overview, cpsrvd writes a new session file to disk before authentication completes, and an attacker can manipulate the whostmgrsession cookie so attacker-controlled values avoid the expected encryption path and are written into the session file unsanitized.

In practical terms, the vulnerability in CVE-2026-41940 lets an attacker inject arbitrary properties such as user=root into a session file, then trigger a reload so the application treats the session as administrative. That is why this issue is especially dangerous for shared hosting environments and server operators: it is not merely a login bug, but a route to privileged control over the management plane itself.
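The underlying bug class is easy to demonstrate in miniature. The sketch below is a generic illustration (not cPanel's real session format or code): when a cookie value containing a newline is written unsanitized into a line-oriented session file, the injected text parses back as a brand-new property:

```python
def write_session(lines: list, cookie_value: str) -> None:
    """Naive session writer: no CRLF sanitization of the cookie value."""
    lines.append(f"session_id={cookie_value}")

def parse_session(lines: list) -> dict:
    """Parse the stored session back as key=value properties, one per line."""
    props = {}
    for line in "\n".join(lines).splitlines():
        key, _, value = line.partition("=")
        props[key] = value
    return props

store: list = []
write_session(store, "abc123\nuser=root")  # attacker-controlled cookie value
print(parse_session(store))  # {'session_id': 'abc123', 'user': 'root'}
```

The injected `user=root` line survives the round trip as its own property, which is the essence of turning a pre-auth write into administrative session state.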

Unlike a malware dropper, the CVE-2026-41940 payload is a crafted authentication request that abuses newline injection and malformed session values to poison pre-auth session data. A public CVE-2026-41940 PoC was already available through third-party research.

Official details for CVE-2026-41940 are broader than the exploit mechanics alone. cPanel says the issue affects cPanel software including DNSOnly, while patched builds were issued for 11.86.0.41, 11.110.0.97, 11.118.0.63, 11.126.0.54, 11.130.0.19, 11.132.0.29, 11.134.0.20, and 11.136.0.5, alongside WP Squared 136.1.7. TheCyberExpress also highlighted that administrators must verify the installed version and restart cpsrvd after updating.
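A quick way to operationalize the patched-build list above is a branch-aware version comparison. This is a hypothetical helper (the function name and fallback policy are mine); it maps an installed build to its branch's fixed build and compares numerically:

```python
# Patched builds per branch, as listed in the advisory.
PATCHED = {
    "11.86": "11.86.0.41",  "11.110": "11.110.0.97", "11.118": "11.118.0.63",
    "11.126": "11.126.0.54", "11.130": "11.130.0.19", "11.132": "11.132.0.29",
    "11.134": "11.134.0.20", "11.136": "11.136.0.5",
}

def needs_update(installed: str) -> bool:
    """True if the installed cPanel build is below its branch's patched build."""
    parts = tuple(int(p) for p in installed.split("."))
    fixed = PATCHED.get(f"{parts[0]}.{parts[1]}")
    if fixed is None:
        return True  # unknown/unsupported branch: conservatively treat as exposed
    return parts < tuple(int(p) for p in fixed.split("."))

print(needs_update("11.110.0.96"))  # True
print(needs_update("11.136.0.5"))   # False
```

The input would typically come from `/usr/local/cpanel/cpanel -V`, per the verification step cited above.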

Just as importantly, CVE-2026-41940 affects not only directly exposed cPanel & WHM systems but also operational workflows that rely on pinned builds or disabled automatic updates. That matters because cPanel warned that such servers will not auto-update and must be manually remediated as a priority, while unsupported versions may also remain exposed until organizations move to supported release tracks.

Explore Detections

CVE-2026-41940 Mitigation

The vendor’s primary guidance is straightforward: update immediately to one of the fixed versions using /scripts/upcp --force, confirm the installed build with /usr/local/cpanel/cpanel -V, and restart the service with /scripts/restartsrv_cpsrvd. cPanel also says administrators should manually identify systems where updates are disabled or version pinning prevents automatic remediation.

When patching cannot happen right away, cPanel recommends temporary containment steps that include blocking inbound traffic on ports 2083, 2087, 2095, and 2096 at the firewall or stopping cpsrvd and cpdavd. TheCyberExpress echoed the same short-term advice and noted that some providers restricted panel access while broader patch rollout was underway.

To detect CVE-2026-41940, defenders should use the vendor’s filesystem-based detection script and review suspicious entries under /var/cpanel/sessions. cPanel’s script looks for session artifacts such as token_denied appearing together with cp_security_token, authenticated attributes inside pre-auth sessions, suspicious tfa_verified states, and malformed multi-line password values. Those published checks effectively act as CVE-2026-41940 IOCs for post-exploitation triage.
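Two of those published checks can be sketched in a few lines. The field-name markers follow the ones cited publicly, but the flat key=value layout and the function itself are simplifications of mine, not the vendor's script:

```python
import re

def suspicious_session(text: str) -> bool:
    """Flag session text showing contradictory or malformed pre-auth artifacts."""
    # denied-token marker coexisting with a security token in one session
    denied_with_token = "token_denied" in text and "cp_security_token" in text
    # a password value should never spill onto a following non key=value line
    malformed_password = bool(
        re.search(r"^pass\w*=[^\n]*\n(?!\w+=)", text, re.MULTILINE))
    return denied_with_token or malformed_password

print(suspicious_session("user=alice\npassword=ok\nlocale=en"))    # False
print(suspicious_session("password=abc\nline-two-of-password\n"))  # True
```

Any hit from checks like these should be treated as a possible compromise, feeding directly into the incident-response steps below.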

If the script flags likely compromise, cPanel says defenders should purge affected sessions, force password resets for root and all WHM users, audit /var/log/wtmp and WHM access logs, and look for persistence such as cron entries, SSH keys, or backdoors. In other words, CVE-2026-41940 mitigation should be handled as both patching and incident response, not just a routine version upgrade. Note that cPanel’s containment guidance also covers the plain-HTTP service ports 2082 and 2086 in addition to the TLS ports 2083, 2087, 2095, and 2096.

FAQ

What is CVE-2026-41940 and how does it work?

It is a critical cPanel & WHM authentication bypass flaw that stems from session handling and CRLF injection in the login/session-loading flow. Attackers can manipulate pre-auth session data and ultimately create administrator-level access without valid credentials.

When was CVE-2026-41940 first discovered?

The private discovery date has not been publicly disclosed in the sources reviewed. Publicly, cPanel acknowledged the issue in a security advisory published on April 28, 2026.

What is the impact of CVE-2026-41940 on systems?

Successful exploitation can give an unauthenticated attacker administrative access to cPanel & WHM, which can translate into control over the host system, configurations, databases, and hosted websites. In shared hosting environments, that can turn a panel compromise into a full platform compromise.

Can CVE-2026-41940 still affect me in 2026?

Yes. Any exposed system that has not been updated to a fixed build can still be at risk in 2026, especially if automatic updates are disabled, the server is pinned to a vulnerable version, or it is running an unsupported release that has not yet been moved to a supported patched branch.

How can I protect myself from CVE-2026-41940?

Apply the vendor’s patched build immediately, restart cpsrvd, run the detection script against /var/cpanel/sessions, review for suspicious session artifacts, and treat any confirmed hit as a possible compromise requiring session purges, password resets, and log review. Short-term firewall restrictions can reduce exposure, but cPanel makes clear that patching is the priority.



The post CVE-2026-41940: Critical cPanel & WHM Authentication Bypass Exposes Hosting Servers to Admin Takeover appeared first on SOC Prime.

  • ✇SOC Prime Blog
  • CVE-2026-28950: Apple Fixes iOS Flaw That Retained Deleted Notification Data SOC Prime Team

CVE-2026-28950: Apple Fixes iOS Flaw That Retained Deleted Notification Data

April 23, 2026, 11:01

Apple has released security updates to address a Notification Services issue in iOS and iPadOS that could cause alerts marked for deletion to remain stored on a device. The fix was delivered in iOS 26.4.2 / iPadOS 26.4.2 and iOS 18.7.8 / iPadOS 18.7.8, where Apple says the problem was resolved through improved data redaction.

The issue drew attention because it was patched outside Apple’s normal release cycle and was publicly linked to concerns that deleted notification content could remain recoverable on affected devices. Based on public reporting, the flaw may have allowed sensitive message previews to persist in internal notification storage longer than users would reasonably expect.

For defenders and privacy-focused users, the key concern is not traditional remote exploitation but unintended data retention. At the time of disclosure, Apple did not publish exploit samples, telemetry artifacts, or a public proof-of-concept, leaving many technical details for CVE-2026-28950 limited to the vendor advisory and media reporting.

CVE-2026-28950 analysis

Apple describes the issue as a logging-related flaw in Notification Services that allowed notifications intended for deletion to be unexpectedly retained on the device. In practice, this means content visible in alerts, such as message previews or other app-generated text, may continue to exist in local storage after the user assumes it has been removed.

Public reporting connected the patch to earlier forensic concerns involving message content recovered from notification storage on iPhones. While Apple did not explicitly confirm those reports as the direct trigger for the update, the description of the flaw closely aligns with the broader privacy risk described in public coverage.

Explore Detections

The main security impact is on confidentiality rather than integrity or availability. The problem is especially relevant in environments where lock-screen notifications or mobile message previews may expose regulated, operational, or otherwise sensitive information. From that standpoint, the CVE-2026-28950 vulnerability is best understood as a privacy and data-remanence issue rather than a conventional code-execution bug.

Public reporting also leaves several gaps. Apple did not assign a public CVSS score in the cited coverage, and there are no published network indicators or forensic signatures that would support classic threat hunting. As a result, organizations should focus on version validation and privacy controls rather than looking for a known CVE-2026-28950 payload or a fixed list of CVE-2026-28950 IOCs.

CVE-2026-28950 Mitigation

The primary response is to install Apple’s fixed releases across affected iPhone and iPad fleets. Security teams should verify that supported devices have moved to the patched versions and prioritize users who regularly handle confidential communications, executive discussions, legal material, or regulated data on mobile devices.
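Verifying that devices have moved to the fixed releases is a simple version comparison. A hypothetical fleet-compliance check (the function name and the fallback for other majors are mine), using the fixed builds named above (26.4.2 for the iOS/iPadOS 26 train, 18.7.8 for the 18 train):

```python
FIXED = {26: (26, 4, 2), 18: (18, 7, 8)}

def is_patched(version: str) -> bool:
    """True if the reported iOS/iPadOS version is at or above its train's fix."""
    v = tuple(int(p) for p in version.split("."))
    fixed = FIXED.get(v[0])
    if fixed is None:
        return v[0] > 26  # assume trains newer than 26 already carry the fix
    return v >= fixed

for device in ("26.4.1", "26.4.2", "18.7.8", "18.7.7"):
    print(device, is_patched(device))
```

In practice the version strings would come from an MDM inventory export rather than a hard-coded list.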

An additional defense-in-depth step is to reduce the amount of sensitive information shown in notifications. Public reporting notes that Signal users, for example, can limit what appears in alerts by changing notification content settings to display less message text. While that does not replace patching, it can reduce exposure where private data might otherwise remain accessible in notification storage.

From an operational perspective, the most practical path is simple: inventory devices, confirm version compliance, and review notification-preview policies for high-risk user groups. This is a more realistic protection strategy than trying to detect CVE-2026-28950 through conventional threat indicators, because the issue centers on retained local data rather than a well-documented exploit chain.

Additionally, by leveraging SOC Prime’s AI-Native Detection Intelligence Platform backed by top cyber defense expertise, global organizations can adopt a resilient security posture and transform their SOC to always stay ahead of emerging threats tied to zero-day exploitation.

FAQ

What is CVE-2026-28950 and how does it work?

It is an iOS and iPadOS Notification Services flaw that could cause deleted notifications to remain stored on a device. Apple says the problem was caused by a logging issue and addressed it through improved data redaction.

When was CVE-2026-28950 first discovered?

The public sources do not provide a private discovery date. What is confirmed is that Apple released fixes on April 22, 2026.

What is the impact of CVE-2026-28950 on systems?

The main impact is exposure of sensitive notification content that may remain on the device after deletion. This can matter in forensic, privacy, or device-access scenarios where retained alert data could reveal message previews or other confidential content.

Can CVE-2026-28950 still affect me in 2026?

Yes. Devices that have not been updated to the patched releases may still be exposed during 2026, particularly if apps display sensitive content in notifications.

How can I protect myself from CVE-2026-28950?

Install Apple’s updates, verify device compliance, and reduce sensitive notification previews where possible. For privacy-sensitive environments, limiting the amount of message content shown in alerts is a sensible additional safeguard.



The post CVE-2026-28950: Apple Fixes iOS Flaw That Retained Deleted Notification Data appeared first on SOC Prime.

  • ✇SOC Prime Blog
  • CVE-2026-40372: Critical ASP.NET Core Flaw May Let Attackers Gain SYSTEM Privileges SOC Prime Team

CVE-2026-40372: Critical ASP.NET Core Flaw May Let Attackers Gain SYSTEM Privileges

April 23, 2026, 07:55

Microsoft has released out-of-band updates for CVE-2026-40372, a high-impact ASP.NET Core privilege-escalation vulnerability tied to the platform’s Data Protection cryptographic APIs. Public reporting says the flaw carries a CVSS score of 9.1 and could allow an unauthenticated attacker to forge authentication material and ultimately obtain SYSTEM privileges on affected systems.

The issue stands out not only because of its severity, but also because it was serious enough to trigger an emergency release outside the normal patch cycle. BleepingComputer reports Microsoft investigated after customers saw decryption failures following the .NET 10.0.6 update, while The Hacker News notes the bug was reported by an anonymous researcher and fixed in ASP.NET Core 10.0.7.

CVE-2026-40372 Analysis

According to Microsoft details cited by both publications, CVE-2026-40372 stems from improper verification of a cryptographic signature in ASP.NET Core. More specifically, the affected Microsoft.AspNetCore.DataProtection 10.0.0–10.0.6 NuGet packages could compute the HMAC validation tag over the wrong bytes of the payload and then discard the computed hash in some cases. That breaks the trust model behind protected application data and opens the door to forged payloads that pass authenticity checks.
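The bug class described above can be illustrated in miniature. This sketch is generic (a toy key and format, not ASP.NET Core's actual Data Protection code): a correct verifier recomputes the HMAC over the real payload, while the buggy one computes a MAC over the wrong bytes and discards the result, so tampered payloads still "validate":

```python
import hashlib
import hmac

KEY = b"demo-key"  # illustrative key, not real key-ring material

def protect(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag over the payload."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_correct(blob: bytes) -> bool:
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

def verify_buggy(blob: bytes) -> bool:
    # bug class: the MAC is computed over the wrong bytes, then discarded
    hmac.new(KEY, b"wrong bytes", hashlib.sha256).digest()  # computed, ignored
    return True

blob = protect(b"user=alice")
forged = b"user=admin" + blob[-32:]  # tampered payload, original tag reused
print(verify_correct(forged), verify_buggy(forged))  # False True
```

The forged blob is correctly rejected only when the tag is recomputed over the bytes actually being trusted, which is exactly the property the affected packages broke.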

The attack surface is narrower than a generic “all ASP.NET Core apps are vulnerable” headline might suggest. The Hacker News says successful exploitation depends on three conditions: the application must use Microsoft.AspNetCore.DataProtection 10.0.6 from NuGet either directly or through a dependent package, the NuGet copy must actually be loaded at runtime, and the application must run on Linux, macOS, or another non-Windows operating system.
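The first of those three conditions can be triaged from an application's deps.json. A hypothetical sketch (the function name and simplified deps.json shape are mine; runtime loading and a non-Windows host remain separate conditions to confirm):

```python
import json

def references_vulnerable_package(deps_json: str) -> bool:
    """Check a deps.json for Microsoft.AspNetCore.DataProtection 10.0.0-10.0.6."""
    deps = json.loads(deps_json)
    for libraries in deps.get("targets", {}).values():
        for name in libraries:
            pkg, _, ver = name.partition("/")
            if pkg == "Microsoft.AspNetCore.DataProtection":
                v = tuple(int(p) for p in ver.split("."))
                if (10, 0, 0) <= v <= (10, 0, 6):
                    return True
    return False

vulnerable = '{"targets": {"net10.0": {"Microsoft.AspNetCore.DataProtection/10.0.6": {}}}}'
patched = '{"targets": {"net10.0": {"Microsoft.AspNetCore.DataProtection/10.0.7": {}}}}'
print(references_vulnerable_package(vulnerable))  # True
print(references_vulnerable_package(patched))     # False
```

Because the package can arrive transitively, scanning deps.json (which records resolved dependencies) is more reliable than grepping project files alone.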

Explore Detections

If those conditions are met, the impact can be severe. The affected validation routine may let an attacker forge payloads and decrypt previously protected values stored in items such as authentication cookies, antiforgery tokens, TempData, and OpenID Connect state. Microsoft also says exploitation could enable file disclosure and data modification, although it does not affect availability.

The most dangerous enterprise scenario is privilege escalation through trust abuse rather than noisy code execution. If an attacker can authenticate as a privileged user during the vulnerable window, the application may issue legitimately signed follow-on artifacts to the attacker, including refreshed sessions, API keys, or password-reset links. Those artifacts can remain valid even after the package is upgraded unless defenders also rotate the Data Protection key ring.

CVE-2026-40372 Mitigation

The primary fix is straightforward: update Microsoft.AspNetCore.DataProtection to version 10.0.7 and redeploy affected applications. Microsoft’s guidance, as quoted by BleepingComputer, is to apply the new package as soon as possible so the broken validation routine is corrected and forged payloads are rejected going forward.

That said, patching alone may not fully close the exposure. Both reports note that tokens issued during the vulnerable period can remain valid after upgrading unless the Data Protection key ring is rotated. In practice, organizations should treat key rotation as part of the remediation workflow, especially for internet-facing apps that rely heavily on cookies, antiforgery tokens, password-reset flows, or other signed application state. That last prioritization is an operational inference based on the affected token types and exploit preconditions.

A practical response plan is to identify non-Windows ASP.NET Core applications that loaded the vulnerable NuGet package at runtime, patch them to 10.0.7, rotate the Data Protection key ring, and review whether privileged sessions or other signed artifacts may have been issued while the application was exposed. Where feasible, teams should also consider expiring or reissuing sensitive session material after remediation. The package-and-runtime triage criteria come directly from Microsoft’s published conditions; the token review and reissuance step is a reasonable defensive inference from Microsoft’s warning that legitimately signed tokens may survive the upgrade.

Additionally, by leveraging SOC Prime’s AI-Native Detection Intelligence Platform backed by top cyber defense expertise, global organizations can adopt a resilient security posture and transform their SOC to always stay ahead of emerging threats tied to zero-day exploitation.

FAQ

What is CVE-2026-40372 and how does it work?

CVE-2026-40372 is an ASP.NET Core privilege-escalation flaw in the Data Protection cryptographic APIs. The affected packages can validate the wrong bytes and discard the computed HMAC in some cases, which can let attackers forge protected payloads and abuse application trust mechanisms such as authentication cookies and other signed state.

When was CVE-2026-40372 first discovered?

The precise private discovery date is not stated in the two reports. What is public is that Microsoft released out-of-band fixes on April 22, 2026, and BleepingComputer says Microsoft began investigating after customers reported decryption failures following the .NET 10.0.6 update. The Hacker News also says an anonymous researcher was credited with reporting the flaw.

What is the impact of CVE-2026-40372 on systems?

Successful exploitation can allow forged payloads, disclosure of protected data, file disclosure, data modification, and privilege escalation up to SYSTEM on affected systems. The reports also note that availability is not impacted.

Can CVE-2026-40372 still affect me in 2026?

Yes. Systems may still be exposed in 2026 if they continue to run the vulnerable Data Protection package under the affected conditions, especially on Linux, macOS, or other non-Windows hosts. Even after patching, artifacts issued during the vulnerable window may remain valid until the Data Protection key ring is rotated.

How can I protect myself from CVE-2026-40372?

Update Microsoft.AspNetCore.DataProtection to 10.0.7, redeploy affected applications, rotate the Data Protection key ring, and review whether sensitive signed artifacts such as authentication cookies, refresh sessions, API keys, or reset links should be invalidated or reissued. The package update and key-ring rotation are directly supported by Microsoft’s guidance; invalidation and reissuance are prudent follow-on actions based on the risk Microsoft described.



The post CVE-2026-40372: Critical ASP.NET Core Flaw May Let Attackers Gain SYSTEM Privileges appeared first on SOC Prime.


DetectFlow: Deploying Detections at Scale Without the Engineering Overhead

April 22, 2026, 08:33
DetectFlow Cuts SIEM Costs and Speeds Threat Detection

The Problem: Achieving Threat Detections at Scale  

At SOC Prime, we have spent over a decade making detection engineering easier for organizations of every size. Each year, as threats multiply and environments grow more complex, the traditional approach puts SOC Managers in an impossible position — responsible for coverage they cannot achieve with the tools and team they have. DetectFlow offers a path to deploying detections at scale without the engineering overhead. Here is what it solves:

  • Your team is drowning in noise, not finding threats: False positives overwhelm analysts and real signals get missed. Alert fatigue isn’t a people problem, it’s a systems problem
  • Your detection coverage has hard limits you can’t engineer around: Running under 512 rules means your team has blind spots across the MITRE ATT&CK matrix that no amount of headcount can close
  • By the time your team sees a threat, the attacker has already moved: Batch processing creates detection delays measured in minutes to hours, turning a containable incident into a breach
  • Your SIEM budget is consumed by data you never needed: Forced ingestion of raw logs at terabyte scale drives storage costs that are impossible to justify to leadership

 

DetectFlow Applied: Cut Costs and add Speed

DetectFlow fundamentally changes the economics and speed of threat detection. Rather than ingesting raw chaos and sorting it out later, DetectFlow:

  • compresses terabytes of raw log data into gigabytes of clean, labeled events, instantly and before anything touches your SIEM
  • runs detection in-flight at wire speed, applying 50,000+ Sigma rules in real time and driving mean time to detect down to 0.005–0.01 seconds
  • governs and filters the entire data pipeline before ingestion, so your SIEM only receives normalized, tagged, and pre-validated events, dramatically optimizing your SIEM spend: you’re paying to store and analyze signal, not noise
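DetectFlow’s actual engine applies Sigma rules to Kafka streams with Apache Flink; as a toy illustration of the in-flight model, the Python sketch below applies a couple of invented rule predicates to each event before forwarding, so only tagged signal moves downstream. The rule format and field names here are made up for the example.

```python
# Toy illustration of in-flight detection: rules run against each event
# before it reaches the SIEM, so only tagged signal is forwarded.
# Rule schema and event fields are invented for this sketch.
RULES = [
    {"name": "mshta_remote_hta", "field": "process", "contains": "mshta.exe"},
    {"name": "ps_encodedcommand", "field": "process", "contains": "-EncodedCommand"},
]

def process_stream(events):
    forwarded = []
    for event in events:
        tags = [r["name"] for r in RULES if r["contains"] in event.get(r["field"], "")]
        if tags:                      # signal: enrich with tags and forward
            event["tags"] = tags
            forwarded.append(event)
        # no match: event is filtered out before ingestion
    return forwarded

stream = [
    {"process": "mshta.exe http://x/payload.hta"},
    {"process": "explorer.exe"},      # noise, dropped pre-SIEM
]
print(process_stream(stream))
```

The production version differs in every practical dimension, including rule volume, stateful correlation, and throughput, but the control-flow idea is the same: detection and filtering happen on the stream, not after storage.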

 

 

The Endgame: Attack Chains That Tell the Full Story

Where DetectFlow truly separates itself is in how it surfaces what matters. Instead of handing analysts thousands of disjointed, low-context alerts to manually correlate, DetectFlow: 

  • collapses that noise into a prioritized queue of high-probability Attack Chains, complete with AI-generated executive summaries that condense gigabytes of adversary activity into a clear brief
  • runs threat inference in real time, automatically correlating activity across different vectors and hostnames without requiring any manual investigation
  • delivers a decision, not a list of alerts: any analyst, regardless of experience level, can immediately understand the full scope of a breach and move directly to remediation

To learn more about DetectFlow, head to our overview page.

FAQ

How does DetectFlow reduce SIEM costs?

DetectFlow sits upstream of your SIEM, processing raw event streams before they are ever ingested. It compresses terabytes of raw log data down to roughly 7% of the original volume, filtering out the noise and passing only normalized, threat-tagged events into your SIEM. The result is that your SIEM licensing and storage costs are calculated against signal, not raw volume. For organizations ingesting at scale, that shift alone can be the difference between a sustainable security budget and one that is impossible to defend to a CFO.
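To make the economics concrete, here is a back-of-the-envelope model built on the roughly 7% figure above. The daily volume and per-GB price are assumptions for illustration only, not vendor pricing.

```python
# Back-of-the-envelope SIEM ingestion cost model using the ~7% figure.
# Daily volume and per-GB price are illustrative assumptions.
raw_tb_per_day = 10
compression_ratio = 0.07          # ~7% of original volume reaches the SIEM
price_per_gb = 2.50               # hypothetical ingestion price, USD

raw_gb = raw_tb_per_day * 1024
ingested_gb = raw_gb * compression_ratio
print(f"without pipeline: ${raw_gb * price_per_gb:,.0f}/day")
print(f"with pipeline:    ${ingested_gb * price_per_gb:,.0f}/day")
```

Under these assumed numbers, ingestion-based spend drops by more than an order of magnitude, which is the "signal, not raw volume" point in budget terms.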

What is MTTD and how does DetectFlow improve it?

MTTD (Mean Time to Detect) is the measure of how long it takes your team to identify an active threat after it begins. Traditional SIEM architectures rely on batch processing, which means detection queries run on a delay, often 15 minutes or more after an event occurs. DetectFlow applies detection rules in real time, directly against the live data stream, reducing MTTD to between 0.005 and 0.01 seconds. In practical terms, that is the difference between catching an attacker in the first move and discovering a breach after lateral movement has already occurred.

Why can’t we just add more detection rules to our SIEM?

Most enterprise SIEMs have a hard operational ceiling on how many rules can run simultaneously. Microsoft Sentinel, for example, caps at 512. Beyond the rule limit, every additional rule adds query overhead, slows detection, and increases costs. DetectFlow runs detection at the pipeline layer using Apache Flink, where it can apply tens of thousands of Sigma rules simultaneously without those constraints. That is what allows your team to close MITRE ATT&CK coverage gaps that are simply not addressable inside a SIEM architecture.

Does DetectFlow replace our existing SIEM?

No. DetectFlow integrates with your existing SIEM, it does not replace it. It sits in the Kafka pipeline layer before ingestion, and your SIEM receives cleaner, pre-enriched, threat-tagged events through the same connectors it already uses. Your analysts continue working in familiar dashboards. The change they notice is better data quality, fewer false positives, and faster investigations, not a new tool to learn.

What does “Attack Chains” mean and why does it matter for my team?

Attack Chains is how DetectFlow surfaces correlated threats rather than individual alerts. Instead of passing thousands of isolated events to your analysts for manual investigation, DetectFlow uses AI to collapse related activity across different vectors and hostnames into a single prioritized queue, with a three-sentence executive summary of what the adversary is doing. For a SOC Manager, that means your team is triaging a coherent story about an attack in progress, not a pile of disconnected signals that require hours of investigation before the picture becomes clear.



The post DetectFlow: Deploying Detections at Scale Without the Engineering Overhead appeared first on SOC Prime.

UAC-0247 Attack Detection: AGINGFLY Malware Targets Hospitals, Local Governments, and FPV Operators in Ukraine

April 16, 2026, 09:35

Phishing remains one of the most effective tactics in the cybercriminal playbook, particularly when attackers exploit urgent humanitarian themes, trusted online resources, and legitimate system tools to increase victim engagement. Europol also notes that phishing continues to serve as a primary delivery vector for data-stealing malware. This pattern is clearly reflected in the latest activity tracked by CERT-UA, where threat actors used humanitarian-aid themed lures and multi-stage malware delivery to target Ukrainian organizations.

In a CERT-UA article, researchers described a UAC-0247 campaign targeting local self-government bodies, communal healthcare institutions, and likely representatives of Ukraine’s Defense Forces. The operation ultimately deployed AGINGFLY and related malicious tools, combining phishing, deceptive web delivery, and abuse of legitimate Windows utilities to establish access and support follow-on compromise.

CERT-UA’s latest reporting highlights another wave of phishing-driven intrusions targeting Ukraine’s civilian and potentially defense-adjacent sectors. In the campaign described in the article, attackers used humanitarian-aid themed emails to lure victims into opening malicious content that eventually deployed AGINGFLY, a malware family associated with remote access, credential theft, and follow-on post-compromise activity. The observed targets included local self-government bodies, communal healthcare institutions, including clinical and emergency hospitals, and likely individuals connected to FPV drone operations.

Sign up for the SOC Prime Platform to proactively defend your organization against UAC-0247 attacks. Just press Explore Detections below and access a relevant detection rule stack, enriched with AI-native CTI, mapped to the MITRE ATT&CK® framework, and compatible with a wide range of SIEM, EDR, and Data Lake technologies.

Security teams can search the Threat Detection Marketplace using the “UAC-0247” tag to identify relevant detections and monitor related content updates. Cyber defenders can also rely on Uncoder AI to convert raw threat intelligence into performance-optimized queries, document and improve rule logic, and generate Attack Flows based on the latest CERT-UA reporting.

Explore Detections

Analyzing UAC-0247 Attacks Delivering AGINGFLY via Humanitarian-Themed Phishing Lures

According to CERT-UA, the attack chain began with phishing emails disguised as humanitarian aid proposals. Victims were prompted to click a link that redirected either to a legitimate website compromised through cross-site scripting (XSS) or to a fake website generated with AI tools. In both scenarios, the objective was to persuade the victim to download and open an archive containing a malicious LNK file.

Once launched, the shortcut file abused mshta.exe to retrieve and execute a remote HTA file. The HTA displayed a decoy form to distract the victim while simultaneously downloading an executable that injected shellcode into a legitimate process, such as RuntimeBroker.exe. CERT-UA also noted that more recent stages of the campaign relied on a two-stage loader, with the second stage using a proprietary executable format and the final payload additionally compressed and encrypted to complicate detection and analysis.

Among the next-stage components identified in the campaign were RAVENSHELL, which acted as a reverse-shell style stager, SILENTLOOP, a PowerShell-based tool capable of executing commands and obtaining command-and-control data, and AGINGFLY, the primary malware family used in the operation. CERT-UA-linked reporting indicates that AGINGFLY is designed for remote control, data theft, and follow-on compromise activity.

The campaign also supported credential theft, reconnaissance, and lateral movement. Investigators observed the use of tooling to extract data from Chromium-based browsers, access messaging-related data, scan internal networks, and tunnel traffic across compromised environments. In one of the investigated cases, forensic evidence suggested that representatives of Ukraine’s Defense Forces may have been targeted using malicious ZIP archives distributed via Signal and designed to deploy AGINGFLY through DLL side-loading.

To reduce exposure to this activity, CERT-UA recommends restricting the execution of risky file types such as LNK, HTA, and JS, while also limiting or closely monitoring the use of native Windows tools frequently abused in the infection chain, including mshta.exe, powershell.exe, and wscript.exe.
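As a rough illustration of that guidance, the sketch below flags process-creation records where one of the named utilities appears with a remote URL on its command line. The field names follow common EDR conventions but are assumptions, not a specific product schema; real coverage would come from curated Sigma rules rather than a filter this simple.

```python
# Hedged sketch: flag process-creation records where Windows utilities
# named in CERT-UA's guidance fetch remote content. Field names
# ("image", "command_line") are assumed, not a specific product schema.
WATCHED = {"mshta.exe", "powershell.exe", "wscript.exe"}

def suspicious(record: dict) -> bool:
    # take the executable name from the full image path
    image = record.get("image", "").lower().rsplit("\\", 1)[-1]
    cmdline = record.get("command_line", "").lower()
    return image in WATCHED and ("http://" in cmdline or "https://" in cmdline)

events = [
    {"image": r"C:\Windows\System32\mshta.exe",
     "command_line": "mshta.exe https://example.test/form.hta"},
    {"image": r"C:\Windows\System32\notepad.exe", "command_line": "notepad.exe"},
]
print([suspicious(e) for e in events])  # [True, False]
```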

MITRE ATT&CK Context

Leveraging MITRE ATT&CK helps contextualize the latest UAC-0247 activity. Based on the reported TTPs, the most relevant techniques likely include Phishing: Spearphishing Link (T1566.002), Command and Scripting Interpreter (T1059), Process Injection (T1055), Application Layer Protocol: Web Protocols (T1071.001) and WebSockets for C2, Credential Access, and Lateral Movement via tunneling and proxying tools. This mapping reflects the phishing lures, deceptive web delivery, LNK-to-HTA execution, shellcode injection, AGINGFLY deployment, and follow-on credential theft and internal reconnaissance.



The post UAC-0247 Attack Detection: AGINGFLY Malware Targets Hospitals, Local Governments, and FPV Operators in Ukraine appeared first on SOC Prime.

UAC-0255 Attack Detection: Threat Actors Impersonate CERT-UA to Infect Ukrainian Public and Private Sector Organizations With AGEWHEEZE RAT

April 1, 2026, 09:50
UAC-0255 Attack Detection

Phishing remains one of the most effective tools in the cybercriminal arsenal, especially when threat actors abuse the credibility of trusted institutions and familiar digital services to increase victim interaction. In late March 2026, CERT-UA revealed a phishing campaign tracked as UAC-0255 in which attackers impersonated the agency and attempted to infect organizations across Ukraine’s public and private sectors with the AGEWHEEZE RAT.

Detect UAC-0255 Attacks Covered in CERT-UA#21075

Europol notes that phishing remains the main distribution vector for data-stealing malware, a reminder that email- and URL-driven social engineering stays central to malware delivery. The same pattern is visible across the phishing activity CERT-UA has been documenting against Ukraine throughout 2026.

Earlier this year, CERT-UA reported a UAC-0190 campaign targeting the Ukrainian Armed Forces with the PLUGGYAPE backdoor, and later disclosed UAC-0252 activity in which emails impersonating central executive authorities and regional administrations lured victims into running SHADOWSNIFF and SALATSTEALER payloads. The latest UAC-0255 attack covered in CERT-UA#21075 alert fits the same broader trend, with threat actors now abusing CERT-UA’s own identity to make the lure more convincing and expand targeting across both public and private sector organizations. 

Register for the SOC Prime Platform to proactively detect UAC-0255 and similar attacks at the earliest stages possible. Just press Explore Detections below and access a relevant detection rule stack, enriched with AI-native CTI, mapped to the MITRE ATT&CK® framework, and compatible with multiple SIEM, EDR, and Data Lake technologies.

Explore Detections

Security experts can also use the “CERT-UA#21075” tag based on the relevant CERT-UA alert identifier to search for the detection stack directly and track any content changes. For more rules to detect adversary-related attacks, cyber defenders can search the Threat Detection Marketplace library using the “UAC-0255” tag.

Cybersecurity professionals can also rely on Uncoder AI to analyze threat intelligence in real time, generate Attack Flows, Sigma rules, simulations and validations, design detections in 56 languages, and create custom agentic workflows. Visit https://socprime.ai/ to learn more.

Analyzing UAC-0255 Attacks Impersonating CERT-UA to Deploy AGEWHEEZE

On March 26–27, 2026, CERT-UA identified a phishing campaign in which attackers impersonated the agency and urged recipients to download password-protected archives from the Files.fm service, including “CERT_UA_protection_tool.zip” and “protection_tool.zip.” The archives contained malicious content presented as specialized software to be installed by targeted organizations. 

Malicious emails were distributed broadly across Ukraine and targeted government organizations, medical centers, security firms, educational institutions, financial organizations, software development companies, and other entities, highlighting the campaign’s reach across both public and private sectors.

The CERT-UA#21075 alert also details the discovery of the fraudulent website cert-ua[.]tech, which reused materials from the official cert.gov.ua website and included instructions for downloading the fake protection tool. This helped the attackers reinforce the legitimacy of the lure and increase the chances of user interaction by abusing trust in Ukraine’s Computer Emergency Response Team.

The executable offered for installation was determined to be a multifunctional remote access malware strain tracked by CERT-UA as AGEWHEEZE. AGEWHEEZE is a Go-based RAT that supports a broad set of remote administration capabilities. In addition to standard functions such as command execution and file management, the malware can stream screen content, emulate mouse and keyboard input, interact with the clipboard, manage processes and services, and open URLs on the compromised host.

The malware’s command-and-control infrastructure was hosted on the network of French provider OVH (AS16276). On port 8443/tcp, researchers observed a web page titled “The Cult” containing an authentication form, while the HTML source included Russian-language strings indicating that access to the service was blocked. CERT-UA also found that the associated self-signed SSL certificate had been created on March 18, 2026, and that the Organization field contained the value “TVisor.”

During a review of the AI-generated cert-ua[.]tech website, CERT-UA found embedded references to the CyberSerp Telegram channel, including the phrase “With Love, CYBER SERP.” On March 28, 2026, the same Telegram channel publicly claimed responsibility for the attack, helping remove uncertainty around the technical attribution. Based on these findings, CERT-UA assigned the activity the identifier UAC-0255.

Despite the breadth of targeting, CERT-UA assessed the attack as unsuccessful. Investigators identified only a few infected personal devices belonging to employees of educational institutions, and the response team provided the necessary practical and methodological assistance.

MITRE ATT&CK Context

Leveraging MITRE ATT&CK offers in-depth insight into the latest UAC-0255 phishing campaign impersonating CERT-UA. The relevant Sigma rules map to the following ATT&CK tactics, techniques, and sub-techniques:

  • Initial Access: Phishing: Spearphishing Attachment (T1566)
  • Execution: Scheduled Task/Job: Scheduled Task (T1053.005)
  • Defense Evasion: Obfuscated Files or Information (T1027)
  • Command and Control: Application Layer Protocol: Web Protocols (T1071.001), Ingress Tool Transfer (T1105)





The post UAC-0255 Attack Detection: Threat Actors Impersonate CERT-UA to Infect Ukrainian Public and Private Sector Organizations With AGEWHEEZE RAT appeared first on SOC Prime.


Telemetry Pipeline: How It Works and Why It Matters in 2026

March 25, 2026, 08:31
Telemetry Data Pipeline

A telemetry pipeline has become a core layer in modern security operations because teams no longer send data from applications, infrastructure, and cloud services straight into a single backend and hope for the best. In 2026, most environments are distributed across cloud, hybrid, and on-prem systems, which means more services, more data sources, more formats, and more operational complexity for teams that already struggle to keep visibility, control costs, and respond quickly. 

Splunk’s State of Security 2025 found that 46% of security professionals spend more time maintaining tools than defending the organization. Cisco’s research adds that 59% deal with too many alerts, 55% face too many false positives, and 57% lose valuable investigation time because of gaps in data management. When too much raw telemetry flows into the stack without filtering, enrichment, or routing, the result is higher bills, slower investigations, and more noise for already stretched teams.

That is why telemetry pipelines are gaining momentum. They give organizations a control layer to normalize, enrich, route, and govern telemetry before it reaches SIEM, observability, or storage platforms. What began primarily as a way to control volume and cost is quickly becoming a must for modern security operations. Gartner suggests that by 2027, 40% of all log data will be processed through telemetry pipeline products, up from less than 20% in 2024.

As that model matures, the next logical step is not just to manage telemetry better, but to make it useful earlier. If teams are already adding a pipeline to reduce noise, control spend, and improve routing, it makes sense to move part of the detection process closer to the stream itself rather than waiting for every event to land in downstream tools first. Solutions like SOC Prime’s DetectFlow act as an additional detection layer running directly on the stream. Instead of using the pipeline only for transport and optimization, DetectFlow applies tens of thousands of Sigma rules on live Kafka streams with Apache Flink, tags and enriches events in flight, and helps teams act on higher-value signals much earlier in the flow.

What Is Telemetry?

Before talking about telemetry pipelines, it is important to define telemetry itself.

Telemetry is the evidence systems leave behind while they run. It shows how applications, infrastructure, and services behave in real time, including performance, failures, usage, and health. 

For enterprises, that evidence is valuable because it shows what users are actually experiencing, where bottlenecks form, when failures begin, and where suspicious activity starts to flicker. For security teams, telemetry is even more important because it becomes the raw material for detection, investigation, hunting, and response.

Put differently, telemetry is the trail of digital footprints your environment leaves behind. Useful on its own, but much more powerful when it is organized before the tracks disappear into the mud.

What Are the Main Types of Telemetry Data?

Most teams work with four main telemetry categories grouped under the MELT model: Metrics, Events, Logs, and Traces.

Metrics

Metrics are numerical measurements collected over time, such as CPU usage, memory consumption, latency, throughput, request volume, and error rate. They help teams track system health, identify trends, and spot anomalies before they become visible outages.

Events

Events capture notable actions or state changes inside a system. They usually mark something important that happened, such as a user login, a deployment, a configuration update, a purchase, or a failover. Events are especially useful because they often connect technical activity to business activity.

Logs

Logs are timestamped records of discrete activity inside an application, system, or service. They provide detailed evidence about what happened, when it happened, and often who or what triggered it. Logs are essential for debugging, troubleshooting, auditing, and security investigations.

Traces

Traces show the end-to-end path of a request as it moves across different services and components. They help teams understand how systems interact, how long each step takes, and where delays or failures occur. Traces are especially valuable in distributed systems and microservices environments.

Some platforms also break telemetry into more specific categories, such as requests, dependencies, exceptions, and availability signals. These help teams understand incoming operations, external service calls, failures, and uptime. 
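The four MELT categories can be modeled as distinct record types so downstream tooling can route each class to the right backend. The Python sketch below is purely illustrative; the field choices are assumptions, not any standard schema.

```python
from dataclasses import dataclass
import time

# Minimal illustration of the MELT model: one record type per signal
# class. Field choices are illustrative, not a standard schema.
@dataclass
class Metric:          # numeric measurement over time
    name: str
    value: float
    ts: float

@dataclass
class Event:           # notable action or state change
    action: str
    ts: float

@dataclass
class Log:             # timestamped record of discrete activity
    message: str
    ts: float

@dataclass
class Trace:           # end-to-end path of a request across services
    trace_id: str
    spans: list

now = time.time()
signals = [
    Metric("cpu.usage", 0.72, now),
    Event("user.login", now),
    Log("GET /health 200", now),
    Trace("abc123", ["gateway", "auth", "db"]),
]
print([type(s).__name__ for s in signals])  # ['Metric', 'Event', 'Log', 'Trace']
```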

Telemetry Data Pros and Cons

Telemetry data can be one of the most valuable assets in modern operations, but only when it is managed with purpose. Done well, it gives teams a real-time view of how systems behave, how users interact with services, and where risks or inefficiencies begin to form. Done poorly, it becomes just another stream of noisy, expensive data.

Telemetry Data Benefits

The biggest advantage of telemetry is visibility. By collecting and analyzing metrics, logs, traces, and events, teams can see what is happening across applications, infrastructure, and services in real time.

Key benefits include:

  • Real-time visibility into system health, performance, and user activity
  • Proactive issue detection by spotting anomalies before they turn into outages or incidents
  • Improved operational efficiency through automated monitoring and faster workflows
  • Faster troubleshooting by giving teams the context needed to identify root causes quickly
  • Better decision-making through data-backed insights for product, operations, and security teams

To get the full value, telemetry needs to be consolidated and handled consistently. A unified telemetry layer helps reduce mess across tools, improves scalability, and makes data easier to analyze and act on.

Telemetry Data Challenges

Telemetry also comes with real challenges, especially as data volumes grow. The most common ones include:

  • Security and privacy risks when sensitive data is collected or stored without strong controls
  • Legacy system integration across different formats, sources, and older technologies
  • Rising storage and ingestion costs when too much low-value data is kept in expensive platforms
  • Tool fragmentation that makes correlation and investigation harder
  • Interoperability issues when systems do not follow consistent standards or schemas

This is exactly why telemetry strategy matters. The goal is not to collect more data for the sake of it, but to collect the right data, shape it early, and route it where it creates the most value. In cybersecurity, that difference is critical. The right telemetry can speed up detection and response, while unmanaged telemetry can bury important signals under cost and noise.

How to Analyze Telemetry Data 

The best way to analyze telemetry data is to stop treating analysis as the last step. In practice, good analysis starts much earlier, with clear goals, structured collection, smart routing, and storage policies that keep useful data accessible without flooding downstream tools. 

Define Goals

Start with the question behind the data. Are you trying to improve performance, reduce MTTR, monitor customer experience, detect security threats, or control SIEM costs? Once that is clear, decide which signals matter most and which KPIs will show progress. For a product team, that may be latency and error rate. For a SOC, it may be detection coverage, false positives, and investigation speed. This is also the stage to set privacy and compliance boundaries so teams know what data should be collected, masked, or excluded from the start. 

Configure Collection

Once goals are clear, configure the tools that will collect the right telemetry from the right places. That usually means deciding which applications, hosts, cloud services, APIs, endpoints, and identity systems should send logs, metrics, traces, and events. It also means setting practical rules for sampling, field selection, filtering, and schema consistency.

Shape and Route the Data 

Before data reaches SIEM, observability, or storage platforms, it should be shaped to fit the goal. That can mean normalizing records into consistent schemas, enriching events with identity or asset context, filtering noisy data, redacting sensitive fields, and routing each signal to the destination where it creates the most value.
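The shaping steps above (normalize, enrich, redact, route) can be sketched in a few lines. The schema, asset table, hashing choice, and routing policy here are invented for illustration; a production pipeline would apply policy-driven versions of each step.

```python
import hashlib

# Hedged sketch of the shape-and-route stage: normalize field names,
# redact a sensitive field, enrich with asset context, pick a destination.
# The schema, asset table, and routing rule are invented for illustration.
ASSETS = {"10.0.0.5": {"owner": "payments", "tier": "prod"}}

def shape(raw: dict) -> dict:
    event = {  # normalize vendor-specific keys into one schema
        "src_ip": raw.get("SourceIp") or raw.get("src"),
        "user": raw.get("User", "unknown"),
        "action": raw.get("EventName", "unknown"),
    }
    # redact: keep a stable hash instead of the raw username
    event["user"] = hashlib.sha256(event["user"].encode()).hexdigest()[:12]
    event.update(ASSETS.get(event["src_ip"], {}))        # enrich with asset context
    event["route"] = "siem" if event.get("tier") == "prod" else "archive"
    return event

print(shape({"SourceIp": "10.0.0.5", "User": "alice", "EventName": "Login"}))
```

Note the ordering: redaction and enrichment happen before the routing decision, so sensitive fields never reach a destination in raw form.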

Store Data With Intent

Not all telemetry needs the same retention period, storage tier, or query speed. High-value operational and security data may need to stay hot for rapid search and alerting, while bulk historical data can move to cheaper long-term storage. The key is to align retention with investigation needs, compliance obligations, and cost tolerance. 

Analyze, Alert, and Refine

Only after that foundation is in place does analysis become truly useful. Dashboards, alerts, anomaly detection, and visualizations work much better when the underlying telemetry is already clean, consistent, and routed with purpose. Machine learning and AI can make this process more effective by helping teams spot unusual patterns, detect anomalies faster, and identify changes that may be easy to miss in high-volume environments.

That is especially important in security operations, where the real challenge is turning telemetry into better decisions with less noise. This is exactly why a pipeline-based approach becomes so valuable. When telemetry is already being normalized, enriched, and routed upstream, analysis can start earlier, before raw events pile up in costly SIEM platforms.

Solutions like DetectFlow place detection logic, threat correlation, and Agentic AI capabilities directly in the pipeline. At the pre-SIEM stage, DetectFlow can correlate events across log sources from multiple systems, while Flink Agent and AI help surface the attack chains that matter in real time and reduce false positives. In practice, that means teams can move detection left and deliver cleaner, richer, and more actionable signals downstream.

Telemetry and Monitoring: Main Difference

Telemetry and monitoring are closely related, but they are not the same thing. Telemetry is the process of collecting and transmitting data from systems and applications. It captures raw signals such as metrics, logs, traces, and events, then sends them to a central place for analysis. Monitoring is what teams do with that data to understand system health, performance, and availability. It turns telemetry into dashboards, alerts, and reports that help people act on what they see.

The difference matters because many organizations still build their strategy around dashboards and alerts alone. Monitoring is important, but it is only one use of telemetry. Security teams also rely on telemetry for investigation, hunting, root-cause analysis, and detection engineering. In other words, telemetry is the foundation, while monitoring is one of the ways that foundation is used.

In fact, telemetry is like the nervous system, constantly gathering signals from every part of the body. Monitoring is like the brain, interpreting those signals and deciding what needs attention. Telemetry feeds monitoring. Without telemetry, there is nothing to monitor. Without monitoring, telemetry remains a raw signal with no clear action attached.

What Is a Telemetry Pipeline?

A telemetry pipeline is the operating layer between telemetry sources and telemetry destinations. It collects signals from applications, hosts, cloud platforms, APIs, identity systems, endpoints, and networks, then processes that data before sending it onward.

The easiest way to think about it is that telemetry sources produce data, but the pipeline gives that data direction. Without a pipeline, downstream tools become catch-all warehouses. With a pipeline, telemetry can be standardized, routed by value, and governed according to policy. That is especially important for security operations, where one class of data may need real-time detection while another belongs in lower-cost retention or long-term investigation storage.

From a business perspective, the value is straightforward:

  • Lower cost by reducing unnecessary downstream ingestion
  • Better signal quality through normalization and enrichment
  • Less analyst fatigue by cutting noisy, low-value events earlier
  • More flexibility to send each data type where it creates the most value
  • Stronger governance through filtering, redaction, and policy-based routing

 

How Does the Telemetry Pipeline Work?

At a high level, a telemetry pipeline works through three core stages: ingest, process, and route. Together, these stages turn raw telemetry from many sources into clean, useful data to act on.

Ingest

The first stage is ingestion. This is where the pipeline collects telemetry from across the environment: applications, cloud services, containers, endpoints, identity systems, network tools, and infrastructure components. In modern environments, this stage must handle multiple signal types at once, including logs, metrics, traces, and events, often arriving at very different volumes and speeds.

Process

The second stage is processing, and this is where most of the value is created. Data is cleaned, normalized, enriched, filtered, and optimized before it reaches downstream systems. That can include removing duplicates, standardizing schemas, enriching records with identity or threat context, redacting sensitive fields, or reducing noisy data that creates cost without adding much value.

This is also where optimization and governance come in. Instead of treating all telemetry as equally important, teams can shape data according to business and security priorities. High-value signals can be enriched and preserved. Low-value records can be reduced, tiered, or dropped. Sensitive information can be handled according to compliance policy. In other words, processing is where the pipeline stops being a transport mechanism and becomes a control mechanism.

Route

The final stage is routing. Once telemetry has been shaped, the pipeline sends it to the right destinations. Security-relevant events may go to a SIEM or an in-stream detection layer. Operational metrics may go to observability tooling. Bulk logs may go to lower-cost storage. Archived data may be retained for compliance or long-term investigation. The point is that the same data no longer has to go everywhere in the same form.

By integrating collection, processing, and routing into one flow, a telemetry pipeline turns data from a flood into a controlled stream. It does not just move telemetry. It makes telemetry usable.
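The three stages above can be condensed into a toy sketch. This is purely illustrative, not any product's implementation; the event fields, severity threshold, sensitive-data pattern, and destination names are all assumptions made for the example:

```python
import re

# Hypothetical destinations; in production these would be a SIEM connector,
# an observability backend, and low-cost object storage.
routes = {"security": [], "observability": [], "archive": []}

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # example sensitive pattern
seen = set()

def process(event):
    """Deduplicate and redact one event; returning None means 'drop'."""
    key = (event.get("source"), event.get("id"))
    if key in seen:
        return None                              # drop exact duplicates
    seen.add(key)
    event["message"] = SSN_RE.sub("[REDACTED]", event.get("message", ""))
    return event

def route(event):
    """Send high-severity events to detection, metrics to observability."""
    if event.get("severity", 0) >= 7:
        return "security"
    if event.get("type") == "metric":
        return "observability"
    return "archive"

def run(events):
    for raw in events:                           # ingest
        event = process(raw)                     # process
        if event is not None:
            routes[route(event)].append(event)   # route

run([
    {"source": "auth", "id": 1, "severity": 9, "message": "failed login"},
    {"source": "auth", "id": 1, "severity": 9, "message": "failed login"},  # duplicate
    {"source": "app", "id": 2, "type": "metric", "message": "latency=120ms"},
    {"source": "hr", "id": 3, "message": "ssn 123-45-6789 in payload"},
])
print({name: len(batch) for name, batch in routes.items()})
# → {'security': 1, 'observability': 1, 'archive': 1}
```

Even in this toy version, the duplicate never reaches a destination, the sensitive value never leaves the pipeline in the clear, and only one of four raw events lands in the high-cost security tier.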

What Kind of Companies Need Telemetry Data Pipelines?

Any company running modern digital systems needs telemetry. The real difference is how urgently it needs to manage that telemetry well. Telemetry pipelines become especially important when blind spots are expensive, which usually means complex infrastructure, regulated data, customer-facing services, or constant security pressure. AWS’s observability guidance is explicitly built for cloud, hybrid, and on-prem environments, which already describes most enterprise estates.

That need shows up across many industries. Technology and SaaS companies rely on telemetry pipelines to protect uptime and customer experience. Financial institutions use them to monitor transactions, improve fraud detection, and keep audit data under control. Healthcare organizations use them to balance reliability with privacy and compliance. Retailers, telecom providers, manufacturers, logistics firms, and public-sector agencies need them because scale and continuity leave very little room for guesswork.

For security teams, the case is even sharper. Telemetry becomes the evidence layer behind detection, triage, investigation, and response. That is why the better question is no longer whether a company needs telemetry, but whether it is still treating telemetry like raw exhaust, or finally managing it like the strategic asset it has become.

How SOC Prime Turns Telemetry Pipelines Into Detection Pipelines

Telemetry pipelines started as a smarter way to move, shape, and control data before it reached expensive downstream platforms. SOC Prime extends that idea further with DetectFlow, which turns the pipeline into an active detection layer instead of using it only for transport and optimization. 

DetectFlow can run tens of thousands of Sigma detections on live Kafka streams, chain detections at line speed, drastically reduce the volume of potential alerts, and surface attack chains that are then further correlated and pre-triaged by Agentic AI before they hit the SIEM. It also brings real-time visibility, in-flight tagging and enrichment, and ensures infrastructure scalability that goes beyond traditional SIEM limits. That moves detection left, closer to the data, earlier in the flow, and far less dependent on costly downstream solutions.

For cybersecurity teams, that is the larger takeaway. Telemetry pipelines are not just an observability upgrade or a cost-control tactic. They are becoming a core part of modern cyber defense. And when detection logic, correlation, and AI move into the pipeline itself, telemetry stops being something teams merely store and search later and becomes something they act on in real time.

 



The post Telemetry Pipeline: How It Works and Why It Matters in 2026 appeared first on SOC Prime.

CVE-2026-20643: Vulnerability in WebKit Navigation API May Bypass Same Origin Policy

March 18, 2026, 10:01
CVE-2026-20643 in WebKit Navigation API fixed by Apple

Just a little over a month after fixing the actively exploited CVE-2026-20700 zero-day, Apple has now issued its first Background Security Improvements release to address CVE-2026-20643, a WebKit vulnerability that could allow maliciously crafted web content to bypass the Same Origin Policy, one of the browser’s core security boundaries.

The issue in the limelight adds to a steadily growing vulnerability landscape. Experts forecast that 2026 will be the first year to surpass 50,000 published CVEs, with a median estimate of 59,427 and a realistic possibility of far higher totals. At the same time, NIST has already recorded more than 13,000 vulnerabilities this year, underscoring the growing scale defenders must monitor.

Sign up for the SOC Prime Platform to access the global marketplace of 800,000+ detection rules and queries made by detection engineers, updated daily, and enriched with AI-native threat intel to proactively defend against emerging threats. 

Just click Explore Detections below to immediately reach the extensive detection stack filtered by the “CVE” tag. All detections are compatible with dozens of SIEM, EDR, and Data Lake formats and are mapped to MITRE ATT&CK®. 

Explore Detections

Security experts can also leverage Uncoder AI to accelerate detection engineering end-to-end by generating rules directly from live threat reports, refining and validating detection logic, visualizing Attack Flows, converting IOCs into custom hunting queries, and instantly translating detection code across diverse language formats.

CVE-2026-20643 Analysis

CVE-2026-20643 affects WebKit, the browser engine behind Safari and a wide range of Apple web content handling across iPhone, iPad, and Mac. Apple’s advisory says the flaw could allow maliciously crafted web content to bypass the Same Origin Policy because of a cross-origin issue in the Navigation API.

Notably, the Same Origin Policy is one of the web’s foundational protections. It is meant to stop one website from reaching into the data, sessions, or active content of another. When this boundary is breached, a malicious webpage may access data from another site, undermining one of the basic rules browsers rely on to keep web activity separate and private.
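At its core, the Same Origin Policy is a tuple comparison: two URLs belong to the same origin only when scheme, host, and port all match. A minimal Python sketch of that check (illustrative only; real browser engines such as WebKit implement considerably more nuance around serialization and opaque origins):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def same_origin(url_a, url_b):
    """Two URLs share an origin only if scheme, host, and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # .port is None when the URL uses the scheme's default, so normalize.
    port_a = a.port or DEFAULT_PORTS.get(a.scheme)
    port_b = b.port or DEFAULT_PORTS.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://bank.example/account", "https://bank.example/login"))  # True
print(same_origin("https://bank.example/", "https://evil.example/"))              # False: different host
print(same_origin("https://bank.example/", "https://bank.example:8443/"))         # False: different port
```

A bypass like CVE-2026-20643 effectively lets content from one origin act as if a check like this had returned True for a different origin.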

The exposure is broader than Safari alone. WebKit powers Safari, many third-party browsers on iOS and iPadOS, and in-app web views across Apple platforms. In practice, that means the vulnerable component is exercised not only when a user browses the web directly, but also when apps load embedded web content. 

Apple has not mentioned that CVE-2026-20643 was exploited in the wild, and its advisory focuses on the technical impact rather than observed attack activity. Still, the issue resides in a high-exposure component that processes untrusted web content constantly. In enterprise environments, a flaw that weakens browser isolation can increase the risk of session abuse, cross-site data access, and follow-on compromise through malicious or compromised web content. 

What makes Apple’s latest release especially notable is how the vendor delivered the fix. Background Security Improvements is designed to ship smaller security patches between full software updates. It is currently available on the latest versions of iOS, iPadOS, and macOS. In the case of CVE-2026-20643, Apple used the new mechanism to push a WebKit fix directly to supported devices instead of waiting for a broader release.

CVE-2026-20643 Mitigation

Apple addressed CVE-2026-20643 through its first Background Security Improvements release for supported iPhone, iPad, and Mac devices. The fix was shipped as the corresponding “(a)” update for iOS 26.3.1, iPadOS 26.3.1, macOS 26.3.1, and macOS 26.3.2, with Apple citing improved input validation as the remediation. Security researcher Thomas Espach was credited with reporting the flaw.

Apple says Background Security Improvements are managed from the Privacy & Security menu. Apple recommends keeping Automatically Install enabled so devices receive these fixes between normal software releases.

Notably, if Background Security Improvements are turned off, the device will not receive these protections until they are included in a later software update. Apple also says that removing an installed Background Security Improvement reverts the device to the baseline software version without any applied background security patches. For that reason, the safest path is to leave automatic installation on and avoid removing the update unless a compatibility issue makes it necessary.

Additionally, by leveraging SOC Prime’s AI-Native Detection Intelligence Platform backed by top cyber defense expertise, global organizations can adopt a resilient security posture and transform their SOC to always stay ahead of emerging threats tied to zero-day exploitation.

FAQ

What is CVE-2026-20643 and how does it work?

CVE-2026-20643 is a WebKit vulnerability affecting iOS, iPadOS, and macOS. Apple describes it as a cross-origin issue in the Navigation API that may allow maliciously crafted web content to bypass the Same Origin Policy.

When was CVE-2026-20643 disclosed?

Apple published the security advisory for CVE-2026-20643 on March 17, 2026, alongside its first Background Security Improvements release covering this flaw.

What is the impact of CVE-2026-20643 on systems?

The main impact is a breakdown in browser isolation. If exploited, the flaw may let malicious web content bypass the Same Origin Policy, which is designed to prevent one site from accessing data or active content from another.

Can CVE-2026-20643 still affect me in 2026?

Yes. Devices that have not received the relevant Background Security Improvements release, or where those protections were disabled or removed, may still remain exposed while running affected versions.

How can I protect against CVE-2026-20643?

Install the applicable Background Security Improvements release for your current Apple OS version and make sure Automatically Install is enabled under Privacy & Security so future fixes are applied without delay.



The post CVE-2026-20643: Vulnerability in WebKit Navigation API May Bypass Same Origin Policy appeared first on SOC Prime.

Observability Pipeline: Managing Telemetry at Scale

March 18, 2026, 07:48

Observability began as a visibility problem. Yet, today it is framed just as much as a control challenge because teams have to manage the floods of telemetry moving daily through the business environment. Most organizations already collect large volumes of logs, metrics, events, and traces. The issue now lies in managing tons of that data before it reaches expensive downstream tools. Gartner defines observability platforms as systems that ingest telemetry to help teams understand the health, performance, and behavior of applications, services, and infrastructure. That matters because when systems slow down or fail, the impact reaches far beyond the technical side, affecting revenue, customer sentiment, and brand perception.

This creates a familiar paradox. Complex environments require broad telemetry coverage, yet large data volumes can quickly become expensive and difficult to manage. When every signal is forwarded by default, useful insight gets mixed with duplication, low-value data, and rising storage and processing costs. Gartner reports observability spend rising around 20% year over year, with many organizations already spending more than $800,000 annually. The trend shows that by 2028, 80% of enterprises without observability cost controls will overspend by more than 50%.

The pressure is pushing teams to look for more control earlier in the flow. Observability pipelines answer that need by giving teams a practical way to filter, enrich, transform, and route data before it turns into noise, waste, and operational drag downstream.

The same logic is starting to shape cybersecurity operations as well. This is where tools like SOC Prime’s DetectFlow enter the picture. DetectFlow moves the detection layer directly into the pipeline, enabling SOC teams to run tens of thousands of Sigma rules against live Kafka streams using Apache Flink, tag, enrich, and chain events at the pre-SIEM stage, and scale without the usual vendor caps on speed, capacity, or cost.

What Is an Observability Pipeline?

An observability pipeline is the layer that moves telemetry from sources to destinations while performing tasks like transformation, enrichment, and aggregation. Specifically, it takes in logs, metrics, traces, and events, then prepares that data before it reaches monitoring platforms, SIEMs, data lakes, or long-term storage. Along the way, observability pipelines can filter noisy data, enrich records with context, aggregate high-volume streams, secure sensitive fields, and route each data type to the destination where it makes the most sense.

This becomes important as telemetry grows across microservices, containers, cloud services, and distributed systems. Without a pipeline, teams often forward everything by default, which increases cost, adds noise, and makes data handling harder to manage across multiple tools and environments.

Observability pipelines help solve several common challenges:

  • Data overload. High telemetry volume makes it harder to separate useful signals from low-value data, especially when logs, metrics, and traces arrive from many different systems at once.
  • Rising storage and processing costs. Sending all data to downstream platforms drives up ingest, indexing, and retention costs, even when much of that data adds little value.
  • Noisy data. Duplicate, low-priority, or low-context telemetry can overwhelm the signals that actually matter for troubleshooting, security, and performance analysis.
  • Compliance & security risks. Logs and telemetry streams may contain personal or regulated data, which increases compliance and privacy risks when it is forwarded or stored without proper masking or redaction.
  • Complex infrastructure. Teams often need to send different data sets to different destinations, such as monitoring tools, SIEMs, and lower-cost storage, which becomes difficult to manage without a central control plane.
  • Migration and vendor flexibility. Pipelines make it easier to reshape and reroute telemetry for new tools or parallel destinations without rebuilding collection from scratch.

In simple terms, an observability pipeline gives teams more control over telemetry. It helps organizations keep the useful signals, improve context, and send each stream where it fits.

How Observability Pipelines Work

At a practical level, observability pipelines create a single flow for handling telemetry data. Instead of managing multiple handoffs between sources and destinations, teams can work through one control layer that prepares data for different operational and security use cases.

Collect

The first step is gathering data from across the organizational environment. That can include application logs, infrastructure metrics, cloud events, container data, and security records. Bringing those inputs into one pipeline gives teams a more consistent starting point and reduces the need for separate connections between every source and every tool.

Process

Once data enters the pipeline, it can be adjusted to match the needs of the business. Teams may standardize formats, enrich records with metadata, remove duplicate events, mask sensitive fields, or reduce unnecessary detail. This step helps make the data more usable, whether the goal is troubleshooting, compliance, long-term retention, or security analysis.
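As one concrete illustration of the masking step, an email address in a record can be replaced with a stable pseudonym before it is forwarded. This is a minimal sketch, not any vendor's behavior; the field names and the hashing choice are assumptions for the example:

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(match):
    # A stable hash keeps records correlatable across events
    # without exposing the raw address.
    digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:12]
    return f"user-{digest}"

def mask_record(record, fields=("user", "message")):
    """Return a copy of the record with email addresses masked in the
    given fields. Field names here are assumptions for the example."""
    masked = dict(record)
    for field in fields:
        value = masked.get(field)
        if isinstance(value, str):
            masked[field] = EMAIL_RE.sub(pseudonymize, value)
    return masked

event = {"user": "jane.doe@example.com", "message": "login ok", "host": "web-1"}
print(mask_record(event)["user"])  # the address is gone, the record remains usable
```

Using a deterministic hash rather than a blanket “[MASKED]” token means the same user still correlates across events downstream, which matters for investigations even when the raw identity must stay protected.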

Route

After processing, the pipeline sends data to the right destination. High-priority records may go to a monitoring platform or SIEM for immediate visibility, while other data can be archived, stored in a data lake, or routed to lower-cost storage. This makes it easier to support different teams without forcing every system to handle the same data in the same way.

Benefits of Using Observability Pipeline

An observability pipeline helps teams manage growing telemetry volumes, improve data quality, and control how information is used across operations and security. As environments become more distributed, that kind of control matters more for cost, performance, and faster decision-making.

Some of the main benefits include:

  • Lower storage and processing costs. An observability pipeline helps reduce unnecessary spend by filtering low-value events, deduplicating records, and sending only the right data to high-cost platforms. This keeps teams from paying top price for data that adds little value.
  • Better signal quality. When noisy or incomplete telemetry is cleaned up earlier, the data that reaches downstream tools becomes easier to search, analyze, and act on. That helps teams focus on what actually matters instead of sorting through clutter.
  • Faster troubleshooting and investigations. Better-prepared data speeds up incident response. Operations teams can identify performance issues faster, while security teams can get cleaner and more relevant records into SIEMs and other detection tools without overwhelming analysts with noise.
  • Stronger compliance and data protection. Logs and telemetry may contain sensitive or regulated information. A pipeline makes it easier to mask, redact, or route that data properly before it is stored or shared, which supports compliance and reduces risk.
  • More flexibility across tools and teams. Different teams need different views of the same data. An observability pipeline makes it easier to route specific streams to monitoring platforms, data lakes, SIEMs, or lower-cost storage without rebuilding collection every time requirements change.
  • Better scalability for modern environments. As infrastructure grows across cloud, containers, and distributed systems, pipelines help organizations scale telemetry handling in a more controlled and sustainable way.

In essence, the value of an observability pipeline comes down to control. It helps teams cut waste, improve signal quality, support security and compliance, and make better use of telemetry across the business.

Observability Pipeline in the Cloud

Cloud environments make observability harder because they add more motion, more dependencies, and far more telemetry to manage. Microservices, containers, Kubernetes, and short-lived workloads all produce signals that change and accumulate quickly. In Chronosphere’s cloud-native observability research summary, 87% of engineers said cloud-native architectures have made discovering and troubleshooting incidents more complex, and 96% said they feel stretched to their limits.

That complexity creates a practical problem for the business. Teams need broad visibility to understand what is happening across cloud services, applications, and infrastructure, but forwarding everything by default quickly becomes expensive and hard to manage. Experts describe the market shift as a move from volume to value, driven by rising telemetry costs, AI workloads, and the need for more disciplined visibility.

This is where observability pipelines become especially useful in the cloud. A pipeline gives teams a control layer between data sources and downstream tools, so they can filter noisy records, enrich important ones, and route each stream to the right destination. That means less waste in premium platforms, better-quality signals for troubleshooting, and more flexibility across monitoring, storage, and security tools. In cloud-native environments, that kind of control is no longer a nice extra.

The cloud angle also matters for cybersecurity. Security teams rely on the same cloud telemetry for threat detection, investigation, and compliance, but raw volume can overwhelm SIEMs and bury the events that matter. An observability pipeline helps earlier in the flow by reducing noise, improving context, and sending higher-value records to the right systems. That is also where SOC Prime’s DetectFlow fits naturally, moving detection closer to ingestion so teams can evaluate, enrich, and correlate events before they become downstream overload.

Observability Pipeline: A Smarter Layer for Security Operations

An observability pipeline gives teams something they increasingly need across modern environments: control before data turns into cost, noise, and slow decision-making. The more telemetry organizations collect, the more important it becomes to filter, enrich, transform, and route it with purpose. That makes observability pipelines useful far beyond monitoring alone. They help improve data quality, keep downstream platforms efficient, and create a stronger foundation for both operations and security.

Notably, security teams face the same telemetry problem, but with higher stakes. SIEMs have practical limits, rule counts do not scale forever, and too much raw data can place an enormous burden on security analysts. This is where DetectFlow adds a meaningful value layer, extending observability pipeline logic into threat detection by moving detection closer to the ingestion layer.

DetectFlow runs tens of thousands of Sigma detections on live Kafka streams using Apache Flink, correlates events across multiple log sources at the pre-SIEM stage, and uses Flink Agent plus active threat context for AI-powered analysis. In practice, that means SOC teams can reduce noise earlier, surface attack chains faster, and improve investigative clarity before downstream tools get overwhelmed.

SOC Prime DetectFlow Dashboard

 



The post Observability Pipeline: Managing Telemetry at Scale appeared first on SOC Prime.


CVE-2026-3910: Chrome V8 Zero-Day Used for In-the-Wild Attacks

March 13, 2026, 10:33

Chrome zero-days continue to pose a major risk for cyber defenders. Earlier this year, Google patched CVE-2026-2441, the first actively exploited Chrome zero-day of 2026. Now, another emergency update has been released, fixing two more flaws already exploited in the wild, CVE-2026-3910 in Chrome’s V8 JavaScript and WebAssembly engine and CVE-2026-3909, an out-of-bounds write bug in Skia.

Google describes CVE-2026-3910 as an inappropriate implementation issue in Chrome V8. In essence, a crafted HTML page may allow a remote attacker to execute arbitrary code inside the browser sandbox. 

The latest Chrome emergency patch lands against an increasing zero-day threat. Google Threat Intelligence Group tracked 90 zero-days exploited in the wild in 2025, up from 78 in 2024, and found that enterprise technologies accounted for 43 cases, or a record 48% of observed exploitation.

Register for SOC Prime’s AI-Native Detection Intelligence Platform, backed by cutting-edge technologies and top cybersecurity expertise to outscale cyber threats and build a resilient cybersecurity posture. Click Explore Detections to access the comprehensive collection of SOC content for vulnerability exploit detection, filtered by the custom “CVE” tag.

Explore Detections

Detections from the dedicated rule set can be applied across 40+ SIEM, EDR, and Data Lake platforms and are mapped to the latest MITRE ATT&CK® framework v18.1. Security teams can also leverage Uncoder AI to accelerate detection engineering end-to-end by generating rules directly from live threat reports, refining and validating detection logic, auto-visualizing Attack Flows, converting IOCs into custom hunting queries, and instantly translating detection code across diverse language formats.

CVE-2026-3910 Analysis 

According to Google’s security advisory, CVE-2026-3910 is a high-severity vulnerability in V8, the JavaScript and WebAssembly engine used by Chrome. It can be triggered through a crafted HTML page and may allow arbitrary code execution inside the browser sandbox. Because V8 processes active content during normal browsing, exploitation can begin with something as simple as visiting a malicious or compromised website.

The risk is substantial because Chrome is deeply embedded in daily enterprise work. An actively exploited V8 flaw can turn ordinary browsing into a path for credential theft, malicious code delivery, or broader compromise, especially when combined with other bugs or phishing.

Google has confirmed that CVE-2026-3910 is being exploited in the wild, but has not published technical details about the exploitation chain. 

The same Chrome update also fixed CVE-2026-3909, a high-severity out-of-bounds write vulnerability in the Skia graphics library. Google says the flaw is also being exploited in the wild. Because it affects another core browser component and was fixed in the same emergency release, organizations should apply the full update without delay rather than focus on CVE-2026-3910 alone.

CVE-2026-3910 Mitigation

The recommended mitigation is to update Chrome immediately to the latest patched Stable Channel build. Google says the fixed desktop versions are 146.0.7680.75 and 146.0.7680.76 for Windows and macOS and 146.0.7680.75 for Linux. Because Google has confirmed in-the-wild exploitation, organizations should prioritize the update across employee endpoints, administrator workstations, and shared systems used for browsing.

Organizations using Chromium-based browsers such as Microsoft Edge, Brave, Opera, and Vivaldi should also monitor for corresponding vendor patches, since those products may inherit exposure from the same underlying codebase. 

Additionally, by leveraging SOC Prime’s AI-Native Detection Intelligence Platform backed by top cyber defense expertise, global organizations can adopt a resilient security posture and transform their SOC to always stay ahead of emerging threats tied to zero-day exploitation.

FAQ

What is CVE-2026-3910 and how does it work?

CVE-2026-3910 is a high-severity vulnerability in Chrome’s V8 JavaScript and WebAssembly engine. Google describes it as an inappropriate implementation flaw that can be triggered with a crafted HTML page, allowing a remote attacker to execute arbitrary code inside the browser sandbox.

When was CVE-2026-3910 first discovered?

Google’s advisory says the vulnerability was reported on March 10, 2026.

What is the impact of CVE-2026-3910 on systems?

The main risk is that malicious web content could trigger code execution inside Chrome’s browser sandbox. In real attacks, that can turn routine browsing into an entry point for credential theft, malware delivery, or further compromise when paired with other techniques.

Can CVE-2026-3910 still affect me in 2026?

Yes. Any Chrome installation that has not yet been updated to the patched build may still be exposed. Google explicitly says exploits for CVE-2026-3910 exist in the wild.

How can I protect against CVE-2026-3910?

Update Chrome to version 146.0.7680.75 or 146.0.7680.76 on Windows and macOS or 146.0.7680.75 on Linux, then relaunch the browser to make sure the patched build is running. Organizations using Chromium-based alternatives should apply vendor fixes as soon as they become available.



The post CVE-2026-3910: Chrome V8 Zero-Day Used for In-the-Wild Attacks appeared first on SOC Prime.


CVE-2026-21262: SQL Server Zero-Day Fixed in Microsoft’s March Patch Tuesday Release

March 12, 2026, 10:46
CVE-2026-21262 zero-day in SQL Server

The beginning of 2026 has brought a wave of zero-day vulnerabilities affecting Microsoft products, including the actively exploited Windows Desktop Window Manager flaw (CVE-2026-20805), the Microsoft Office zero-day (CVE-2026-21509) that prompted an out-of-band fix, and the Windows Notepad RCE bug (CVE-2026-20841). Microsoft’s March Patch Tuesday release keeps defenders busy again, this time shifting attention to CVE-2026-21262, a publicly disclosed SQL Server Elevation of Privilege (EoP) vulnerability that puts enterprise environments at risk. 

Microsoft describes CVE-2026-21262 as an improper access control flaw that allows an authorized attacker to elevate privileges over a network. The bug carries a CVSS score of 8.8 and was one of two publicly disclosed zero-days addressed in March’s Patch Tuesday. While there is no confirmed evidence of active exploitation, the combination of public exposure, low attack complexity, and the possibility of privilege escalation inside a core database platform makes this one hard to dismiss as a routine patch.

In view of Microsoft’s broad reach across enterprise and consumer environments, vulnerabilities in its products can have a devastating impact. BeyondTrust reported that Microsoft disclosed a record 1,360 vulnerabilities in 2024, with Elevation of Privilege flaws being a top category. That continued into 2025, when Microsoft patched 1,129 vulnerabilities across the year, with EoP issues accounting for 50% of all fixes as of December 2025. Google Threat Intelligence Group adds another layer of context: it tracked 90 in-the-wild zero-days in 2025 and found that enterprise technologies made up a record 48% of observed exploitation.

Sign up for SOC Prime Platform to access the world’s largest detection intelligence dataset backed by an AI-powered product suite, helping SOC teams seamlessly handle everything from threat detection to simulation. Defenders can drill down to a relevant detection stack for vulnerability exploitation activity by pressing Explore Detections.

Explore Detections

All rules are mapped to the latest MITRE ATT&CK® framework and are compatible with multiple SIEM, EDR, and Data Lake platforms. Additionally, each rule comes packed with broad metadata, including CTI references, attack flows, audit configurations, and more.

Cyber defenders can also use Uncoder AI to streamline their detection engineering routine. Turn raw threat reports into actionable behavior rules, test your detection logic, map out attack flows, turn IOCs into hunting queries, or instantly translate detection code across languages backed by the power of AI and deep cybersecurity expertise behind every step.

CVE-2026-21262 Analysis

Microsoft’s March 2026 Patch Tuesday addressed over 80 vulnerabilities, including two publicly disclosed zero-days. Across the release, privilege escalation flaws dominated, with the total list containing 46 EoP bugs, 18 RCE flaws, 10 information disclosure bugs, 4 denial-of-service issues, 4 spoofing vulnerabilities, and 2 security feature bypass flaws. 

CVE-2026-21262 stands out because it affects SQL Server, a platform many organizations rely on to run core applications and store high-value data. Successful exploitation can let attackers move from a low-privileged authenticated account to SQL sysadmin, which effectively means full control over the affected database instance. From there, hackers can access or alter data, change configuration, create new logins, or establish persistence inside the SQL environment.

The flaw does not provide initial access on its own. An attacker still needs valid credentials and network reachability to a vulnerable SQL Server instance. That limitation matters, but it should not create false confidence. In many enterprise environments, low-privileged database accounts are spread across applications, integration services, automation tooling, and legacy workloads, which makes post-compromise abuse a realistic scenario. 

Microsoft’s March Patch Tuesday release also included several other vulnerabilities defenders should keep in focus. The second publicly disclosed zero-day is a .NET denial-of-service flaw (CVE-2026-26127). Microsoft also fixed two notable Office remote code execution bugs (CVE-2026-26110, CVE-2026-26113), which can be exploited through the Preview Pane. Another important issue is an Excel information disclosure flaw (CVE-2026-26144) that researchers say could potentially be abused to exfiltrate data through Copilot Agent mode.

CVE-2026-21262 Mitigation

According to Microsoft’s advisory, organizations running SQL Server should first identify the exact product version and current build, then install the March 10 security update that matches the instance’s servicing path. 

Notably, the vendor distinguishes between the GDR path, which delivers security fixes only, and the CU path, which includes both security and functional fixes. If an instance has been following the GDR track, install the matching GDR package. If it has already been receiving CU releases, install the corresponding CU security update. Microsoft also notes that organizations can move from GDR to CU once, but cannot roll back from CU to GDR afterward.
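The servicing-path decision above can be sketched as a small helper. This is an illustrative model of the rules Microsoft describes, not a real patching tool; package names are placeholders, and the one-way GDR-to-CU transition is encoded as described in the advisory.

```python
# Model the GDR/CU servicing-path choice: GDR-track instances take the GDR
# security-only package, CU-track instances take the CU security update,
# and moving from GDR to CU is a one-way transition.

def choose_update(current_track: str, switch_to_cu: bool = False) -> str:
    """Pick the matching March security package for a SQL Server instance."""
    if current_track == "CU":
        # Once on the CU path, there is no supported rollback to GDR.
        return "CU security update"
    if current_track == "GDR":
        return "CU security update" if switch_to_cu else "GDR security update"
    raise ValueError(f"unknown servicing track: {current_track!r}")

print(choose_update("GDR"))                      # GDR security update
print(choose_update("GDR", switch_to_cu=True))   # CU security update
print(choose_update("CU"))                       # CU security update
```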

The affected supported branches and their corresponding update packages are listed in Microsoft’s advisory for each servicing path.

Alongside patching, defenders should review SQL logins and role assignments, reduce unnecessary privileges for service and application accounts, restrict network exposure to database servers, and monitor for unusual permission changes or newly assigned high-privilege roles. Because exploitation requires valid credentials, it is also worth reviewing embedded database credentials, shared service accounts, and secrets management practices across the environment. 
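One way to operationalize the monitoring advice above is to diff periodic snapshots of role membership and alert on newly gained high-privilege roles. This is a hedged sketch: snapshot collection (e.g. querying `sys.server_role_members`) is left out, and the data below is illustrative.

```python
# Diff two snapshots of SQL role membership to surface newly granted
# high-privilege roles. Input dicts map login name -> set of role names.

HIGH_PRIV_ROLES = {"sysadmin", "securityadmin", "serveradmin"}

def new_high_priv_grants(before: dict[str, set[str]],
                         after: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return {login: roles} for high-privilege roles gained since 'before'."""
    alerts: dict[str, set[str]] = {}
    for login, roles in after.items():
        gained = (roles - before.get(login, set())) & HIGH_PRIV_ROLES
        if gained:
            alerts[login] = gained
    return alerts

before = {"app_svc": {"db_datareader"}}
after = {"app_svc": {"db_datareader", "sysadmin"}}
print(new_high_priv_grants(before, after))  # {'app_svc': {'sysadmin'}}
```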

Also, by enhancing the defenses with SOC Prime’s AI-Native Detection Intelligence Platform, SOC teams can source detection content from the largest and up-to-date repository, seamlessly adopt the full pipeline from detection to simulation into their security processes, orchestrate workflows in their natural language, and smoothly navigate the ever-changing threat landscape while strengthening defenses at scale.

FAQ

What is CVE-2026-21262 and how does it work?

CVE-2026-21262 is a high-severity Elevation of Privilege vulnerability in Microsoft SQL Server. Microsoft describes it as an improper access control flaw that allows an authorized attacker to elevate privileges over a network. In practice, that means an attacker with valid low-privileged access to a vulnerable SQL Server instance may be able to abuse the flaw to gain far higher permissions.

When was CVE-2026-21262 first discovered?

The vulnerability was officially disclosed and published on March 10, 2026, as part of Microsoft’s March Patch Tuesday release. Microsoft credited Erland Sommarskog with discovering the flaw.

What is the impact of CVE-2026-21262 on systems?

CVE-2026-21262 can let an authenticated attacker escalate privileges inside a vulnerable SQL Server instance, potentially reaching SQL sysadmin-level access. In practical terms, that could give an attacker broad control over the database environment, including the ability to access or alter sensitive data, change server settings, create new logins, and establish persistence within the affected SQL Server instance.

Can CVE-2026-21262 still affect me in 2026?

Yes. Any unpatched supported SQL Server deployment can still be exposed in 2026 if it is running a vulnerable build and an attacker has valid credentials plus network access to the instance. The flaw was publicly disclosed, which increases the chance of follow-on abuse even though Microsoft had not listed it as actively exploited at release time.

How can you protect from CVE-2026-21262?

Microsoft’s guidance is to identify your exact SQL Server version and then install the matching March 2026 security update for that servicing path. That means applying the correct GDR or CU package for SQL Server 2016 SP3, 2017, 2019, 2022, or 2025, depending on your current branch.



The post CVE-2026-21262: SQL Server Zero-Day Fixed in Microsoft’s March Patch Tuesday Release appeared first on SOC Prime.


SOC Prime Launches DetectFlow Enterprise To Enhance Security Data Pipelines with Agentic AI

March 12, 2026, 05:03 · Andrii Bezverkhyi
SOC Prime releases DetectFlow enterprise

BOSTON, MA, March 12, 2026: SOC Prime today announced the release of DetectFlow Enterprise, a solution that brings real-time threat detection to the ingestion layer, turning data pipelines into detection pipelines.

Running tens of thousands of Sigma detections on live Kafka streams with millisecond MTTD using Apache Flink, DetectFlow Enterprise enables security teams to detect, tag, enrich, and correlate threat data in flight before data reaches downstream systems such as SIEM, EDR, and Data Lakes. This gives organizations a way to expand detection coverage earlier in the processing flow, enrich security telemetry before downstream analysis, and scale detection on infrastructure they already have.

As detection volumes continue to grow, many SOC teams face the same set of operational challenges, such as delayed detections, rising ingestion costs, infrastructure bottlenecks, fragmented visibility across tools, and difficulty scaling rule coverage without adding more operational overhead. DetectFlow Enterprise is designed to address those pressures by moving detection closer to the data pipeline itself, where events can be inspected, enriched, and correlated in real time.

This release reflects a practical shift in how detection is operationalized. Rather than treating the pipeline as a transport layer alone, DetectFlow Enterprise turns it into an active part of the detection workflow. Teams can manage detections from cloud or local sources, stage and validate updates, and roll out changes safely with full traceability and zero downtime. This new architectural approach also establishes DetectFlow Enterprise as a foundation for unified CI/CD workflows across the SOC Prime Platform, supporting more scalable and efficient security operations.

Teams can also run thousands of detections directly on streaming pipelines with real-time visibility and in-flight tagging and enrichment. They can correlate events across multiple log sources at the pre-SIEM stage, helping surface the attack chains that matter in real time while reducing noise and false positives.

By performing correlation before data reaches the SIEM, DetectFlow Enterprise allows teams to evaluate full telemetry streams against thousands of rules without the performance and cost trade-offs of downstream ingestion. Built on SOC Prime’s Detection Intelligence dataset, shaped by 11 years of continuous threat research and detection engineering, DetectFlow uses Flink Agent to assemble detections, events, and relevant active threat context for AI-powered analysis. This helps security teams surface high-confidence attack chains, improve investigative clarity, and accelerate response to critical threats.

“I have spent most of my career working across threat detection, SIEM, EDR, and SOC operations, and one challenge remained constant. Detection logic was always constrained by the performance and economics of the underlying stack. With DetectFlow Enterprise, we are giving teams a way to move beyond those constraints by turning the data pipeline into an active detection layer, running rules at stream speed, enriching telemetry in flight, and helping organizations scale detection without rearchitecting the rest of their security environment.”

Andrii Bezverkhyi, CEO and Founder of SOC Prime

DetectFlow is designed to work with existing ingestion architecture, requiring no changes to established SIEM workflows. It supports both air-gapped and cloud-connected deployments, allowing organizations to keep data under their control while extending detection across the broader security ecosystem. It can achieve an MTTD of 0.005–0.01 seconds and help organizations increase rule capacity on existing infrastructure by up to ten times.

About SOC Prime

SOC Prime has built and operates the world’s largest AI-Native Detection Intelligence Platform for SOC teams. Trusted by over 11,000 organizations, the company delivers real-time, cross-platform detection intelligence that helps security teams anticipate, detect, validate, and respond to cyber threats faster and more effectively.

A pioneer of the Security-as-Code approach, SOC Prime applies its Detection Intelligence to over 56 SIEM, EDR, Data Lake, and Data Pipeline platforms. The company continuously improves the breadth and quality of its threat coverage, shipping top-quality signals for AI SOCs and security analysts.

For more information, visit https://socprime.com or follow us on LinkedIn & X.



The post SOC Prime Launches DetectFlow Enterprise To Enhance Security Data Pipelines with Agentic AI appeared first on SOC Prime.


SIEM vs Log Management: Observability, Telemetry, and Detection

March 5, 2026, 05:34 · Steven Edwards
SIEM vs Log Management: Rethinking Security Data Workflows

Security teams are no longer short on data. They are drowning in it. Cloud control plane logs, endpoint telemetry, identity events, SaaS audit trails, application logs, and network signals keep expanding, while the SOC is still expected to deliver faster detection and cleaner investigations. That is why SIEM vs log management is not just a tooling debate. It is a telemetry strategy question about what to retain as evidence, what to analyze for real-time detection, and where to do the heavy lifting.

Observability programs accelerate the flood. More telemetry can mean better visibility, but only if the SOC can trust it, normalize it, enrich it, and query it fast enough to keep pace with active threats. At scale, the cost and operational burden show up quickly across both SIEM and log management. PwC highlights how rising data volumes and cost models can push teams to limit ingestion and create blind spots, while alert overload and performance constraints make it harder to separate real threats from noise. Speed is also unforgiving. Verizon reports the median time for users to fall for phishing is less than 60 seconds, while breach lifecycles remain measured in months.

That is why many SOCs are adopting a security data pipeline mindset. It means processing telemetry before it lands in your tools, so you control what gets stored, what gets indexed, and what gets analyzed. Solutions like SOC Prime’s DetectFlow add even more value by turning a data pipeline into a detection pipeline through in-flight normalization and enrichment, running thousands of Sigma rules on streaming data, and supporting value-based routing. Low-signal noise can stay in lower-cost log storage for retention, search, and forensics, while only enriched, detection-tagged events flow into the SIEM for triage and response. The outcome is lower SIEM ingestion and alert noise costs without sacrificing investigation history.
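The value-based routing idea above can be reduced to a very small decision: events that carry detection tags go to the SIEM, everything else lands in low-cost log storage. The field names below (`detections`, `source`) are assumptions for illustration, not a DetectFlow API.

```python
# Toy value-based router: detection-tagged events go to the SIEM queue,
# untagged events to cheaper log storage for retention and forensics.

def route(event: dict) -> str:
    return "siem" if event.get("detections") else "log_storage"

events = [
    {"source": "edr", "detections": ["T1059.001"]},  # tagged in flight
    {"source": "app", "detections": []},              # low-signal noise
]
routed = {e["source"]: route(e) for e in events}
print(routed)  # {'edr': 'siem', 'app': 'log_storage'}
```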

SIEM vs Log Management: Definitions

Before comparing tools, it helps to align on what each category is designed to do, because overlapping feature checklists can hide fundamentally different objectives.

Gartner defines SIEM around a customer need to analyze event data in real time for early detection and to collect, store, investigate, and report on log data for detection, investigation, and incident response. In other words, SIEM is a security-focused system of record that expects heterogeneous data, correlates it, and supports security operations workflows.

Log management has a different center of gravity. NIST describes log management as the process and infrastructure for generating, transmitting, storing, analyzing, and disposing of log data, supported by planning and operational practices that keep logging consistent and reliable. In fact, log management is how you keep the raw evidence searchable and retained at scale, while SIEM is where you operationalize security analytics and response.

The practical difference shows up when you ask two questions:

  • What is the unit of value? For log management, it is searchable records and operational visibility. For SIEM, it’s detection fidelity and incident context.
  • Where does analytics happen? In log management, analytics often supports exploration and troubleshooting. In SIEM, analytics is built for threat detection, alerting, triage, and case management.

 

What Is a Log Management System?

A log management system is the operational backbone for ingesting and organizing logs, so teams can search, retain, and use them to understand what happened.

Log management is often the first place teams see the economics of telemetry. Many organizations don’t need to run expensive correlation on every log line. Instead, they store more data cheaply and retrieve it quickly when an incident demands it. That’s why log management is frequently paired with data routing and filtering approaches that reduce noise before it reaches higher-cost analytics layers.

For security teams, log management becomes truly valuable when it produces high-integrity, well-structured telemetry that downstream detections can rely on, without forcing the SIEM to act as a catch-all storage sink.

What Is a SIEM?

SIEM stands for Security Information and Event Management. A SIEM is designed to centralize security-relevant telemetry and turn it into detections, investigations, and reports. SIEM is typically described as supporting threat detection, compliance, and incident management through the collection and analysis of security events, both near real-time and historical, across a broad scope of log and contextual data sources.

But SIEMs face structural pressures as telemetry grows. Common pain points in traditional SIEM approaches include skyrocketing data volumes and cost, alert overload, and scalability and performance constraints when searching and correlating large datasets in real time. Those pressures matter because defenders already operate on unfavorable timelines. IBM’s Cost of a Data Breach report shows breach lifecycles still commonly span months, which makes efficient investigation and reliable telemetry critical.

So while SIEM remains central for security analytics and response, many teams now treat it as the destination for curated, detection-ready data, not the place where all telemetry must land first.

SIEM vs Log Management: Main Features

A useful way to compare SIEM and log management is to map them to the security data lifecycle: collect, transform, store, analyze, and respond. Log management does most of the work in collect through store, with fast search to support investigations. SIEM concentrates on analyzing through response, where correlation, enrichment, alerting, and case management are expected to work under pressure.

Log management features typically cluster around collect, transform, store, and search:

  • Ingestion at scale: agents, syslog, API pulls, cloud-native integrations
  • Parsing and field extraction: schema mapping, pipeline transforms, enrichment for searchability
  • Retention and storage controls: tiering, compression, cost governance, access policies
  • Search and exploration: fast queries for troubleshooting and forensic hunting

SIEM features concentrate on analyzing and responding:

  • Security analytics and correlation: rules, detections, behavioral patterns, cross-source joins
  • Context and enrichment: identity, asset inventory, threat intel, entity resolution
  • Alert management: triage workflows, suppression, prioritization, reporting
  • Case management: investigations, evidence tracking, compliance reporting

 

SOC Prime vs Log Management

In other words, log management optimizes for retention and retrieval, and SIEM optimizes for detection and action. Yet, traditional SIEM approaches strain when the platform becomes both the telemetry lake and the correlation engine, especially under rising ingestion costs and alert noise. That is why many teams treat log management as the evidence layer, SIEM as the decision layer, and a pipeline layer as the control plane that shapes what flows into each.

Benefits of Using Log Management and SIEMs

Log management and SIEM are most effective when they’re treated as complementary layers in a single security data strategy.

Log management delivers depth and durability. It helps teams retain more raw evidence, troubleshoot operational issues that look like security incidents, and preserve the grounds needed for later forensics. This becomes essential when threat hypotheses emerge after the fact (for example, learning a new indicator days later and needing to search back in time).

SIEM delivers security outcomes: detection, prioritization, and incident workflows. A well-tuned SIEM program can reduce “needle-in-a-haystack” work by correlating events across identities, endpoints, networks, and cloud control planes.

The best security programs get three benefits from combining both:

  • Cost control: store more, analyze less expensively by default, and route high-value data to SIEM.
  • Better investigations: keep deep history in log platforms while SIEM tracks detections and cases.
  • Higher signal quality: normalize and enrich logs so detections fire on consistent fields rather than brittle strings.
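The "consistent fields" point in the last bullet can be shown with a tiny normalization pass: map two vendor-specific log shapes onto shared field names so one detection condition covers both. The input schemas and field mapping here are invented for illustration.

```python
# Normalize vendor-specific field names onto a shared schema so detections
# can match on consistent fields instead of brittle per-source strings.

FIELD_MAP = {
    "src_ip": "source.ip",
    "SourceAddress": "source.ip",
    "user_name": "user.name",
    "TargetUserName": "user.name",
}

def normalize(raw: dict) -> dict:
    return {FIELD_MAP.get(k, k): v for k, v in raw.items()}

a = normalize({"src_ip": "10.0.0.5", "user_name": "svc"})
b = normalize({"SourceAddress": "10.0.0.5", "TargetUserName": "svc"})
print(a == b)  # True: one detection now covers both sources
```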

 

How SOC Prime Can Improve the Work of SIEM & Log Management

SOC Prime brings the SIEM and log management story together as a single end-to-end workflow.

You start with Attack Detective to audit your SOC and map gaps to MITRE ATT&CK, so you know which telemetry and techniques you are missing. Then, Threat Detection Marketplace becomes the sourcing layer where you pull context-enriched detections aligned to those gaps and the latest TTPs. Uncoder AI acts as a detection-engineering booster, making the content operational and portable to any native formats your SIEM, EDR, or Data Lake actually runs, while also helping refine and optimize the logic so it performs at scale.

DetectFlow is the final layer that turns a data pipeline into a detection pipeline and enables full detection orchestration. Running tens of thousands of Sigma rules on live Kafka streams with sub-second MTTD using Apache Flink, DetectFlow tags and enriches events in flight before they reach your security stack and routes outcomes by value. This removes the need for SIEM min-maxing around rule limits and performance tradeoffs, because detection scale shifts to the stream layer, where it grows with your infrastructure, not vendor caps. For SIEM, it delivers cleaner, enriched, detection-tagged signals for triage and response. For log management, it preserves deep retention while making searches and investigations faster through normalized fields and attached detection context.
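To make the "Sigma rules on live streams" idea concrete, here is a toy illustration of evaluating a Sigma-style selection against events in a stream. Real DetectFlow/Flink execution is far more involved; this only shows the matching idea on plain dicts, and the rule and events are invented.

```python
# Evaluate a minimal Sigma-style selection (all fields must match) against
# a stream of event dicts, supporting the common "|endswith" modifier.

RULE = {
    "EventID": 4688,
    "Image|endswith": "\\powershell.exe",
}

def matches(event: dict, rule: dict) -> bool:
    for key, expected in rule.items():
        if key.endswith("|endswith"):
            field = key.split("|", 1)[0]
            if not str(event.get(field, "")).endswith(expected):
                return False
        elif event.get(key) != expected:
            return False
    return True

stream = [
    {"EventID": 4688, "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"},
    {"EventID": 4624, "Image": "C:\\Windows\\explorer.exe"},
]
hits = [e for e in stream if matches(e, RULE)]
print(len(hits))  # 1
```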

SOC Prime DetectFlow



The post SIEM vs Log Management: Observability, Telemetry, and Detection appeared first on SOC Prime.


CVE-2026-21385: Google Patches Qualcomm Zero-Day Exploited in Targeted Android Attacks

March 4, 2026, 11:46 · Daryna Olyniychuk

The steady cadence of Android zero-days flagged as exploited in the wild has carried over into 2026. Following CVE-2025-48633 and CVE-2025-48572, two Android Framework bugs Google flagged for active exploitation, defenders keep seeing the same familiar pattern: mobile-chain vulnerabilities can move fast from limited attacks to real enterprise risk when patching lags.

In March 2026, that storyline continues with CVE-2026-21385, a high-severity vulnerability in a Qualcomm Graphics subcomponent. Google’s Android Security Bulletin warns that there are indications that CVE-2026-21385 may be under limited, targeted exploitation.

As of early 2026, data indicates that 2025 was a record-breaking year for cybersecurity vulnerabilities, with Android remaining a primary target for mobile threats. The first half of 2025 saw Android malware rise by 151%, according to Malwarebytes. More vulnerabilities and more mobile malware together shrink the margin for delayed patching, especially when attackers focus on high-value targets.

Sign up for SOC Prime Platform, aggregating the world’s largest detection intelligence dataset and offering a complete product suite that empowers SOC teams to seamlessly handle everything from detection to simulation. The Platform features a large collection of rules addressing critical exploits. Just press Explore Detections and immediately drill down to a relevant detection stack filtered by “CVE” tag.

Explore Detections

All rules are mapped to the latest MITRE ATT&CK® framework and are compatible with multiple SIEM, EDR, and Data Lake platforms. Additionally, each rule comes packed with broad metadata, including CTI references, attack flows, audit configurations, and more.

Cyber defenders can also use Uncoder AI to streamline their detection engineering routine. Turn raw threat reports into actionable behavior rules, test your detection logic, map out attack flows, turn IOCs into hunting queries, or instantly translate detection code across languages backed by the power of AI and deep cybersecurity expertise behind every step.

CVE-2026-21385 Analysis

Google has recently issued its March 2026 Android Security Bulletin, addressing 129 security vulnerabilities across multiple components, including the Framework, System, and hardware-related areas such as Qualcomm drivers. Google confirmed that one of the fixed flaws, CVE-2026-21385 in a Qualcomm display and graphics component, has signals of real-world abuse. 

While Google did not provide further details about the attacks, Qualcomm described the bug in its own advisory as an integer overflow or wraparound in the Graphics subcomponent that can be exploited by a local attacker to trigger memory corruption. The vendor also notes that CVE-2026-21385 affects 235 Qualcomm chipsets, expanding exposure across device models and OEM update timelines.
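As a generic illustration of this bug class (not Qualcomm's actual code), a size computed in 32-bit arithmetic can wrap around, so a bounds check passes while the real allocation is far too small. The sketch below emulates unsigned 32-bit multiplication in Python with a mask.

```python
# Demonstrate integer wraparound: a uint32 size computation overflows,
# producing a tiny value where billions of bytes were actually needed.

MASK32 = 0xFFFFFFFF

def alloc_size_32bit(count: int, elem_size: int) -> int:
    """Emulate uint32 multiplication as 32-bit driver code might perform it."""
    return (count * elem_size) & MASK32

count, elem_size = 0x40000001, 4          # attacker-influenced inputs
print(alloc_size_32bit(count, elem_size))  # 4: wrapped, undersized buffer
print(count * elem_size)                   # 4294967300: true requirement
```

When the undersized buffer is then written with data sized by the original `count`, the result is the kind of memory corruption the advisory describes.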

Qualcomm stated it was alerted to the vulnerability on December 18 by Google’s Android Security team and notified customers on February 2. CVE-2026-21385 has also been added to CISA’s Known Exploited Vulnerabilities catalog as of March 3, 2026, requiring Federal Civilian Executive Branch agencies to apply fixes by March 24, 2026.

CVE-2026-21385 Mitigation

Fixes for CVE-2026-21385 were included in the second part of the March 2026 Android updates, delivered to devices as the 2026-03-05 security patch level. This patch level addresses over 60 vulnerabilities across Kernel and third-party components, including Arm, Imagination Technologies, MediaTek, Unisoc, and Qualcomm.

The first part of the March updates, rolling out as the 2026-03-01 security patch level, contains fixes for over 50 vulnerabilities in the Framework and System components, including critical issues that could lead to remote code execution and denial of service.

Devices running a security level of 2026-03-05 or higher contain patches for all vulnerabilities listed in the March 2026 bulletin. In enterprise environments, it is important to apply the latest security updates provided for each device model, validate patch levels across managed devices, and prioritize remediation for high-risk users where update rollout is slow or device diversity complicates coverage.
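The patch-level validation described above is a simple date comparison in practice. A minimal sketch, assuming device inventory comes from your MDM as ISO-formatted patch-level strings:

```python
# Flag managed Android devices whose security patch level predates
# 2026-03-05, the level that covers all March 2026 bulletin fixes.

from datetime import date

REQUIRED_PATCH_LEVEL = date(2026, 3, 5)

def needs_update(patch_level: str) -> bool:
    """patch_level is the ISO date string reported by the device."""
    return date.fromisoformat(patch_level) < REQUIRED_PATCH_LEVEL

fleet = {"pixel-01": "2026-03-05", "galaxy-07": "2026-02-01"}
stale = [name for name, level in fleet.items() if needs_update(level)]
print(stale)  # ['galaxy-07']
```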

FAQ

What is CVE-2026-21385 and how does it work?

CVE-2026-21385 is a high-severity vulnerability in a Qualcomm Graphics subcomponent, described as an integer overflow or wraparound that can lead to memory corruption.

When was CVE-2026-21385 first discovered?

Qualcomm states it was alerted to the vulnerability on December 18, 2025, by Google’s Android Security team. Qualcomm then notified customers on February 2, 2026, and Google addressed it in the March 2026 Android Security Bulletin.

What is the impact of CVE-2026-21385 on organizations and users?

Because CVE-2026-21385 is a memory corruption flaw and is flagged for limited, targeted exploitation, it can create a path to device compromise on unpatched Android systems. For organizations, this can translate into a higher risk of credential theft, access to corporate apps and data on the device, and follow-on intrusion activity if the compromised user has privileged access. For individual users, exploitation can mean loss of device integrity and exposure of sensitive personal or work information until the device is updated.

Can CVE-2026-21385 still affect me in 2026?

Yes. Devices that have not received the March 2026 Android Security Bulletin updates, or are running a security patch level below 2026-03-05, may remain exposed.

How can you protect from CVE-2026-21385?

Update Android devices to the latest available security release for your device model and verify the security patch level is 2026-03-05 or higher.



The post CVE-2026-21385: Google Patches Qualcomm Zero-Day Exploited in Targeted Android Attacks appeared first on SOC Prime.


UAC-0252 Attack Detection: SHADOWSNIFF and SALATSTEALER Fuel Phishing Campaigns in Ukraine

March 3, 2026, 10:46 · Daryna Olyniychuk

Since January 2026, CERT-UA has been tracking a series of intrusions attributed to UAC-0252 and built around SHADOWSNIFF and SALATSTEALER infostealers. The campaigns rely on well-crafted phishing lures, payload staging on legitimate infrastructure, and user-driven execution of disguised EXE files.

Detect UAC-0252 Attacks Covered in CERT-UA#20032

According to the Phishing Trends Q2 2025 research by Check Point, phishing remains a core tool for cybercriminals, and the impersonation of widely trusted, high-usage brands continues to rise. Against the backdrop of more coordinated and sophisticated operations aimed at critical infrastructure and government organizations, CISA published its 2025–2026 International Strategic Plan to advance global risk reduction and improve collective resilience.

Sign up for the SOC Prime Platform to proactively defend your organization against UAC-0252 attacks. Just press Explore Detections below and access a relevant detection rule stack, enriched with AI-native CTI, mapped to the MITRE ATT&CK® framework, and compatible with a wide range of SIEM, EDR, and Data Lake technologies.

Explore Detections

Security experts can also use the “CERT-UA#20032” tag based on the relevant CERT-UA alert identifier to search for the detection stack directly and track any content changes. For more rules to detect adversary-related attacks, cyber defenders can search the Threat Detection Marketplace library using the “UAC-0252” tag.

SOC Prime users can also rely on Uncoder AI to create detections from raw threat reports, document and optimize rule code, and generate Attack Flows in a couple of clicks. By leveraging threat intel from the latest CERT-UA alert, teams can easily convert IOCs into performance-optimized queries ready to hunt in the chosen SIEM or EDR environment.

IOC-to-query conversion via Uncoder based on UAC-0252 IOCs from CERT-UA
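
As a rough illustration of the IOC-to-query idea (not Uncoder AI's actual implementation), the sketch below turns hash and domain IOC lists into a single hunting-query string; the field names HashSha256 and DestinationHostname and the IN (...) syntax are assumptions chosen for readability, not a guaranteed schema in any particular SIEM:

```python
def iocs_to_query(hashes, domains):
    """Build a simple Splunk-style hunting query from IOC lists.

    HashSha256 / DestinationHostname are assumed field names; map them
    to your own data model before running the query.
    """
    clauses = []
    if hashes:
        values = ", ".join('"%s"' % h for h in hashes)
        clauses.append("HashSha256 IN (%s)" % values)
    if domains:
        values = ", ".join('"%s"' % d for d in domains)
        clauses.append("DestinationHostname IN (%s)" % values)
    return " OR ".join(clauses)
```

In practice this is exactly the step a conversion tool automates: the value of doing it upstream is that the resulting query stays performance-friendly (one set-membership clause per field) instead of one search per indicator.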

Analyzing UAC-0252 Attacks Using SHADOWSNIFF and SALATSTEALER

Since January 2026, CERT-UA has been tracking repeated phishing campaigns targeting entities in Ukraine. The email messages are crafted to impersonate central government bodies or regional administrations and typically urge recipients to update mobile apps used in widely deployed civilian and military systems.

The CERT-UA#20032 alert describes two common delivery paths. In the first, the email includes an attached archive containing an EXE file; the attacker relies on the recipient to open the archive and run the executable. In the second, the email contains a link to a legitimate website that is vulnerable to cross-site scripting (XSS); when the victim visits the page, the injected JavaScript runs in the browser and downloads an executable onto the computer. In both scenarios, CERT-UA notes that the EXE files and scripts are hosted on the legitimate GitHub service, which helps the activity blend into normal web traffic and makes basic domain blocking less effective in many environments.

During January and February 2026, CERT-UA confirmed that the activity used several malicious tools, including SHADOWSNIFF, SALATSTEALER, and DEAFTICK. 

SHADOWSNIFF was reported as being hosted on GitHub, while SALATSTEALER is commonly described as a Go-based infostealer that targets browser credentials, steals active sessions, and collects crypto-related data, operating under a Malware-as-a-Service (MaaS) model. In the same toolset, CERT-UA also reported DEAFTICK, a primitive backdoor written in Go that likely helps attackers maintain basic access on compromised hosts and support follow-on actions.


During repository analysis, CERT-UA reports discovering a program with the characteristics of a ransomware encryptor, internally named "AVANGARD ULTIMATE v6.0". The same GitHub ecosystem also contained an archive with an exploit for WinRAR (CVE-2025-8088), a path traversal issue in WinRAR for Windows that can enable arbitrary code execution via crafted archives and has been reported as exploited in the wild. This suggests the operators were not only stealing credentials but also experimenting with additional tooling that could expand impact.

Based on the investigation details and the tooling overlaps, including experiments with publicly available instruments, CERT-UA links the described activity to individuals discussed in the "PalachPro" Telegram channel, while continuing to track the campaign under UAC-0252.

MITRE ATT&CK Context

Leveraging MITRE ATT&CK offers in-depth insight into the latest UAC-0252 phishing campaigns targeting Ukrainian entities. The list below maps the relevant Sigma rules to the associated ATT&CK tactics, techniques, and sub-techniques.

  • Initial Access: Phishing: Spearphishing Attachment (T1566.001)
  • Execution: Exploitation for Client Execution (T1203); User Execution: Malicious File (T1204.002)
  • Persistence: Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder (T1547.001)
  • Defense Evasion: Masquerading: Masquerade Task or Service (T1036.004); Masquerading: Match Legitimate Resource Name or Location (T1036.005); Process Injection: Process Hollowing (T1055.012); Impair Defenses: Disable or Modify Tools (T1562.001); Hide Artifacts: Hidden Files and Directories (T1564.001); Hide Artifacts: File/Path Exclusions (T1564.012)
  • Command and Control: Application Layer Protocol: Web Protocols (T1071.001)

The post UAC-0252 Attack Detection: SHADOWSNIFF and SALATSTEALER Fuel Phishing Campaigns in Ukraine appeared first on SOC Prime.

CVE-2026-20127: Cisco SD-WAN Zero-Day Exploited Since 2023

By Daryna Olyniychuk, February 26, 2026, 08:56
CVE-2026-20127 in Cisco Catalyst SD-WAN Controller

New day, new vulnerability in the spotlight. We’re once again seeing how quickly weaponized flaws in widely deployed platforms turn into real operational risk. Coverage of maximum-severity Cisco bugs (CVE-2025-20393, CVE-2026-20045), as well as the Dell RecoverPoint zero-day CVE-2026-22769, shows that attackers are increasingly prioritizing edge-facing infrastructure that quietly controls traffic flows, identity paths, and service availability.

That story continues with CVE-2026-20127, a critical authentication bypass affecting Cisco Catalyst SD-WAN Controller (formerly vSmart) and Cisco Catalyst SD-WAN Manager (formerly vManage). Cisco Talos reports the flaw is being actively exploited and tracks the activity as UAT-8616, assessing with high confidence that a highly sophisticated threat actor has been exploiting it since at least 2023.

GreyNoise’s 2026 State of the Edge Report shows why confirmed exploitation in edge-facing network control systems demands urgent action. In H2 2025, GreyNoise observed 2.97 billion malicious sessions from 3.8 million unique source IPs targeting internet-facing infrastructure, underscoring how quickly exploitation traffic scales once attackers focus on an exposed surface.

Register for SOC Prime’s AI-Native Detection Intelligence Platform, backed by cutting-edge technologies and top cybersecurity expertise to outscale cyber threats and build a resilient cybersecurity posture. Click Explore Detections to access the comprehensive collection of SOC content for vulnerability exploit detection, filtered by the custom “CVE” tag.

Explore Detections

Detections from the dedicated rule set can be applied across multiple SIEM, EDR, and Data Lake platforms and are mapped to the latest MITRE ATT&CK® framework v18.1. Security teams can also leverage Uncoder AI to accelerate detection engineering end-to-end by generating rules directly from live threat reports, refining and validating detection logic, auto-visualizing Attack Flows, converting IOCs into custom hunting queries, and instantly translating detection code across diverse language formats.

CVE-2026-20127 Analysis

Cisco Talos describes CVE-2026-20127 as an issue that allows an unauthenticated remote attacker to bypass authentication and obtain administrative privileges on the affected system by sending crafted requests. Cisco’s public advisory ties the root cause to a peering authentication mechanism that is not working properly.

A successful exploit can let an attacker log in to a Catalyst SD-WAN Controller as an internal, high-privileged, non-root account, then use that access to reach NETCONF and manipulate SD-WAN fabric configuration. That kind of control-plane access is exactly what makes SD-WAN incidents so disruptive, as the attackers are in a position to shape how the network behaves.

Multiple government and partner advisories describe a common post-exploitation path. After exploiting CVE-2026-20127, actors have been observed adding a rogue peer and then moving toward root access and long-term persistence within SD-WAN environments. Talos adds that intelligence partners observed escalation involving a software version downgrade, exploitation of CVE-2022-20775, and then restoration back to the original version, a sequence that can complicate detection if teams only validate the “current” running version.

Because exploitation is confirmed and impacts systems used to manage connectivity across sites and clouds, CISA issued Emergency Directive 26-03 for U.S. federal civilian agencies, with an accelerated requirement to complete required actions by 5:00 PM (ET) on February 27, 2026. FedRAMP also relayed the same urgency to cloud providers supporting federal environments. 

CVE-2026-20127 Mitigation 

According to Cisco’s advisory, CVE-2026-20127 affects Cisco Catalyst SD-WAN Controller and Cisco Catalyst SD-WAN Manager regardless of device configuration, across these deployment types:

  • On-Prem Deployment
  • Cisco Hosted SD-WAN Cloud
  • Cisco Hosted SD-WAN Cloud – Cisco Managed
  • Cisco Hosted SD-WAN Cloud – FedRAMP Environment 

Cisco also notes there are no workarounds that fully address this vulnerability. The durable fix is upgrading to a patched release, with the exact fixed versions listed in Cisco’s advisory under the Fixed Software section.

Users are urged to prioritize patching as the only complete remediation and to verify that the fixes are actually in place across every in-scope Catalyst SD-WAN Controller and Manager instance.

Next, to reduce the attack surface while users patch and validate, CISA and the UK NCSC guidance emphasize restricting network exposure, placing SD-WAN control components behind firewalls, and isolating management interfaces from untrusted networks. In parallel, SD-WAN logs should be forwarded to external systems so attackers cannot easily erase local evidence.

Finally, it is better to treat this as both a patching and an investigation event. Cisco recommends auditing /var/log/auth.log for entries like “Accepted publickey for vmanage-admin” coming from unknown or unauthorized IP addresses, then comparing those source IPs against the configured System IPs listed in the Manager UI (WebUI > Devices > System IP). If users suspect compromise, Cisco advises engaging Cisco TAC and collecting the admin-tech output (for example, via request admin-tech) so it can be reviewed.
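
Cisco's auth.log check can be scripted as a minimal sketch, assuming OpenSSH-style log lines; verify the exact message format on your appliance before relying on it:

```python
import re

# Pattern for OpenSSH-style "Accepted publickey" entries; confirm the exact
# message format in your appliance's /var/log/auth.log before use.
LOGIN_RE = re.compile(
    r"Accepted publickey for vmanage-admin from (\d+\.\d+\.\d+\.\d+)"
)

def suspicious_logins(auth_log_lines, allowed_system_ips):
    """Return source IPs of vmanage-admin key logins that do not match the
    System IPs configured in the Manager UI (per Cisco's guidance)."""
    hits = []
    for line in auth_log_lines:
        m = LOGIN_RE.search(line)
        if m and m.group(1) not in allowed_system_ips:
            hits.append(m.group(1))
    return hits
```

Feed it the file contents and the System IP list from WebUI > Devices > System IP; any IP it returns warrants the compromise-assessment steps above.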

Because the reported activity can include version downgrade and unexpected reboot behavior as part of the post-compromise chain, public guidance also recommends checking the following logs for downgrade/reboot indicators:

  • /var/volatile/log/vdebug
  • /var/log/tmplog/vdebug
  • /var/volatile/log/sw_script_synccdb.log

To strengthen coverage beyond patching and mitigation steps, rely on the SOC Prime Platform to reach the world’s largest detection intelligence dataset, adopt an end-to-end pipeline that spans detection through simulation while streamlining security operations and speeding up response workflows, reduce engineering overhead, and stay ahead of emerging threats.

 

FAQ

What is CVE-2026-20127 and how does it work?

CVE-2026-20127 is a critical authentication bypass in Cisco Catalyst SD-WAN Controller and SD-WAN Manager that lets an unauthenticated attacker send crafted requests and gain administrative access due to a broken peering authentication check.

When was CVE-2026-20127 first discovered?

Cisco disclosed it in late February 2026, while Cisco Talos reports evidence that CVE-2026-20127 has already been exploited in real attacks since at least 2023.

What risks does CVE-2026-20127 pose to systems?

It can hand attackers control-plane access, enabling them to add a rogue peer, change SD-WAN fabric configuration via NETCONF, and move toward persistence and root-level control, including downgrade-and-restore activity tied to chaining with CVE-2022-20775.

Can CVE-2026-20127 still affect me in 2026?

Yes. If you have not patched, or you patched without checking for compromise, you may still be at risk.

How can you protect from CVE-2026-20127?

Upgrade to Cisco’s fixed releases, restrict exposure of SD-WAN control components, and review logs for signs of suspicious access; involve Cisco TAC if anything looks abnormal.



The post CVE-2026-20127: Cisco SD-WAN Zero-Day Exploited Since 2023 appeared first on SOC Prime.

What Is a Security Data Pipeline Platform: Key Benefits for Modern SOC

By Steven Edwards, February 24, 2026, 13:23

Security teams are drowning in telemetry: cloud logs, endpoint events, SaaS audit trails, identity signals, and network data. Yet many programs still push everything into a SIEM, hoping detections will sort it out later.

The problem is that “more data in the SIEM” doesn’t automatically translate into better detection. It often translates into chaos. Many SOCs admit they don’t even know what they’ll do with all that data once it’s ingested. The SANS 2025 Global SOC Survey reports that 42% of SOCs dump all incoming data into a SIEM without a plan for retrieval or analysis. Without upstream control over quality, structure, and routing, the SIEM becomes a dumping ground where messy inputs create messy outcomes: false positives, brittle detections, and missing context when it matters most.

That pressure shows up directly in the analyst experience. A Devo survey found that 83% of cyber defenders are overwhelmed by alert volume, false positives, and missing context, and 85% spend substantial time gathering and connecting evidence just to make alerts actionable. Even the mechanics of SIEM-based detection can work against you. Events must be collected, parsed, indexed, and stored before they’re reliably searchable and correlatable.

Cost is part of the same story. Forrester notes that “How do we reduce our SIEM ingest costs?” is one of the top inquiry questions it gets from clients. The practical answer is data pipeline management for security: route, reduce, redact, enrich, and transform logs before they hit the SIEM. Done well, this reduces spend and makes telemetry usable by enforcing consistent fields, stable schemas, and healthier pipelines so data turns into detections.

The demand pushes security teams to borrow a familiar idea from the data world. ETL stands for Extract, Transform, Load. It pulls data from multiple sources, transforms it into a consistent format, and then loads it into a target system for analytics and reporting. IBM describes ETL as a way to consolidate and prepare data, and notes that ETL is often batch-oriented and can be time-consuming when updates need to be frequent. Security increasingly needs the real-time version of this concept because a security signal loses value when it arrives late.
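
A minimal batch ETL sketch makes the three stages concrete; the field names and the in-memory "warehouse" are illustrative assumptions:

```python
def extract(sources):
    # Extract: pull raw records from every source (lists stand in for APIs/agents).
    for source in sources:
        yield from source

def transform(record):
    # Transform: map inconsistent vendor fields onto one stable schema.
    return {
        "timestamp": record.get("ts") or record.get("time"),
        "user": (record.get("user") or record.get("username") or "").lower(),
        "action": record.get("action", "unknown"),
    }

def load(records, destination):
    # Load: write the normalized batch into the target store.
    destination.extend(records)

# Run the three stages as one batch job.
warehouse = []
raw = [
    [{"ts": 1, "username": "Alice", "action": "login"}],
    [{"time": 2, "user": "BOB"}],
]
load([transform(r) for r in extract(raw)], warehouse)
```

Note the batch shape: nothing lands in the warehouse until the whole run completes, which is exactly the latency problem the streaming model addresses.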

That is why event streaming has become so relevant. Apache Kafka sees event streaming as capturing events in real time, storing streams durably, processing them in real time or later, and routing them to different destinations. In security terms, this means you can normalize and enrich telemetry before detections depend on it, monitor telemetry health so the SOC does not go blind, and route the right data to the right place for response, hunting, or retention.
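
The streaming counterpart can be sketched with a plain queue.Queue standing in for a Kafka topic; each event is normalized and routed the moment it arrives rather than in a later batch:

```python
import queue

def stream_process(events_q, routes):
    # Consume events one at a time as they arrive (a queue.Queue stands in
    # for a Kafka topic), normalize in real time, and route by event type.
    while True:
        event = events_q.get()
        if event is None:  # sentinel marking end of stream
            break
        event["type"] = event.get("type", "unknown").lower()
        routes.setdefault(event["type"], []).append(event)

topic = queue.Queue()
for e in ({"type": "Auth", "user": "alice"},
          {"type": "DNS", "query": "example.com"},
          None):
    topic.put(e)

destinations = {}
stream_process(topic, destinations)
```

The structure is the point: normalization happens before any destination sees the event, so every downstream consumer gets the same clean schema.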

This is where Security Data Pipeline Platforms (SDPP) enter the picture. An SDPP is the solution located between sources and destinations that turns raw telemetry into governed, security-ready data. It handles ingestion, normalization, enrichment, routing, tiering, and data health so downstream systems can rely on clean and consistent events instead of compensating for broken schemas and missing context.

What Is a Security Data Pipeline Platform (SDPP)?

A Security Data Pipeline Platform (SDPP) is a centralized system that ingests security telemetry from many sources, processes it in-flight, and delivers it to one or more destinations, including SIEM, XDR, SOAR, and Data Lakes. The SDPP job is to take raw security data as it arrives, shape it properly, and deliver it downstream in a form that is consistent, enriched, and ready for detection and response. The shift is subtle but important. Instead of treating log management as “collect and store,” an SDPP treats it as “collect, improve, then distribute.”

In practice, SDPPs commonly support:

  • Collection from agents, APIs, syslog, cloud streams, and message buses
  • Parsing and normalization to consistent schemas (e.g., OCSF-style concepts)
  • Enrichment with asset, identity, vulnerability, and threat intel context
  • Filtering and sampling to reduce noise and control spend
  • Routing to multiple destinations (and different formats per destination)

Unlike legacy data pipelines that mainly move data from point A to point B, an SDPP adds intelligence and governance. It treats security data as a managed capability that can be standardized, observed, and adapted as environments change. That matters as teams adopt hybrid SIEM plus Data Lake strategies, scale cloud infrastructure for detection & response, and standardize telemetry for correlation & automation.

What Are the Key Capabilities of a Security Data Pipeline?

A security data pipeline turns raw telemetry into something usable before it hits your security stack. The most effective pipelines do two things at once. They improve data quality, and they control where data goes, how long it stays, and what it looks like when it arrives.

Ingest at Scale

A modern security data pipeline must collect continuously, not occasionally. That means cloud logs, SaaS audit feeds, endpoint telemetry, identity signals, and network data, pulled via APIs, agents, and streaming transports.

Transform in Flight

In-flight transformation is where the pipeline earns its value. As data flows, fields are parsed, key attributes are extracted, and formats are normalized into stable schemas. This reduces errors from inconsistent data and keeps correlation logic portable across tools. At the same time, noise can be filtered, events sampled, and privacy or redaction rules applied in a controlled, measurable, and reversible way. The result is clean, reliable data that’s ready for detection and action as it moves through the system.
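
A minimal sketch of in-flight transformation, assuming hypothetical vendor field names: inconsistent keys are mapped onto a stable schema, and the user field is redacted to a deterministic hash so events stay correlatable without exposing the raw value:

```python
import hashlib

# Hypothetical vendor-to-schema field mapping; real pipelines manage these
# mappings per source.
FIELD_MAP = {"src": "source_ip", "SourceIp": "source_ip", "usr": "user"}

def transform_in_flight(event):
    """Normalize field names to a stable schema and redact the user field,
    keeping a deterministic hash so correlation across events still works."""
    out = {FIELD_MAP.get(k, k): v for k, v in event.items()}
    if "user" in out:
        digest = hashlib.sha256(out.pop("user").encode()).hexdigest()
        out["user_hash"] = digest[:12]
    return out
```

Because the hash is deterministic, two events from the same user still join in correlation logic, which is what makes this kind of redaction controlled and reversible at the policy level.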

Enrich With Context

Enrichment transforms daily SOC work by bringing context to the data before it reaches analysts. Instead of spending time manually gathering information, the pipeline adds identity and asset details, environment tags, vulnerability insights, and threat intelligence so events are ready for triage and correlation.

Route and Tier

Routing is where telemetry becomes truly governed. Instead of sending all data to a single destination, the pipeline applies policies to deliver the right events to SIEM, XDR, SOAR, and Data Lakes. Data is stored by value, with clear hot, warm, and cold retention paths, and can be accessed quickly when investigations require it. By handling different formats and subsets for each tool, routing keeps the pipeline organized, consistent, and fully managed across environments, turning raw streams into reliable, actionable telemetry.
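
Routing policy can be sketched as a small function; the severity threshold, the netflow rule, and the tier names are illustrative assumptions, not a recommended policy:

```python
def route(event):
    """Apply a simple routing policy: high-severity events go to the hot
    (SIEM) tier, bulky netflow telemetry to cold storage, the rest to warm."""
    if event.get("severity", 0) >= 7:
        return "siem_hot"
    if event.get("category") == "netflow":
        return "lake_cold"
    return "lake_warm"
```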

Monitor Data Health

Pipelines need their own observability. Missing data, unexpected schema changes, or sudden spikes and drops can create blind spots that may only be noticed during an incident. A strong Security Data Pipeline Platform provides observability across the system, making these issues visible early and supporting safe rerouting if a destination fails.
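
A simple health check might compare current per-source event counts against a baseline and flag silent or sharply dropped sources; the 50% drop ratio is an arbitrary example threshold:

```python
def detect_silence(counts_per_source, baseline, drop_ratio=0.5):
    """Flag sources that went quiet or fell sharply below their baseline
    event rate. Returns (source, reason) pairs for alerting."""
    alerts = []
    for source, expected in baseline.items():
        seen = counts_per_source.get(source, 0)
        if seen == 0:
            alerts.append((source, "silent"))
        elif seen < expected * drop_ratio:
            alerts.append((source, "volume_drop"))
    return alerts
```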

AI Assistance

Teams are increasingly comfortable with relevant AI assistance in pipelines, especially for repetitive tasks like parser generation when formats change, drift detection, clustering similar events, and QA. The goal is not autonomous decision-making. It is a faster, more consistent pipeline operation with human control.

Detect in Stream

Some teams are now running detections directly in the data stream, turning their pipelines into active detection layers. Tools like SOC Prime’s DetectFlow enable this by applying tens of thousands of Sigma rules to live Kafka streams using Apache Flink, tagging and enriching events in real time before they reach systems like SIEM. The goal is not to replace centralized analytics, but to prioritize critical events earlier, improve routing, and reduce mean time to detect (MTTD).
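
In-stream detection can be illustrated with rules reduced to plain predicates; this is a toy sketch of the concept, not DetectFlow's Sigma-on-Flink implementation, and both rule definitions are invented examples:

```python
# Invented example rules; a real deployment compiles Sigma rules instead.
RULES = {
    "lolbin_no_args": lambda e: e.get("process") == "rundll32.exe"
    and not e.get("args"),
    "auth_burst": lambda e: e.get("failed_logins", 0) > 5,
}

def tag_in_stream(events):
    """Evaluate every rule against each event as it flows by and attach the
    names of matching rules, so downstream routing can prioritize hits."""
    for event in events:
        event["tags"] = [name for name, rule in RULES.items() if rule(event)]
        yield event
```

Tagged events can then feed the routing policy above: matches take the hot path, everything else follows the normal tiering rules.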

What Challenges Do SDPPs Help Solve?

Security Data Pipeline Platforms exist because modern SOC pain is not only “too many logs.” It is the friction between data collection and real detection outcomes. When telemetry is late, inconsistent, expensive to store, and hard to query at scale, the SOC ends up working around the data instead of working on threats. The main challenges SDPPs help solve are the following:

  • Data arrives too late to be useful. SIEM-based detection is not instant. Events must be collected, parsed, ingested, indexed, and stored before they are reliably searchable and correlatable. In real environments, correlation can take 15+ minutes depending on ingestion and processing load. SDPPs reduce this gap by shaping telemetry in-flight so downstream systems receive cleaner, normalized events sooner, and by routing high-priority data on faster paths when needed.
  • “Store everything” breaks the budget. Event data growth makes the default approach unaffordable. Even if you can pay to ingest everything, you still end up indexing and retaining huge volumes that do not improve detection outcomes. SDPPs help teams set clear policies, so high-value security events go to real-time systems, while bulk or long-retention logs are routed to cheaper tiers with predictable rehydration during investigations.
  • Detection logic can’t keep up with log volume. Average SOCs deploy roughly 40 rules per year, while practical SIEM rule programs and performance limits often cap usable coverage in the hundreds. More telemetry lands, but detection content does not scale at the same pace. SDPPs close the gap by reducing noise, stabilizing schemas, and preparing data so each rule has a higher signal value and works more consistently across environments.
  • ETL is not enough on its own. ETL is great for extracting, transforming, and loading data for analytics and reporting, often in batch. Security needs the continuous version of that idea. Telemetry arrives as a stream, formats change frequently, and detections need consistent schemas plus health monitoring to stay reliable. SDPPs complement ETL-style workflows by providing security-specific processing for streaming logs, schema drift handling, and operational observability.
  • Threats iterate faster than your query budget. AI-driven campaigns can evolve malicious payloads in minutes, which punishes workflows that depend on slow query cycles and manual evidence stitching. SIEMs also impose practical ceilings, including hard caps like under 1,000 queries per hour, depending on platform and licensing. SDPPs help by making each query more effective through normalization and enrichment, and by reducing the need for brute-force querying via smart routing, filtering, and tagging upstream.

What Are the Benefits of a Security Data Pipeline Platform?

When security teams talk about “too much data,” they rarely mean they want less visibility. They mean the work has become inefficient. Analysts waste time stitching context together, detections break when schemas drift, and leaders end up paying for ingest that does not move risk down.

A Security Data Pipeline Platform changes the day-to-day reality by putting one layer in charge of how telemetry is prepared and where it goes. For SOC teams, that means events arrive cleaner, more consistent, and easier to investigate. For the business, it means you can scale detection and retention without turning SIEM spend and operational noise into a permanent paycheck.

Therefore, key benefits of using Security Data Pipeline Platforms include the following:

  • Less noise, more signal. By filtering low-value events, deduplicating repeats, and adding context before events reach alerting systems, the SDPP helps analysts focus on what actually matters.
  • Lower SIEM and storage spend. The pipeline controls what gets sent to expensive destinations, routing high-value events to real-time systems while pushing bulk telemetry to cheaper tiers.
  • Less manual burden and rework. Transformation and routing rules live once in the pipeline instead of being rebuilt across tools and environments.
  • Stronger governance and compliance. Centralized policies simplify privacy controls, data residency constraints, and retention rules.
  • Fewer blind spots and surprises. Silence detection and telemetry health monitoring surface missing logs, drift, and delivery failures before incidents do.

How a Security Data Pipeline Platform Can Help Your Business?

At a business level, a Security Data Pipeline Platform is about making security operations predictable. When telemetry is governed upstream, leadership gets clearer answers to three questions that usually stay messy in mature environments: what data matters, where it should live, and what it should cost to operate at scale.

One practical impact is budget planning that survives data growth. Instead of treating ingestion as an uncontrollable variable, the pipeline makes volume a managed policy. You can set targets, prove what was reduced, and preserve the context that supports detection and compliance. That predictability turns cost reduction into operational freedom rather than a risky cut.

Another impact is standardization that unlocks reuse. When normalization is done once and applied everywhere, detection content and correlation logic can be reused across environments instead of being rewritten per source or per destination. That reduces the hidden maintenance costs that slow rollouts and drain engineering time.

A third impact is flexibility without lock-in. Intelligent routing and tiering let you align data to purpose, not vendor limitations. High-priority telemetry stays hot for response, broader datasets support hunting in cheaper stores, and long-retention logs can be archived with a clear rehydration path for investigations. The pipeline keeps the data layer stable while destinations evolve.

Finally, pipelines support operational assurance. Many organizations worry more about missing telemetry than noisy telemetry because quiet failures create blind spots that surface during incidents and audits. A pipeline that monitors source health and drift makes gaps visible early and improves confidence in security reporting.

Unlocking More SDPP Value With SOC Prime DetectFlow

Security data pipelines already help you collect, shape, and route telemetry with intent. SOC Prime’s DetectFlow adds an in-stream detection layer that turns your data pipeline into a detection pipeline. It runs Sigma rules on live Kafka streams using Apache Flink, tags and enriches matching events in flight, and routes high-priority matches downstream without changing your SIEM ingestion architecture.

DetectFlow, in-stream detection layer for SDPP

This directly targets the detection coverage gap. There are 216 MITRE ATT&CK techniques and 475 sub-techniques, yet the average SOC ships ~40 rules per year, and many SIEMs start to struggle around ~500 custom rules. DetectFlow is built to run tens of thousands of Sigma rules at stream speed with sub-second MTTD versus 15+ minutes common in SIEM-first pipelines. Because it scales with your infrastructure, you avoid vendor caps, keep data in your environment, support air-gapped or cloud-connected deployments, and unlock up to 10× rule capacity on existing infrastructure.

DetectFlow vs Traditional Approach: Benefits for SOC Teams

For more details, reach out to us at sales@socprime.com or kick off your journey at socprime.com/detectflow.



The post What Is a Security Data Pipeline Platform: Key Benefits for Modern SOC appeared first on SOC Prime.

CVE-2026-22769: Critical Dell RecoverPoint Zero-Day Exploited in the Wild

By Daryna Olyniychuk, February 18, 2026, 09:15
CVE-2026-22769 Zero-Day in Dell

SOC Prime has recently covered a wave of actively exploited zero-days across major ecosystems, including Apple’s CVE-2026-20700 and Microsoft’s CVE-2026-20805, alongside a fresh Chrome zero-day case. But the avalanche of threats keeps marching into 2026. Recently, researchers from Mandiant and Google Threat Intelligence Group (GTIG) detailed the active exploitation of CVE-2026-22769, a maximum-severity hardcoded-credential vulnerability in Dell products.

The spotlight is on Dell RecoverPoint for Virtual Machines, a VMware-focused backup and disaster recovery solution that has become the target of an in-the-wild zero-day campaign attributed to suspected China-nexus activity. Tracked with a CVSS score of 10.0, CVE-2026-22769 has reportedly been exploited by the China-linked cluster UNC6201 since at least mid-2024, enabling attackers to establish access and deploy multiple malware families, including BRICKSTORM and GRIMBOLT.

SOC Prime Platform helps security teams close the gap between “a CVE was disclosed” and “we have detection intel.” Sign up now to access the world’s largest detection intelligence dataset, backed by advanced solutions to take your SOC to the next level. Click Explore Detections to reach vulnerability-focused detection content pre-filtered by the “CVE” tag. 

Explore Detections

All rules are compatible with dozens of SIEM, EDR, and Data Lake formats and mapped to MITRE ATT&CK®. Additionally, each rule is enriched with extensive metadata, including CTI references, Attack Flow visualization, triage recommendations, audit configurations, and more.

Security teams can also leverage Uncoder AI to accelerate detection engineering end-to-end by generating rules directly from live threat reports, refining and validating detection logic, converting IOCs into custom hunting queries, and instantly translating detection code across diverse language formats.

CVE-2026-22769 Analysis

In its advisory from February 17, 2026, Dell describes CVE-2026-22769 as a hardcoded-credential vulnerability in RecoverPoint for Virtual Machines prior to 6.0.3.1 HF1 and assigns it the highest severity rating. Dell warns that an unauthenticated remote attacker who knows the hardcoded credential could gain unauthorized access to the underlying operating system and even establish root-level persistence.

GTIG and Mandiant’s investigation adds the operational detail behind that impact. Security experts observed activity against the appliance’s Apache Tomcat Manager, including web requests using the admin username that resulted in the deployment of a malicious WAR file containing the SLAYSTYLE web shell. The researchers then traced this back to hard-coded default credentials for the admin user in Tomcat Manager configuration at /home/kos/tomcat9/tomcat-users.xml. Using those credentials, an attacker could authenticate to Tomcat Manager and deploy a WAR via the /manager/text/deploy endpoint, leading to command execution as root on the appliance. 
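
A defender-side sketch of this observation might scan Tomcat access logs for hits on the deploy endpoint; it assumes common-log-format lines with the client IP in the first field, so adjust parsing for your logging configuration:

```python
def flag_war_deploys(access_log_lines):
    """Flag Tomcat access-log entries hitting /manager/text/deploy, the
    endpoint reportedly used to push the malicious WAR file.

    Assumes common log format (client IP as the first whitespace-separated
    field); adapt for custom access-log patterns."""
    hits = []
    for line in access_log_lines:
        if "/manager/text/deploy" in line:
            src_ip = line.split()[0]
            hits.append((src_ip, line.strip()))
    return hits
```

Any hit from an address that is not a known administration host deserves immediate investigation, given that a successful deploy here runs as root on the appliance.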

UNC6201 is assessed to have used this foothold for lateral movement, persistence, and malware deployment, with the earliest identified exploitation dating back to mid-2024. The initial access vector was not confirmed in these cases, but GTIG notes UNC6201 is known for targeting edge appliances as an entry point.

The post-compromise tooling also evolved over time. Mandiant reports finding BRICKSTORM binaries and then observing a replacement with GRIMBOLT in September 2025. GRIMBOLT is described as a C# backdoor compiled using native ahead-of-time (AOT) compilation and packed with UPX, providing remote shell capability while using the same C2 as BRICKSTORM. The researchers note it is unclear whether the swap was a planned upgrade or a response to incident response pressure.
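Since GRIMBOLT is described as UPX-packed, a quick triage heuristic is to look for UPX markers in candidate binaries. The sketch below is a simple, easily bypassed illustration (my own, not from the report): real UPX-packed files typically carry the "UPX!" loader magic and "UPX0"/"UPX1" section names, though a repacked or modified sample may strip them.

```python
# Hypothetical triage sketch: flag files carrying common UPX packer markers.
# This is a weak heuristic only; attackers can strip or alter these strings.
def looks_upx_packed(data: bytes) -> bool:
    """Return True if the byte blob contains well-known UPX markers."""
    return b"UPX!" in data or b"UPX0" in data

# Synthetic sample bytes for illustration, not real malware content.
print(looks_upx_packed(b"MZ\x90\x00...UPX!..."))       # True
print(looks_upx_packed(b"MZ\x90\x00 plain binary"))    # False
```

Any hit should feed into proper sandboxing and signature matching rather than being treated as a verdict on its own.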

The activity did not stop at the RecoverPoint appliance. Mandiant reports that UNC6201 pushed deeper into victims’ virtualized environments by creating temporary virtual network ports on VMware ESXi servers, effectively spinning up hidden network connectivity commonly referred to as “Ghost NICs.” This technique allowed the attackers to move quietly from compromised VMs into broader internal networks and, in some cases, toward SaaS environments.
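Because "Ghost NICs" are created and torn down quickly, one way to hunt for them is to look for short-lived port groups in ESXi shell history. The sketch below is an illustrative heuristic (not from the report): the `esxcli network vswitch standard portgroup add/remove` commands are real ESXi syntax, but the log format and the add-then-remove pairing logic are assumptions for demonstration.

```python
import re

# Hypothetical hunting sketch for "Ghost NIC" activity: flag port groups
# that appear in an "add" command and later in a "remove" command within
# the same shell-history window, i.e. short-lived network connectivity.
PG_RE = re.compile(
    r'esxcli network vswitch standard portgroup (add|remove) .*?-p[= ](\S+)'
)

def short_lived_portgroups(shell_history):
    """Return port-group names that were both added and removed."""
    added, flagged = set(), []
    for line in shell_history:
        m = PG_RE.search(line)
        if not m:
            continue
        action, name = m.groups()
        if action == "add":
            added.add(name)
        elif action == "remove" and name in added:
            flagged.append(name)  # created and torn down in this window
    return flagged

history = [
    "esxcli network vswitch standard portgroup add -v vSwitch0 -p tmp-net",
    "esxcli network vswitch standard portgroup remove -v vSwitch0 -p tmp-net",
]
print(short_lived_portgroups(history))  # ['tmp-net']
```

In practice this would run against collected `shell.log` data across the ESXi fleet, with known automation (backup tooling, IaC runs) allow-listed first.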

Researchers also report overlaps between UNC6201 and another China-nexus cluster tracked as UNC5221, known for exploiting Ivanti zero-days and previously linked in reporting to Silk Typhoon, though GTIG notes these clusters are not considered identical.

CVE-2026-22769 Mitigation

Dell’s remediation guidance is clear, but it requires follow-through. For the 6.x line, Dell points customers to upgrade to 6.0.3.1 HF1 or apply the vendor remediation script referenced in the advisory, and it also provides migration/upgrade paths for affected 5.3 service pack builds.
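Verifying that every appliance actually reached the fixed build is the part that tends to slip. The sketch below is a hypothetical compliance check comparing reported version strings against 6.0.3.1 HF1; the "major.minor.sp.patch" plus optional " HFn" format is an assumption for illustration, and Dell's advisory remains the authoritative source for affected 5.3 builds and their migration paths.

```python
# Hypothetical compliance check against the fixed release 6.0.3.1 HF1.
# Version-string format ("major.minor.sp.patch", optional " HFn" suffix)
# is an illustrative assumption; consult Dell's advisory for exact builds.
FIXED = ((6, 0, 3, 1), 1)  # (numeric version tuple, hotfix number)

def parse_version(text):
    """Split '6.0.3.1 HF1' into ((6, 0, 3, 1), 1); pad short versions."""
    base, _, hf = text.partition(" HF")
    nums = tuple(int(p) for p in base.split("."))
    return (nums + (0,) * (4 - len(nums)), int(hf) if hf else 0)

def is_patched(text):
    return parse_version(text) >= FIXED

print(is_patched("6.0.3.1 HF1"))  # True
print(is_patched("6.0.3.1"))      # False
print(is_patched("5.3.2.0"))      # False
```

Note that tuple comparison makes 6.0.3.1 without the hotfix fail the check, which matches the advisory's requirement of HF1 specifically; 5.x builds always fail and should be routed to Dell's migration guidance rather than treated as patchable in place.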

To strengthen coverage beyond patching, security teams can rely on the SOC Prime Platform for access to the world’s largest detection intelligence dataset and an end-to-end pipeline spanning detection through simulation, helping streamline security operations, speed up response workflows, reduce engineering overhead, and stay ahead of emerging threats.

FAQ

What is CVE-2026-22769 and how does it work?

CVE-2026-22769 is a critical hardcoded-credential vulnerability in Dell RecoverPoint for Virtual Machines. The flaw allows an unauthenticated remote attacker with knowledge of the hardcoded credential to gain unauthorized access to the underlying operating system and achieve root-level persistence.

When was CVE-2026-22769 first discovered?

Dell published its advisory on February 17, 2026, while GTIG and Mandiant report the earliest identified exploitation activity occurred in mid-2024.

What risks does CVE-2026-22769 pose to organizations?

Successful exploitation can provide remote access to the appliance and enable root-level persistence, which can support malware deployment, stealthy long-term access, and pivoting deeper into VMware and enterprise infrastructure.

Can CVE-2026-22769 still affect me in 2026?

Yes. If RecoverPoint for Virtual Machines is running a vulnerable version prior to 6.0.3.1 HF1, or an affected 5.3 build that has not been upgraded per Dell guidance, the environment can remain exposed.

How can you protect from CVE-2026-22769?

Apply Dell’s remediation immediately by upgrading to 6.0.3.1 HF1 or using the vendor’s remediation script path, then confirm version compliance across all appliances and related management surfaces.



The post CVE-2026-22769: Critical Dell RecoverPoint Zero-Day Exploited in the Wild appeared first on SOC Prime.
