EU Faces Criticism Over Surveillance Technology Exports to Rights Violators

The European Union is facing renewed criticism over its failure to stop the export of surveillance technology to governments accused of human rights violations, according to a new report released by Human Rights Watch. The report claims that despite the EU’s landmark Dual-Use Regulation introduced in 2021, EU surveillance technology tools are still reaching countries where they are allegedly used to target journalists, activists, academics, and other critical voices. The 54-page report, titled “Looking the Other Way: EU Failure to Prevent Surveillance Exports to Rights Violators,” raises concerns about weak oversight, limited transparency, and gaps in enforcement within the EU’s surveillance technology export framework.

EU Surveillance Technology Exports Continue Despite Safeguards

The report highlights that the majority of EU member states host companies involved in the development and export of surveillance technology. These tools include intrusion software and telecommunication interception systems capable of monitoring private communications and tracking individuals. According to Human Rights Watch, the growing global use of commercial spyware and related surveillance technology has become a major human rights concern. Governments in several countries have allegedly used such technologies to suppress dissent, monitor opposition voices, and restrict civic freedoms. The EU’s Dual-Use Regulation was introduced to regulate exports of technologies that could serve both civilian and military purposes. The regulation aimed to strengthen oversight of surveillance technology exports by requiring member states to assess the human rights records of destination countries before approving sales. The law also introduced transparency and reporting obligations requiring EU member states to share export licensing data with the European Commission for inclusion in annual public reports. However, Human Rights Watch argues that the implementation of these measures has fallen short of their intended purpose.

Human Rights Watch Flags Weak Oversight and Transparency

A major focus of the report is the EU’s 2024 implementation guidelines for the Dual-Use Regulation. Human Rights Watch claims the guidelines weakened transparency requirements and limited public access to meaningful information about surveillance technology exports. The organization said the reporting system currently does not provide enough detail to determine whether exports are contributing to human rights abuses. To investigate further, Human Rights Watch submitted freedom of information requests to all 27 EU member states seeking data on surveillance technology licensing and exports. The findings revealed several examples of exports to countries with documented records of surveillance-related rights violations. Among the cases highlighted were exports of surveillance tools from Bulgaria to Azerbaijan in 2022 and telecommunication interception systems exported from Poland to Rwanda in 2023. The report states that these exports included technologies capable of intercepting communications and conducting intrusive digital surveillance. Human Rights Watch also criticized both EU institutions and member states for frequently citing trade secrets, national security, and international relations as reasons for withholding export information from public scrutiny.

Concerns Over Surveillance Technology and Human Rights

The report argues that surveillance technology can directly threaten several fundamental rights, including privacy, freedom of expression, freedom of assembly, and in some cases even the right to life and protection from torture. Human Rights Watch said journalists, activists, humanitarian workers, and anti-corruption investigators are among those most vulnerable to misuse of surveillance tools. The organization warned that digital surveillance can expose confidential sources, restrict independent reporting, and create risks to personal safety. According to the report, the EU remains one of the largest hubs for commercial surveillance technology companies globally. A 2024 report by Google’s Threat Analysis Group reportedly found that nearly all major commercial surveillance companies mentioned in its research were based in the EU.

European Commission Faces Pressure Ahead of 2026 Review

The European Commission is expected to begin a formal evaluation of the Dual-Use Regulation in September 2026. Human Rights Watch is urging the commission, the European Parliament, and EU member states to strengthen the rules governing surveillance technology exports during the review process. The organization is calling for stricter human rights due diligence requirements, stronger export controls, and greater transparency in reporting. It also wants surveillance companies to conduct more detailed assessments of whether their products could be used to facilitate rights abuses. In response to questions raised in the report, the European Commission stated that licensing decisions for dual-use exports are handled by individual EU member states. The commission also defended certain reporting limitations, saying that detailed disclosures could reveal commercially sensitive information or identify companies involved in exports. Still, Human Rights Watch argues that the current framework is failing to provide effective oversight. Zach Campbell, senior surveillance researcher at Human Rights Watch, said the EU needs “real transparency” to ensure that the regulation works as intended and prevents European surveillance technology from enabling abuse worldwide.

Microsoft May 2026 Patch Tuesday Fixes 120 Vulnerabilities, No Zero-Day Exploits Reported

Microsoft has rolled out its May 2026 Patch Tuesday security updates, delivering fixes for approximately 120 vulnerabilities across Windows, Microsoft Office, networking services, and enterprise platforms. Unlike several recent monthly releases, this update contains no publicly disclosed or actively exploited zero-day vulnerabilities, making it a comparatively calm cycle for IT and security teams. Even without emergency-level exploits, the Microsoft May 2026 Patch Tuesday release remains significant due to the large number of critical flaws addressed. The company confirmed that the update resolves 17 critical vulnerabilities, including 14 remote code execution (RCE) flaws, two elevation-of-privilege issues, and one information disclosure vulnerability.

Microsoft May 2026 Patch Tuesday: Vulnerabilities That Demand Attention 

One of the most important areas covered in the May 2026 Patch Tuesday update involves multiple vulnerabilities affecting Microsoft Office applications, particularly Word and Excel.  According to Microsoft, attackers could exploit these flaws by tricking users into opening malicious files. Several of the vulnerabilities can also be triggered through the preview pane, allowing remote code execution without fully opening the attachment.  Because Office documents remain a common attack vector in phishing campaigns, security professionals are strongly recommending that organizations prioritize deployment of these updates, especially in environments where employees regularly receive external attachments. 

Windows GDI Flaw Allows Exploitation Through Microsoft Paint 

Among the noteworthy issues patched during Microsoft’s May 2026 Patch Tuesday rollout is CVE-2026-35421, a Windows GDI remote code execution vulnerability.  The flaw can be exploited through a malicious Enhanced Metafile (EMF) image opened in Microsoft Paint. Successful exploitation could allow attackers to execute arbitrary code on the victim’s machine.  Although the attack requires user interaction, researchers warned that image-based attacks are often effective because users may not recognize specially crafted files as dangerous. 

SharePoint and DNS Vulnerabilities Raise Enterprise Security Concerns 

Another major vulnerability addressed in the May 2026 Patch Tuesday release is CVE-2026-40365, a remote code execution flaw affecting Microsoft SharePoint Server.  Microsoft stated that an authenticated attacker could use the vulnerability to launch a network-based attack capable of remotely executing code on vulnerable SharePoint systems. Since SharePoint environments often store sensitive internal data and business documents, the flaw is expected to receive close attention from enterprise administrators.  The company also patched CVE-2026-41096, a serious Windows DNS Client remote code execution vulnerability. The flaw involves improper handling of specially crafted DNS responses sent by attacker-controlled DNS servers.  The issue stems from a heap-based buffer overflow condition in Windows NetLogon functionality. A successful attack could corrupt system memory and allow remote code execution without requiring authentication. 

Dynamics 365 Vulnerability Carries Near-Maximum Severity Score 

Another critical issue fixed during the Microsoft May 2026 Patch Tuesday cycle is CVE-2026-42898, a remote code execution vulnerability affecting on-premises versions of Microsoft Dynamics 365.  The flaw received a CVSS severity score of 9.9 and requires no user interaction for exploitation. Researchers warned that attacks targeting Dynamics 365 environments could have widespread consequences because the platform frequently connects with multiple enterprise systems and sensitive databases.  Previous attacks involving Dynamics infrastructure have exposed privileged business information, making this vulnerability especially concerning for large organizations.

Windows 11 Cumulative Updates Introduce New Features 

As part of the May 2026 Patch Tuesday rollout, Microsoft released Windows 11 cumulative updates KB5089549 and KB5087420 for versions 25H2, 24H2, and 23H2.  The updates are mandatory because they contain the latest security fixes and stability improvements.  After installation: 
  • Windows 11 25H2 updates to build 26200.8457  
  • Windows 11 24H2 updates to build 26100.8457  
  • Windows 11 23H2 updates to build 22631.7079  
Microsoft confirmed that versions 25H2 and 24H2 share the same underlying update structure, meaning users receive identical fixes and improvements across both versions. 

Xbox-Inspired Desktop Experience Added to Windows 11 

One of the more noticeable additions included in the Microsoft May 2026 Patch Tuesday update is a new Xbox-style desktop experience for PCs.  The feature is designed to provide a console-like interface on Windows devices. Alongside the visual changes, Microsoft also introduced reliability improvements for the taskbar and enhancements to Windows Hello authentication.  The update improves both Windows Hello Face recognition reliability and the persistence of fingerprint authentication across system upgrades. 

File Explorer Receives Major Improvements 

File Explorer received several updates in the latest May 2026 Patch Tuesday release.  Microsoft expanded archive support to include formats such as: 
  • uu  
  • cpio  
  • xar  
  • NuGet Packages (nupkg)  
The company also improved how File Explorer preserves View and Sort preferences in folders like Downloads and Documents when applications directly launch those locations.  Additionally, Microsoft fixed a white flash issue that sometimes appeared in dark mode while opening “This PC” or resizing the Details pane.  Explorer.exe reliability was also enhanced to reduce crashes and improve overall responsiveness. 

Input, Voice Typing, and Haptic Feedback Enhancements 

The Microsoft May 2026 Patch Tuesday updates introduced several improvements to input and accessibility features.  Compatible devices can now provide haptic feedback during actions such as snapping windows or aligning PowerPoint objects. Current supported hardware includes: 
  • Surface Slim Pen 2  
  • ASUS Pen 3.0  
  • MSI Pen 2  
Microsoft added that support for additional peripherals, including select mouse devices, could arrive in future hardware updates.  Voice typing on the touch keyboard also received a redesign. The updated interface removes the previous full-screen overlay and displays animations directly on the dictation key to reduce distractions. In addition, Microsoft introduced the Arabic 101 Legacy keyboard layout for users who prefer the earlier Arabic keyboard configuration.

Storage, Printing, and Performance Updates Included 

Several broader system improvements were bundled into the May 2026 Patch Tuesday release.  Microsoft increased the FAT32 formatting limit through the command line from 32GB to 2TB. The update also improves storage settings performance when viewing large disk volumes. Additional changes include: 
  • Reduced memory usage in Delivery Optimization  
  • Improved audio driver compatibility with midisrv.exe  
  • Better taskbar system tray reliability  
  • Enhanced startup application performance  
  • Improved monitor color profile persistence  
  • Simplified kiosk mode app configuration  
The update also introduces a new icon identifying printers that support Windows Protected Print Mode. 

Microsoft Introduces More Secure Batch File Processing 

Microsoft added a new security-focused feature aimed at administrators and enterprise policy managers. The May 2026 Patch Tuesday update introduces a secure processing mode for batch files and Command Prompt scripts. When enabled, the feature prevents batch files from being modified during execution. Administrators can activate the setting at the following registry location:
  • Registry Key: HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor
  • Value Name: LockBatchFilesWhenInUse
The feature can also be enabled through Application Control for Business policies.
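For illustration, the registry setting described above could be applied with a .reg file along these lines. This is a sketch based only on the key and value name reported here; the DWORD data type and a value of 1 meaning "enabled" are assumptions that should be verified against Microsoft's documentation before deployment:

```ini
Windows Registry Editor Version 5.00

; Hypothetical sketch: enables the secure batch-file processing mode.
; The REG_DWORD type and the value 1 (= enabled) are assumed, not
; confirmed by the article.
[HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor]
"LockBatchFilesWhenInUse"=dword:00000001
```

In managed environments, the same value would more typically be pushed through Group Policy preferences or, as the article notes, Application Control for Business policies rather than hand-distributed .reg files.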

OpenAI Introduces AI Security Platform as Cyber Defense Race Heats Up

OpenAI has officially entered the AI cybersecurity race with the launch of OpenAI Daybreak, a new initiative focused on helping security teams identify, validate, and fix software vulnerabilities faster using artificial intelligence. Announced through the company’s LinkedIn post, OpenAI described Daybreak as its vision for “a new era of cyber defense,” where AI systems can assist defenders across secure code reviews, vulnerability analysis, remediation, and threat investigation workflows. The launch reflects a growing industry trend in which AI companies are positioning advanced language models as cybersecurity tools capable of reducing the time between vulnerability discovery and remediation. While AI-generated coding tools have often raised concerns around insecure code generation, companies are now increasingly focusing on using AI defensively to strengthen software security practices. According to OpenAI, AI models are already changing how security teams operate by enabling them to reason across large codebases, identify subtle vulnerabilities, validate fixes, and analyze unfamiliar systems more efficiently. However, the company also acknowledged that advanced AI cybersecurity capabilities require “trust, verification, safeguards, and accountability,” particularly as AI systems become more capable of handling sensitive defensive workflows.

What Is OpenAI Daybreak?

At the center of the announcement is OpenAI Daybreak, a cybersecurity-focused platform powered by GPT-5.5 and Codex, OpenAI’s coding-focused agentic system. OpenAI said the platform is designed to help organizations move from vulnerability discovery to remediation faster while improving visibility into the entire security workflow. The system combines AI reasoning with coding automation to support several defensive security functions, including:
  • Secure code reviews
  • Threat modeling
  • Patch validation
  • Malware analysis
  • Dependency risk analysis
  • Remediation guidance
  • Vulnerability triage
  • Detection engineering
One of the more notable capabilities highlighted by OpenAI is the platform’s ability to generate and test patches directly within repositories. According to the company, these workflows operate under monitored and controlled access models while also producing audit-ready reports that help security teams verify remediation activity. The emphasis on auditability suggests OpenAI is attempting to address one of the biggest concerns surrounding AI in cybersecurity: the need for accountability and human oversight in automated decision-making.

OpenAI Introduces Tiered Cybersecurity Access

OpenAI is rolling out Daybreak through three different access levels depending on the sensitivity and complexity of cybersecurity operations. The first layer uses GPT-5.5 for broader security assistance and general workflows. The second tier, GPT-5.5 with Trusted Access for Cyber, is aimed at defensive cybersecurity tasks such as secure code review, malware analysis, vulnerability triage, detection engineering, and patch validation. The highest tier is powered by GPT-5.5-Cyber, which OpenAI says is intended for specialised and authorised workflows including penetration testing, red teaming, and controlled validation exercises. The structured access model indicates OpenAI is taking a cautious approach toward releasing advanced cyber capabilities, especially as concerns grow around dual-use AI systems that can potentially be misused by threat actors.

AI Cybersecurity Competition Continues to Grow

The launch of OpenAI Daybreak also comes at a time when AI companies are increasingly competing to establish themselves in cybersecurity operations. Recently, Anthropic introduced Claude Mythos, a cybersecurity-focused AI system that the company claimed could identify software vulnerabilities at a scale beyond what human experts can typically achieve. However, Anthropic stated that Claude Mythos would not be released publicly due to risks associated with its advanced cyber capabilities. That contrast highlights a broader debate currently shaping the AI cybersecurity sector. While companies see AI as a major force multiplier for defenders, there are ongoing concerns about how powerful cyber-focused AI models should be deployed, monitored, and restricted. For OpenAI, Daybreak appears to position the company toward enterprise-controlled and monitored security environments rather than open public access.

AI’s Role in Cyber Defense Is Expanding

The launch of OpenAI Daybreak reflects how rapidly AI is becoming embedded into cybersecurity workflows. Security teams are increasingly under pressure to manage growing attack surfaces, software complexity, and faster-moving threats, making automation and AI-assisted analysis more attractive. At the same time, the rollout of advanced cyber-focused AI systems is likely to intensify discussions around governance, oversight, and responsible deployment. With companies like OpenAI and Anthropic now building specialised cybersecurity AI platforms, the next phase of cyber defense may increasingly depend on how effectively organizations balance AI-driven speed with security safeguards and human verification.

Europe Warned Against AI Skills Gap as Experts Outline Possible 2040 Futures

A new outlook from the European Labour Authority and the European Commission’s Directorate-General for Employment, Social Affairs and Inclusion has highlighted how Europe’s approach to AI skills development could shape the future of work by 2040. The report presents several possible futures driven by artificial intelligence adoption, ranging from economic growth and new career opportunities to rising inequality, job insecurity and weakened worker protections. At the centre of all scenarios is one common factor: whether governments, employers and institutions invest early in workforce skills development. According to the findings, AI could create a future where learning becomes more accessible, career growth becomes flexible and workers are better equipped to adapt to changing industries. However, the report also warns that without strong investment in AI skills development, Europe risks widening the gap between workers who can adapt to new technologies and those left behind.

AI-Powered Workplace Could Deepen Inequality 

One of the scenarios described in the report imagines a future where artificial intelligence transforms workplaces so rapidly that many jobs become unrecognisable. In this version of 2040, governments and employers fail to provide adequate workforce training, leaving employees responsible for adapting on their own. The report notes that workers with strong digital and technical skills are likely to benefit the most in such an environment. Meanwhile, employees without access to learning opportunities could struggle to remain employable as automation reshapes industries. The consequences go beyond employment challenges. The report points to growing financial pressure, declining physical and mental wellbeing, and increased social inequality as possible outcomes of an AI transition that does not include inclusive skills development policies.

Another scenario paints an even more severe picture of the future. In this case, AI technologies and automation dominate nearly every aspect of work and daily life. A small number of powerful organisations control much of the AI ecosystem, influencing policymaking, economic systems and broader social structures. Under this model, companies rely heavily on automation while reducing investment in employee development. Workers across industries lose jobs as AI systems take over tasks previously performed by humans. The report also warns that weak regulation and limited government oversight could leave workers with little protection. Trade unions, according to the scenario, lose influence in defending labour rights and fair working conditions. The concentration of power among major AI players could also threaten democratic systems while creating environmental concerns linked to large-scale AI infrastructure and energy use.

Slow AI Adoption May Still Create a ‘Missed Opportunity’ for Europe

The report also explores a more moderate future in which AI adoption progresses gradually rather than aggressively. While this path appears less disruptive, researchers argue that it could still create long-term problems if Europe fails to prioritise AI skills development. In this “missed opportunity” scenario, the slower pace of AI adoption prevents businesses and workers from fully benefiting from innovation. The report suggests that Europe could lose out on productivity gains, new products and emerging industries if organisations hesitate to adopt AI technologies at scale. For workers, the impact could mean fewer opportunities to move into creative and high-value roles often associated with AI-driven industries. Instead, advanced tasks and innovation-related jobs may remain concentrated among a small group of highly skilled professionals, while much of the workforce continues performing repetitive or lower-value work. Employers may avoid the disruption linked to rapid automation, but they could also fall behind in global competitiveness due to limited innovation and slower operational improvements.

AI Skills Development Seen as Central to Europe’s AI Future

Despite outlining several concerning futures, the report emphasises that these outcomes are not inevitable. Instead, it argues that coordinated action between governments, businesses, educational institutions and workers can help create a more balanced and inclusive AI economy. The European Labour Authority stresses that ongoing workforce skills development will play a central role in determining whether AI benefits society broadly or primarily advantages a small section of the population. The report calls for greater collaboration in promoting lifelong learning, digital education and accessible training programmes that help workers adapt to evolving technologies. It also highlights the importance of policies that support fair AI adoption while protecting workers’ rights and ensuring technological progress contributes to long-term economic and social stability. As Europe continues shaping its AI strategy, the findings serve as a reminder that the future of work may depend less on the technology itself and more on how societies prepare people to work alongside it.

National Technology Day 2026: India’s AI Growth Puts Security in Focus

As India marks National Technology Day, industry leaders say the country’s technology ambitions are now closely tied to cybersecurity, AI infrastructure, and digital resilience. With businesses rapidly adopting artificial intelligence, cloud platforms, and connected systems, experts believe the next phase of growth will depend on how securely and responsibly these technologies are deployed. Across industries, organisations are moving beyond experimental AI projects and integrating intelligent systems directly into operations, customer engagement, healthcare, infrastructure, and enterprise decision-making. At the same time, cybersecurity leaders are warning that the rise of AI-driven environments is also creating faster and more sophisticated cyber threats.

National Technology Day 2026 Reflects India’s AI-First Push

According to Ritesh Kapadia, Field Chief Technology Officer, iLink Digital, technology discussions are increasingly centred around how AI systems behave and interact within organisations rather than just the tools themselves. Kapadia said AI is evolving from passive software into active systems capable of analysing context, triggering actions, and supporting enterprise decisions. He noted that organisations are gradually building “AI-first enterprises” where intelligence becomes part of daily workflows instead of operating as a separate technology layer. "Technology conversations today are becoming less focused on tools and more focused on behaviour. AI systems are evolving from passive platforms into active collaborators that can analyse context, trigger actions and support enterprise decision making. This shift is laying the foundation for AI first enterprises, where intelligence is embedded into everyday operations, workflows and business decisions rather than functioning as a separate layer of technology." He added that enterprises are focusing on connected systems that can respond intelligently while maintaining governance and operational clarity. The growing use of AI across enterprise environments is also increasing cybersecurity concerns. Security teams are now dealing with automated attacks, deepfakes, AI-assisted vulnerability discovery, and identity-based threats that can move at machine speed.

Cybersecurity, Core Part of Digital Transformation

Sunil Sharma, Managing Director & VP – Sales (India & SAARC) at Sophos, said National Technology Day 2026 is a reminder that innovation and cybersecurity must grow together. According to Sharma, organisations can no longer depend only on traditional or reactive security models. Businesses are now being pushed toward continuous threat monitoring and real-time response frameworks as attackers use AI to scale operations faster than before. He also highlighted identity security as a major challenge for enterprises managing cloud systems, remote access environments, and interconnected digital ecosystems. “The threat landscape is evolving rapidly,” Sharma said, pointing to deepfakes, automated attacks, and AI-driven vulnerability discovery as some of the biggest emerging concerns. Industry leaders believe cyber resilience is becoming equally important as digital transformation, especially as Indian enterprises continue accelerating cloud adoption and AI integration.

AI Infrastructure and Data Centres Gain Importance

Technology executives also stressed the importance of building infrastructure capable of supporting India’s growing AI ecosystem. AS Prasad, Vice President, Product Management, Vertiv, said the future of AI will depend heavily on infrastructure decisions being made today, particularly around power systems, cooling technologies, and data centre architecture. "The next decade of AI will be won in the infrastructure layer, in the power systems, the cooling architecture, and the data center design decisions being made right now." Prasad noted that AI workloads require scalable and reliable infrastructure to operate efficiently at enterprise and national levels. That view was echoed by Narendra Sen, Founder & CEO, RackBank & NeevCloud, who described data centres as critical to India’s digital future. Sen said India’s policy initiatives, including the IndiaAI Mission and data localisation efforts, are creating momentum for sovereign AI infrastructure and homegrown cloud ecosystems. He added that infrastructure readiness will determine how effectively India can scale AI adoption across industries and government systems.

Responsible AI Adoption Expands Across Industries

The life sciences sector is also witnessing increased AI adoption as companies look to improve operational efficiency and decision-making. Duraisamy Rajan Palani (Durai), Founder and CEO of Archimedis Digital, said AI is helping accelerate innovation in drug discovery, clinical trials, and patient engagement. However, he noted that as AI systems move beyond automation and begin supporting expert-level decisions, accuracy, accountability, and regulatory compliance become increasingly important. Industry experts say responsible AI adoption will remain a key focus area as organisations balance innovation with governance requirements. Meanwhile, Vikram Prabakar highlighted how technology is also being used to address sustainability and inclusion challenges. He said AI-powered waste traceability and digital recycling platforms are helping improve transparency and efficiency while supporting India’s broader sustainability goals.

India’s Technology Growth Also Depends on Skilled Talent

While India continues to invest heavily in AI infrastructure and digital transformation, experts say the shortage of specialised talent remains a growing challenge. Milind Shah, Managing Director, Randstad Digital India, said demand for professionals skilled in AI, cybersecurity, cloud computing, and digital infrastructure is increasing rapidly. He added that many of these specialised roles have emerged only recently, making workforce development a critical priority for businesses, academic institutions, and policymakers. "India is on track to become one of the world’s largest digital infrastructure markets within this decade, supported by sustained investments, policy momentum, and accelerating demand. What now requires equal emphasis is the depth, quality, and readiness of the talent pipeline. AI, cloud, and advanced digital infrastructure rely on highly skilled engineers, architects, and operators capable of managing complex, rapidly evolving environments. Many of these roles have emerged only recently, making workforce readiness a strategic priority rather than a secondary consideration. Addressing this gap will require coordinated action across industry, academia, and policy frameworks to build both scale and specialisation." As National Technology Day 2026 highlights India’s progress in AI and digital innovation, industry leaders say long-term success will depend on building secure infrastructure, strengthening cyber resilience, and preparing a workforce capable of managing increasingly complex technology environments.

California Hits General Motors With Record $12.75 Million CCPA Privacy Settlement

California Attorney General Rob Bonta and a coalition of state and local enforcement agencies have announced a $12.75 million settlement with General Motors over allegations that the automaker illegally collected and sold drivers’ personal data without proper consent, in violation of the California Consumer Privacy Act (CCPA). The California privacy settlement marks the largest CCPA penalty in California history so far and represents the state’s first enforcement action focused on data minimization requirements under California privacy law. The case centers on allegations that General Motors shared sensitive driver information, including geolocation data and driving behavior, with data brokers Verisk Analytics and LexisNexis Risk Solutions between 2020 and 2024.

California Privacy Settlement Targets Driver Data Sales

According to the complaint, GM collected data through its OnStar connected vehicle platform, which offers emergency assistance, navigation, and crash response services. Investigators alleged that the company sold names, contact details, precise location information, and driving behavior data of hundreds of thousands of Californians to the two data brokers. Authorities said the data was intended to help create driver-risk scoring products that could be used by insurance companies when setting premiums. The investigation was conducted jointly by the California Department of Justice, the California Privacy Protection Agency (CalPrivacy), and district attorneys from San Francisco, Los Angeles, Napa, and Sonoma counties. Attorney General Rob Bonta said the settlement sends a clear message about consumer control over personal data. “General Motors sold the data of California drivers without their knowledge or consent,” Bonta said in the announcement, adding that the data could reveal sensitive details about consumers’ daily routines and movements.

CCPA Violations and Data Minimization Concerns

A major part of the case focused on alleged violations of the CCPA’s data minimization and purpose limitation requirements, which were added to California law in 2023. Under these provisions, companies are required to collect and retain only the data necessary for a disclosed purpose. Investigators alleged that GM retained driving and location data long after it was needed to operate OnStar services and later sold that retained data to third parties. Authorities also alleged that GM failed to clearly inform consumers about how their information would be used. The complaint stated that GM’s privacy policies suggested driver data would only be used to provide requested OnStar services and even claimed the company did not sell driving or location information. Investigators said the company’s practices contradicted those statements. San Francisco District Attorney Brooke Jenkins described modern vehicles as “rolling data collection machines” and said consumers deserve transparency about what information is collected and how it is shared. Los Angeles County District Attorney Nathan J. Hochman said companies handling consumer data would be held accountable under California privacy laws, regardless of their size.

Connected Vehicle Privacy Under Scrutiny

The settlement follows growing regulatory scrutiny around connected vehicle privacy and automotive data collection practices. In 2023, CalPrivacy launched investigations into connected car manufacturers and their handling of consumer information. Public attention increased further in 2024 after a report by The New York Times highlighted how automakers were sharing driving behavior data with insurance companies. The reporting indicated that some consumers outside California had experienced increased insurance premiums tied to such data-sharing practices. California investigators later determined that California drivers were likely not directly affected through insurance rate increases because state insurance laws prohibit insurers from using driving behavior data to set premiums. However, regulators maintained that the collection, retention, and sale of the data itself violated California privacy requirements.

Settlement Terms for General Motors

Under the proposed California privacy settlement, General Motors must implement several privacy-related measures over the coming years. The company will be required to:
  • Pay $12.75 million in civil penalties.
  • Stop selling driving data to consumer reporting agencies for five years.
  • Delete retained driving data within 180 days unless consumers provide express consent for limited uses.
  • Request the deletion of driver data already shared with LexisNexis and Verisk.
  • Establish and maintain a comprehensive privacy compliance program.
  • Submit privacy assessments and compliance reports to California regulators and prosecutors.
The settlement also reinforces California’s broader push to strengthen consumer control over personal information under the CCPA. CalPrivacy Executive Director Tom Kemp said California privacy laws require businesses to collect only the information they genuinely need and to be transparent about how that data is handled. Alongside the settlement announcement, regulators also highlighted the state’s Delete Request and Opt-out Platform (DROP), which allows Californians to submit requests to delete personal information held by hundreds of registered data brokers.

The Cyber Express Weekly Roundup: EU AI Act Updates, Malware Expansion, Critical Vulnerabilities, and Rising Cybercrime Trends

weekly roundup

In this weekly roundup from The Cyber Express, the global cybersecurity landscape continues to show rapid and uneven change, shaped by both regulatory shifts and escalating cyber threats. Governments are tightening oversight of new technologies such as artificial intelligence, while threat actors are simultaneously refining their techniques to exploit businesses, infrastructure, and end users across multiple platforms. This edition brings together some of the most important cybersecurity developments of the week, ranging from significant amendments to the European Union’s AI Act to the expansion of malware campaigns into macOS environments and the discovery of a critical vulnerability in widely used enterprise firewall software. It also covers a major sentencing in a global ransomware case and a fresh warning from the FBI about the growing scale of cyber-enabled cargo theft targeting logistics and supply chain organizations.

The Cyber Express Weekly Roundup 

EU Updates AI Act with Simpler Rules and New AI Content Bans 

In a significant regulatory update, the European Union has agreed to revise parts of the EU AI Act. The updated framework aims to simplify compliance requirements for businesses while simultaneously introducing stricter restrictions on harmful AI-generated content. Read more...

ClickFix Malware Campaign Expands to macOS 

Another key development is the expansion of the ClickFix malware campaign beyond Windows systems. Security researchers at Microsoft have confirmed that the operation is now targeting macOS users using deceptive troubleshooting content. Read more... 

Critical PAN-OS Vulnerability Enables Remote Code Execution 

A critical security flaw has been identified in Palo Alto Networks’ PAN-OS firewall software. Tracked as CVE-2026-0300, the vulnerability carries a CVSS score of 9.3, indicating severe risk. The flaw stems from a buffer overflow in the User-ID Authentication Portal. Read more...

Latvian Cybercriminal Sentenced in Global Ransomware Case 

Latvian national Deniss Zolotarjovs has been sentenced to 102 months in prison for his role in a large-scale ransomware operation. According to the U.S. Department of Justice, the group operated under multiple ransomware brands, including Conti, Royal, Akira, and Karakurt. Between 2021 and 2023, the organization carried out attacks against more than 54 companies worldwide, using data theft and encryption-based extortion tactics to pressure victims into paying ransom demands. Read more... 

FBI Warns of Rising Cyber-Enabled Cargo Theft 

The FBI has issued an alert regarding a sharp rise in cyber-enabled cargo theft. Criminal actors are using impersonation techniques to pose as legitimate logistics providers, allowing them to intercept and redirect freight shipments. The agency noted that logistics, shipping, and insurance companies have been targeted since at least 2024. Read more... 

Weekly Takeaway 

This week’s The Cyber Express weekly roundup highlights the growing convergence of regulatory change, advanced malware threats, critical infrastructure vulnerabilities, ransomware enforcement actions, and supply chain fraud. As the global cybersecurity landscape continues to evolve, organizations across all sectors remain under increasing pressure to strengthen defenses and adapt to emerging risks. 

Fake Moustache Trick Raises Questions Over UK Online Safety Act Age Checks

Online Safety Act

The rollout of the UK’s Online Safety Act in July 2025 was intended to create a safer digital environment for children through stricter age verification rules, tighter moderation standards, and stronger protections against harmful online content. However, early evidence suggests that many of the safeguards introduced under the legislation can still be bypassed with surprisingly simple tactics, including a fake moustache drawn with makeup.  Recent findings have raised concerns among parents, researchers, and digital safety experts about the effectiveness of current age verification systems. While the Online Safety Act has led to some improvements in children’s online experiences, critics argue that enforcement remains inconsistent and that many platforms are still vulnerable to manipulation.  One of the most widely discussed examples involved a 12-year-old boy who reportedly used an eyebrow pencil to create a fake moustache before facing a facial age estimation check. According to the report, the altered appearance convinced the system that he was 15 years old, allowing him to bypass restrictions designed for younger users. The incident has become a symbol of broader concerns about the reliability of AI-driven age-verification technologies. 

Online Safety Act Faces Early Challenges 

The Online Safety Act was introduced to strengthen online child protection measures by requiring platforms to implement stricter checks and reduce children’s exposure to harmful material. The legislation also aimed to improve reporting tools and create safer digital spaces for younger users. Despite those goals, the report suggests that loopholes remain widespread. Children have reportedly been bypassing protections through several methods, including entering false birthdates, borrowing adult credentials, sharing accounts, and using VPN services. More advanced attempts have also involved spoofing the facial recognition systems used in age verification processes. Survey data cited in the findings revealed that nearly half of children believe current age verification systems are easy to evade. Around one-third admitted to bypassing these systems in recent months. The fake moustache example particularly highlighted weaknesses in facial age estimation tools that rely heavily on visual indicators rather than stronger forms of identity confirmation. Experts argue that systems based primarily on appearance can be vulnerable to minor cosmetic changes, lighting adjustments, or camera manipulation.

Mixed Results Following Online Safety Act Rollout 

Although concerns over age verification remain significant, the report noted that the Online Safety Act has produced some positive outcomes. Approximately half of the surveyed children said they were now seeing more age-appropriate content online. In addition, around 40% of both children and parents stated that the internet feels somewhat safer since the legislation came into effect.  Many children also appeared supportive of increased online protections. The findings showed that younger users generally approved of stricter platform rules, reduced interaction with strangers, and limitations placed on high-risk platform features.  Around 90% of children who noticed stronger moderation systems and improved reporting tools viewed those changes positively. Researchers said this indicates that many younger users are willing to engage with safer digital environments when protections are implemented effectively.  Still, the improvements have not been universal. Within just one month of new child protection codes being introduced under the Online Safety Act, nearly half of the children surveyed reported encountering harmful content online. This included violent material, hate speech, and body image-related content, all categories the legislation specifically aims to regulate. 

Privacy Concerns Grow Around Age Verification 

The expansion of age verification requirements has also triggered growing concerns over privacy and data security. More than half of the children surveyed said they had been asked to verify their age within a recent two-month period. These checks were reportedly common across major platforms, including TikTok, YouTube, Google services, and Roblox.  Many platforms now rely on technologies such as facial age estimation, government-issued identification checks, and third-party age assurance providers to comply with the Online Safety Act. While users generally described the systems as easy to complete, concerns remain about how sensitive data is collected, stored, and potentially reused.  Parents expressed unease about whether biometric information and identity documents submitted during age verification could later be retained by companies or accessed by government agencies. Those concerns have intensified calls for more centralized and privacy-focused verification systems instead of fragmented checks spread across multiple online services.  Experts argue that current approaches may not strike the right balance between child safety and personal privacy. They warn that if the weaknesses exposed by tactics like the fake moustache incident are not addressed, public trust in these systems could continue to decline. 

Dirty Frag Linux Vulnerability Exposes Major Distributions to Root Access Attacks

Dirty Frag

A newly disclosed local privilege escalation (LPE) vulnerability known as Dirty Frag is raising serious concerns across the Linux ecosystem after researchers revealed that the flaw can grant root access on most major Linux distributions. The vulnerability, which currently remains unpatched, has been described as a successor to the previously disclosed Copy Fail flaw tracked as CVE-2026-31431. Security researcher Hyunwoo Kim, also known online as @v4bel, publicly disclosed the issue after what he described as a breakdown in the coordinated disclosure and embargo process. The vulnerability was initially reported to Linux kernel maintainers on April 30, 2026, but no official fixes or CVE identifiers had been assigned at the time of disclosure. According to Kim, Dirty Frag is not a single bug but a vulnerability class capable of achieving root privileges across many Linux distributions by chaining together two separate flaws: the xfrm-ESP Page-Cache Write vulnerability and the RxRPC Page-Cache Write vulnerability. Kim explained in his technical write-up: “Dirty Frag is a vulnerability (class) that achieves root privileges on most Linux distributions by chaining the xfrm-ESP Page-Cache Write vulnerability and the RxRPC Page-Cache Write vulnerability.” He further noted that Dirty Frag extends the same bug class associated with Dirty Pipe and Copy Fail (CVE-2026-31431). Unlike race-condition-based attacks, Dirty Frag operates through a deterministic logic flaw, making exploitation more reliable. “Because it is a deterministic logic bug that does not depend on a timing window, no race condition is required, the kernel does not panic when the exploit fails, and the success rate is very high.”

Dirty Frag Targets Multiple Linux Distributions 

The new LPE vulnerability affects a broad range of Linux distributions, including Ubuntu 24.04.4, RHEL 10.1, openSUSE Tumbleweed, CentOS Stream 10, AlmaLinux 10, and Fedora 44. Researchers warned that successful exploitation allows an unprivileged local user to escalate privileges and gain full root access.  In a public disclosure sent to the oss-security mailing list on May 8, 2026, Kim described Dirty Frag as a “universal Linux LPE” capable of compromising all major Linux distributions.  The disclosure stated:  “This is a report on ‘Dirty Frag’, a universal LPE that allows obtaining root privileges on all major distributions.”  Kim also emphasized that the impact closely resembles Copy Fail, or CVE-2026-31431, which has already been observed under active exploitation in the wild. 

How Dirty Frag Works 

The first component of Dirty Frag, the xfrm-ESP Page-Cache Write vulnerability, originates from the IPSec (xfrm) subsystem. Researchers said it provides attackers with a four-byte store primitive similar to CVE-2026-31431 and allows overwriting small portions of the kernel page cache.  However, exploitation through the xfrm-ESP path requires an unprivileged user to create a namespace. Ubuntu blocks this behavior through AppArmor restrictions, limiting the effectiveness of that exploit path on Ubuntu-based Linux distributions.  To bypass that limitation, Dirty Frag chains a second flaw: the RxRPC Page-Cache Write vulnerability.  Kim explained:  “RxRPC Page-Cache Write does not require the privilege to create a namespace, but the rxrpc.ko module itself is not included in most distributions.”  He added that while RHEL 10.1 does not ship the rxrpc.ko module by default, Ubuntu systems load it automatically. By combining both vulnerabilities, attackers can adapt exploitation techniques depending on the target environment.  “Chaining the two variants makes the blind spots cover each other. In an environment where user namespace creation is allowed, the ESP exploit runs first. Conversely, on Ubuntu, where user namespace creation is blocked but rxrpc.ko is built, the RxRPC exploit works.” 
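Following Kim's description of how the two paths cover each other's blind spots, administrators can get a rough sense of which path is likely reachable on a given host. The script below is a heuristic sketch, not an official detection tool: it only probes the two preconditions discussed above (unprivileged user namespace creation for the xfrm-ESP path, and availability of the rxrpc module for the RxRPC path), and the probing commands may behave differently across distributions.

```shell
#!/bin/sh
# Heuristic sketch (not an official detection tool): report which Dirty Frag
# path, if either, the host's configuration suggests is reachable.

# dirty_frag_path NS_ALLOWED RXRPC_AVAILABLE -> name of the applicable path.
# Mirrors Kim's description: the ESP exploit runs where user namespace
# creation is allowed; otherwise the RxRPC path applies where rxrpc.ko exists.
dirty_frag_path() {
    if [ "$1" = "yes" ]; then
        echo "xfrm-ESP"
    elif [ "$2" = "yes" ]; then
        echo "RxRPC"
    else
        echo "neither"
    fi
}

# Probe the live system; these tools may be absent on minimal hosts,
# in which case the corresponding flag conservatively stays "no".
ns_ok=no
unshare --user --map-root-user true 2>/dev/null && ns_ok=yes

rxrpc_ok=no
{ lsmod 2>/dev/null | grep -q '^rxrpc' || modinfo rxrpc >/dev/null 2>&1; } \
    && rxrpc_ok=yes

echo "likely exploit path: $(dirty_frag_path "$ns_ok" "$rxrpc_ok")"
```

A "neither" result does not prove the host is safe; it only means the two publicly described preconditions were not observed with these simple probes.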

Links to Older Linux Kernel Vulnerabilities 

Researchers traced the xfrm-ESP vulnerability back to a Linux kernel source code commit made in January 2017. Interestingly, the same commit was also identified as the root cause of another serious Linux kernel issue, CVE-2022-27666, a buffer overflow vulnerability with a CVSS score of 7.8 that affected multiple Linux distributions.  The RxRPC Page-Cache Write vulnerability, meanwhile, was reportedly introduced in June 2023.  Security firm CloudLinx stated in an advisory that the flaw exists in the “ESP-in-UDP MSG_SPLICE_PAGES no-COW fast path” and is reachable through the XFRM user netlink interface.  AlmaLinux also released a technical analysis explaining how the issue impacts kernel memory handling:  “The bug lives in the in-place decryption fast paths of esp4, esp6, and rxrpc: when a socket buffer carries paged fragments that are not privately owned by the kernel, the receive path decrypts directly over those externally-backed pages.”  According to the advisory, this behavior can expose or corrupt plaintext data while an unprivileged process still maintains a reference to the affected pages. 

Public PoC Increases Risk for Linux Distributions 

The threat level surrounding Dirty Frag has intensified due to the public release of a fully working proof-of-concept exploit. Researchers warned that the exploit can grant root access using a single command, significantly lowering the barrier for attackers.  Until official patches become available, administrators are urged to disable the affected modules manually. The recommended mitigation command is: 
sudo sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true" 
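Since the one-liner both writes the blacklist file and unloads the modules, a quick follow-up check can confirm it took effect. The snippet below is a rough sketch reusing the /etc/modprobe.d/dirtyfrag.conf path and module names from the command above; it only reports, and never fails the shell it runs in.

```shell
#!/bin/sh
# check_blacklist FILE MODULE... -> success only if FILE maps every listed
# module to /bin/false (i.e. the mitigation file is complete).
check_blacklist() {
    _file=$1; shift
    for _mod in "$@"; do
        grep -q "^install $_mod /bin/false" "$_file" 2>/dev/null || return 1
    done
}

# Verify the blacklist written by the mitigation command above.
if check_blacklist /etc/modprobe.d/dirtyfrag.conf esp4 esp6 rxrpc; then
    echo "blacklist file complete"
else
    echo "blacklist file missing or incomplete"
fi

# After the rmmod step, none of the modules should still be resident.
for mod in esp4 esp6 rxrpc; do
    lsmod 2>/dev/null | grep -q "^$mod " && echo "still loaded: $mod"
done
: # always exit 0; this script only reports
```

Note that blacklisting via modprobe.d prevents future automatic loading but does not unload an already-resident module, which is why the original command also runs rmmod.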
Security experts also warned that Dirty Frag differs from CVE-2026-31431 in one important respect. Unlike Copy Fail, Dirty Frag can still be exploited even if the Linux kernel’s algif_aead module has been disabled. Kim stated: “Note that Dirty Frag can be triggered regardless of whether the algif_aead module is available.” He further cautioned: “In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag.” With no patches currently available and exploit code already circulating publicly, the newly disclosed Dirty Frag LPE vulnerability presents a significant risk to Linux distributions worldwide.

Europe Moves to Tighten AI Rules While Easing Compliance Burden

EU AI Act

The European Union has reached a provisional agreement to amend parts of the EU AI Act, introducing simplification measures for businesses while also expanding restrictions on harmful AI applications, including so-called “nudifier” apps and AI-generated child sexual abuse material. The agreement, reached early Thursday by negotiators from the European Parliament and the Council, forms part of the EU’s broader “digital omnibus” package aimed at refining the implementation of the bloc’s landmark AI legislation. The updated proposal seeks to reduce compliance burdens and legal uncertainty for AI providers while maintaining the AI Act’s core risk-based framework. Lawmakers said the changes are designed to make the rules more practical without weakening safeguards tied to safety, privacy, and fundamental rights.

EU AI Act Deadlines Pushed to Reduce Legal Uncertainty

One of the biggest changes under the proposed amendments is the postponement of several obligations linked to high-risk AI systems. Under the revised timeline, rules for AI systems classified as high-risk due to their use cases will now apply from 2 December 2027. These systems include AI deployed in biometric identification, critical infrastructure, education, employment, law enforcement, and border management. Meanwhile, AI systems used as safety components under sector-specific EU product safety laws will face compliance obligations from 2 August 2028. The agreement also delays watermarking obligations for AI-generated content until 2 December 2026. The European Commission had earlier proposed a February 2027 implementation date. Watermarking tools are intended to help identify and trace AI-generated images, audio, and video content. Lawmakers said the postponements are necessary to ensure technical standards and implementation guidance are fully in place before the rules become enforceable.

EU Bans Nudifier Apps and AI-Generated Abuse Content

A major part of the agreement focuses on tightening restrictions around harmful AI-generated sexual content. Negotiators agreed to ban AI systems designed to create child sexual abuse material or generate explicit deepfake content involving identifiable individuals without consent. The restriction covers images, video, and audio content. The EU AI Act ban specifically applies to companies placing such AI systems on the EU market, providers failing to include reasonable safeguards against misuse, and users deploying the systems to create illegal or non-consensual explicit material. The decision directly targets “nudifier” apps, which use AI to digitally remove clothing or generate fake explicit imagery of individuals. Companies operating such systems will have until 2 December 2026 to comply with the new requirements. Michael McNamara, co-rapporteur for the Civil Liberties, Justice and Home Affairs committee, said the agreement strengthens the EU’s ability to act against AI systems that threaten human dignity and fundamental rights. “I’m pleased that this morning we reached an agreement on the AI Omnibus,” McNamara said. “Alongside simplification measures, we are banning nudification apps, a key part of the Parliament’s mandate, and, of course, the creation of child sexual abuse material using AI systems.”

Simplification Measures for AI Providers and SMEs

The amendments also introduce several simplification measures intended to reduce overlapping compliance requirements for companies developing AI technologies. Under the new framework, machinery products with AI features will no longer need to comply separately with both the EU AI Act and sector-specific safety laws if existing safety rules already provide equivalent protection. Lawmakers also narrowed the definition of “safety component” within the EU AI Act. This means AI functions designed only to assist users or improve product performance will not automatically be classified as high-risk unless their failure creates health or safety risks. Another change allows companies to process personal data where strictly necessary to detect and correct bias in AI systems, provided appropriate safeguards are in place. The agreement further extends certain exemptions previously available only to small and medium-sized enterprises (SMEs) to small mid-cap companies. EU officials said the move is intended to help startups and growing technology firms scale AI innovation more easily within Europe. Arba Kokalari, co-rapporteur for the Internal Market and Consumer Protection committee, said the revised rules strike a balance between innovation and regulation. “With this agreement, we show that politics can move just as quickly as technology,” Kokalari said. “We now make the AI rules more workable in practice, remove overlaps and pause the high-risk requirements.”

Next Steps for the EU AI Act Amendments

The provisional agreement still requires formal approval from both the European Parliament and the Council before it can become law. EU lawmakers are aiming to finalize adoption before 2 August 2026, which marks the scheduled start date for existing high-risk AI system rules under the original AI Act framework. The negotiations are part of the EU’s continuing effort to shape global standards around artificial intelligence governance while addressing concerns related to safety, transparency, and misuse of generative AI technologies.

Global Instructure Breach Hits Queensland Schools Through QLearn Platform

QLearn Cybersecurity Incident

A major QLearn cybersecurity incident has affected thousands of educational institutions globally, including Queensland state schools and universities, after a cyber breach involving third-party education technology provider Instructure exposed personal information linked to students and staff. Queensland Education Minister John-Paul Langbroek confirmed the incident in an official statement, saying the Queensland Department of Education was briefed about the international cybersecurity breach involving Instructure, the provider behind the Department’s online learning platform, QLearn. According to early assessments, the breach may affect more than 200 million people and over 9,000 institutions worldwide, making it one of the largest education-sector cybersecurity incidents disclosed this year.

QLearn Cybersecurity Incident Impacts Queensland Schools

The Department of Education said students and staff who have worked or studied at Education Queensland schools since 2020 may have been affected by the QLearn cybersecurity incident. Authorities stated that compromised information currently appears limited to names, email addresses, and school locations. Officials added there is currently no evidence that passwords, dates of birth, or financial information were accessed during the breach. The online learning platform QLearn was introduced in Queensland schools in 2020 under the previous government and has since become a widely used digital education system across the state. Minister Langbroek said school principals have already begun contacting affected families and teachers to notify them about the breach and provide further guidance. “This morning I have been briefed by the Department of Education about an international cybersecurity breach involving a third-party provider, Instructure, which delivers the Department’s online learning platform, QLearn,” Langbroek said in the statement.

Instructure Data Breach Raises Concerns Across Education Sector

The QLearn cybersecurity incident has once again highlighted the growing cybersecurity risks facing the global education sector, particularly as schools and universities continue relying heavily on third-party digital learning platforms. Because the breach involves Instructure, a provider serving institutions across multiple countries, the incident extends far beyond Queensland. Authorities indicated that educational institutions across Australia and overseas are also impacted. While officials stressed that no sensitive financial or authentication data has been identified as compromised so far, cybersecurity experts often warn that exposed personal information such as names and email addresses can still be valuable to cybercriminals. Threat actors frequently use this type of information in phishing campaigns, identity-based scams, and social engineering attacks targeting students, parents, and school employees. The Department of Education has not publicly disclosed how the cybersecurity breach occurred or whether any ransomware or unauthorized network access was involved. Investigations into the incident are ongoing.

Queensland Department Prioritizes Support for Vulnerable Families

In response to the QLearn cybersecurity incident, the Queensland Department of Education said it is prioritizing support for vulnerable individuals and families potentially affected by the breach. According to the Minister’s statement, the Department is providing priority assistance to families and teachers with known family and domestic violence concerns, as well as individuals connected to Child Safety services. The additional support measures appear aimed at reducing potential risks associated with the exposure of school-related location information and contact details. Government agencies increasingly recognize that cybersecurity incidents affecting education systems can carry broader safety implications, especially for vulnerable groups whose personal or location-related information may require additional protection.

Global Education Sector Continues Facing Cybersecurity Threats

The QLearn cybersecurity incident adds to a growing list of cyberattacks and data breaches targeting educational institutions worldwide. Schools, universities, and online learning providers have become frequent targets due to the large amount of personal information they manage and the widespread use of interconnected digital platforms. Education systems often rely on multiple third-party vendors for online learning, communications, and student management services, increasing the potential attack surface for cybercriminals. The Queensland Department of Education said it will continue updating the public as more information becomes available from the ongoing investigation into the breach. At this stage, authorities have not advised affected individuals to reset passwords or take additional security measures, though officials are continuing to assess the full scope and impact of the incident. The investigation into the Instructure-related breach remains active as educational institutions worldwide work to determine the extent of the exposure and any potential long-term cybersecurity implications.

Operation Epic Fury Exposes Critical OT Security Gaps in U.S. Oil and Gas Sector

Operation Epic Fury

The cybersecurity posture of the U.S. oil and gas sector has come under renewed scrutiny following Operation Epic Fury, with a new independent survey revealing a disconnect between operator confidence and actual operational technology (OT) security capabilities. While companies across the upstream and midstream energy segments have accelerated cybersecurity investments since the February 28 launch of Operation Epic Fury, the findings suggest many organizations may still lack the tools needed to identify real-time cyber threats targeting OT environments. The independent survey, conducted on behalf of Tosi, examined the views of OT decision makers across U.S. oil and gas operators. The research found that most respondents believe they can detect an active OT cyber breach within 24 hours. However, the same OT decision makers acknowledged relying heavily on systems and processes not specifically designed to monitor OT infrastructure. According to the survey data, 87 percent of operators rated themselves as confident in their ability to detect an OT breach within a day, assigning their organizations a score of four or five on a five-point confidence scale. Despite that confidence, 51 percent said their detection capabilities primarily depend on IT security tools that provide only limited visibility into OT-specific network traffic. Another 27 percent of respondents said they would depend on field operators or technicians identifying irregularities manually, while only 16 percent reported using continuous OT monitoring as the primary basis for cyber threat detection. Sakari Suhonen, CEO of Tosi U.S., warned that this gap represents a major vulnerability for the energy sector in the wake of Operation Epic Fury. “This is the most consequential blind spot in U.S. energy infrastructure right now,” Suhonen said. “The sector has the budget, the executive attention, and the will to act. What it does not yet have is detection that actually sees OT. After Operation Epic Fury, that distinction is the difference between catching an intrusion in hours and finding out about it from a production outage.”

Operation Epic Fury Drives Rapid OT Security Spending 

The independent survey was fielded in April 2026, approximately six weeks after Operation Epic Fury began. Researchers noted that the speed of the sector’s response has been unusually aggressive compared to previous cybersecurity cycles.

One of the clearest trends identified by OT decision makers involved changing perceptions of cyber risk. Sixty-three percent of surveyed operators said cyber risk is now higher than it was before February 28, with 13 percent describing the increase as significant.

Respondents identified several key factors contributing to elevated risk levels, including growing convergence between IT and OT systems, increased targeting of energy infrastructure by state-sponsored cyber actors, and expanding dependence on third-party remote access technologies.

The independent survey also showed that emergency cybersecurity funding is already being deployed. Ninety-four percent of operators said they had either approved or were actively reviewing unplanned OT security spending linked directly to the post-Operation Epic Fury threat landscape. Among OT decision makers surveyed, 95 percent expect OT cybersecurity budgets to increase over the next 12 months, while one in four anticipated budget growth exceeding 20 percent.

OT Decision Makers Prioritize Detection and Visibility 

The survey findings indicate that OT decision makers are placing greater emphasis on visibility and detection capabilities rather than traditional perimeter security tools.

When respondents were asked to identify the single most important OT security capability to improve over the next year, 22 percent selected continuous monitoring and anomaly detection. Another 20 percent pointed to OT-specific incident detection and response solutions.

Additional priorities included asset discovery at 15 percent and OT-specific secure remote access at 14 percent. Combined, detection, visibility, and remote access technologies accounted for 71 percent of all named priorities among surveyed OT decision makers.

At the same time, operational disruptions linked to cybersecurity incidents appear widespread throughout the sector. According to the independent survey, 99 out of 100 operators reported experiencing at least one category of cyber incident since February 28.

Ransomware affecting OT-connected systems impacted 48 percent of operators surveyed, while another 48 percent reported precautionary OT shutdowns triggered by incidents originating on the IT side of operations.

Human Challenges Continue to Slow OT Security Progress 

Despite the increase in cybersecurity spending following Operation Epic Fury, many organizations continue to struggle with internal operational barriers. The independent survey found that 45 percent of operators consider the cultural divide between IT and OT teams to be the single largest obstacle preventing faster cybersecurity improvements. Respondents said IT security personnel often lack the specialized expertise required to secure OT environments effectively.

Operational risk aversion ranked as the second-largest barrier at 28 percent. By contrast, only 11 percent of respondents identified budget constraints as a major challenge, marking a notable change from previous industry research in which financial limitations consistently ranked as the top concern for OT decision makers.

The findings emerge amid continuing warnings from federal authorities regarding Iran-aligned cyber activity targeting Western critical infrastructure after Operation Epic Fury. On April 7, six U.S. federal agencies — including the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the Department of Energy — issued joint advisory AA26-097A. The advisory confirmed that Iranian-affiliated threat actors were actively disrupting programmable logic controllers across U.S. energy, water, and government sectors, resulting in operational disruptions and financial losses.

The Railroad Commission of Texas later issued a parallel warning to operators on April 10. According to Tosi, the independent survey represents the first dataset quantifying how the oil and gas sector itself is responding to the cybersecurity environment created by Operation Epic Fury. Suhonen said the industry’s next decisions regarding OT security investments will determine whether organizations close existing detection gaps or reinforce systems that remain ineffective for OT environments.
“The next twelve months will see oil and gas spend more on OT security than in the previous several years combined,” Suhonen said. “That spend will land in one of two places. It will close the detection gap with OT-native monitoring, asset visibility, and purpose-built secure remote access. Or it will deepen the IT-tool stack that operators have already told us they cannot see what they need it to see. The data is unambiguous about which path the market needs to take.” 

Salesforce Marketing Cloud Vulnerabilities Expose Cross-Tenant Subscriber Data Risks

Salesforce AMPScript

A recently disclosed set of vulnerabilities in Salesforce Marketing Cloud, widely known as SFMC, has drawn attention to the security risks tied to centralized marketing infrastructure.

The flaws, which affected components tied to AMPScript, CloudPages, and email-rendering workflows, could have enabled attackers to access subscriber information, enumerate marketing emails, and potentially affect organizations across multiple tenants.

Security researchers found that weaknesses in SFMC’s templating engine and cryptographic implementation introduced opportunities for unauthorized data access across customer environments.

AMPScript and SFMC Template Injection Risks 

Modern enterprises rely heavily on Salesforce Marketing Cloud to manage large-scale marketing campaigns, personalized customer journeys, and trackable email communications. The platform, formerly known as ExactTarget, supports dynamic content generation through technologies such as AMPScript, Server-Side JavaScript (SSJS), and internal data views connected to large subscriber databases.

While these features provide flexibility for marketers, researchers noted that they also increase the impact of any underlying vulnerability. One of the major concerns centered on SFMC’s server-side templating framework.

AMPScript and SSJS allow organizations to dynamically insert subscriber attributes such as names, email addresses, and engagement metrics directly into marketing content. However, functions like TreatAsContent introduced a dangerous behavior because they effectively evaluate user-controlled input as executable template code. Researchers explained that if attacker-controlled data was passed into these functions, it could trigger template injection inside Salesforce Marketing Cloud environments.

The issue became more severe because SFMC historically supported AMPScript execution within email subject lines. According to the findings, legacy behavior caused subject templates to be evaluated twice by default. That design opened the door for payload execution during the second rendering stage. Researchers demonstrated the risk using the following payload inside a name field:

%%=RowCount(LookupRows("_Subscribers","SubscriberKey",_subscriberkey))=%%

If processed during the second evaluation phase, the payload could execute successfully and create a reliable injection point inside the marketing workflow.

Once template execution was achieved, attackers could potentially use built-in SFMC functions such as LookupRows to query internal Data Views, including:
  • _Subscribers  
  • _Sent  
  • _Job  
  • _SMSMessageTracking  
  • _Click  
Access to these views could expose subscriber lists, email delivery records, engagement metrics, and message history associated with affected Salesforce Marketing Cloud tenants. 
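The double-evaluation flaw described above can be illustrated with a toy template engine. The sketch below is a simplified Python stand-in, not the actual AMPScript renderer; the `lookup()` call is a hypothetical placeholder for data-view functions such as LookupRows. A single rendering pass leaves an attacker-supplied name inert, but a second pass executes it:

```python
import re

# Subscriber data: the attacker has set their own "firstname" field
# to a template expression rather than a plain name.
DATA = {
    "firstname": "%%=lookup('_Subscribers')=%%",
    "rowcount": "2481",
}

def render(template: str) -> str:
    """One pass of a toy %%=...=%% template engine."""
    def evaluate(match: re.Match) -> str:
        expr = match.group(1).strip()
        if expr.startswith("lookup("):
            return DATA["rowcount"]      # pretend data-view query
        return DATA.get(expr, "")
    return re.sub(r"%%=(.*?)=%%", evaluate, template)

subject = "Hello %%=firstname=%%!"

once = render(subject)    # attacker input inserted as inert text
twice = render(once)      # legacy behavior: subject rendered again

print(once)   # Hello %%=lookup('_Subscribers')=%%!
print(twice)  # Hello 2481!
```

The second pass treats the substituted value as template code, which is why evaluating subject lines twice turned a personalization feature into a reliable injection point.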

CloudPages and “View Email in Browser” Vulnerability

Researchers identified an even more serious vulnerability tied to SFMC’s “view email in browser” functionality and CloudPages infrastructure. Many Salesforce customers configure branded domains such as view.example.com or pages.example.com that route back to shared SFMC infrastructure. These links typically rely on an encrypted qs parameter containing tenant and message-specific information. According to researchers from Searchlight Cyber, the older “classic” qs implementation used unauthenticated CBC encryption. The researchers found that the implementation behaved as a padding oracle, which made it possible to decrypt and re-encrypt query string parameters under certain conditions. Initially, the researchers abused the weakness using the Padre tool before later improving the process through the AMPScript MicrositeURL function.

This allowed them to forge valid qs values and access workflows such as “Forward to a Friend,” which could resolve subscriber identifiers into actual email addresses.

One of the most concerning aspects of the vulnerability was SFMC’s use of a single static encryption key shared across tenants. Researchers stated that once the cryptographic structure was understood, attackers could theoretically enumerate subscribers and access email content across multiple organizations using the same mechanism.
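To show why an unauthenticated CBC padding oracle is so damaging, the sketch below mounts the classic attack end to end. It is a self-contained Python illustration under stated assumptions: a toy XOR-based block cipher stands in for the real cipher (the attack itself never touches the key, only the oracle's valid/invalid answer), and the query-string contents are invented:

```python
import hashlib

BLOCK = 16
KEY = hashlib.sha256(b"toy-shared-key").digest()[:BLOCK]

def _ecb(block: bytes) -> bytes:
    # Toy block cipher (XOR with the key) standing in for AES.
    # XOR is its own inverse, so this is both E() and D().
    return bytes(a ^ b for a, b in zip(block, KEY))

def pad(data: bytes) -> bytes:
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def cbc_encrypt(pt: bytes, iv: bytes) -> bytes:
    out, prev = b"", iv
    pt = pad(pt)
    for i in range(0, len(pt), BLOCK):
        prev = _ecb(bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev)))
        out += prev
    return out

def padding_oracle(iv: bytes, ct: bytes) -> bool:
    # The server-side signal an attacker can observe: does the
    # decrypted query string have valid padding or not?
    prev, pt = iv, b""
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        pt += bytes(a ^ b for a, b in zip(_ecb(blk), prev))
        prev = blk
    try:
        unpad(pt)
        return True
    except ValueError:
        return False

def attack_block(prev: bytes, blk: bytes) -> bytes:
    # Recover one plaintext block using only the oracle's yes/no answer.
    inter = bytearray(BLOCK)            # D(blk), learned byte by byte
    for pos in range(BLOCK - 1, -1, -1):
        want = BLOCK - pos              # padding value we try to force
        for guess in range(256):
            forged = bytearray(BLOCK)
            forged[pos] = guess
            for j in range(pos + 1, BLOCK):
                forged[j] = inter[j] ^ want
            if not padding_oracle(bytes(forged), blk):
                continue
            if pos == BLOCK - 1:        # rule out accidental \x02\x02-style hits
                forged[pos - 1] ^= 0xFF
                if not padding_oracle(bytes(forged), blk):
                    continue
            inter[pos] = guess ^ want
            break
    return bytes(a ^ b for a, b in zip(inter, prev))

# The attacker sees only the IV, the ciphertext, and the oracle -- never KEY.
iv = bytes(range(BLOCK))
ct = cbc_encrypt(b"SubscriberID=778321&JobID=1041", iv)
chain = [iv] + [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
recovered = b"".join(
    attack_block(chain[i], chain[i + 1]) for i in range(len(chain) - 1)
)
print(unpad(recovered))  # b'SubscriberID=778321&JobID=1041'
```

The attacker recovers the plaintext one byte at a time with at most 256 oracle queries per byte, which is why treating "padding valid?" as an observable signal breaks unauthenticated CBC regardless of the underlying cipher.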

Legacy Encryption Weaknesses Expanded the Attack Surface 

The researchers also uncovered an older URL format that relied on per-parameter “encryption.” However, the mechanism reportedly consisted of a repeating static XOR key combined with a checksum. Although the scheme was considered legacy functionality, researchers found that it still worked on modern SFMC tenants. Because the implementation lacked strong cryptographic protections, attackers could decrypt and enumerate parameters such as JobID and ListSubscriber at high speed without relying on the slower padding-oracle technique.

The findings highlighted how legacy systems inside large cloud platforms can continue to create security exposure long after newer protections are introduced.
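A repeating static XOR key is even weaker than the padding-oracle case because it is directly invertible. The hedged Python sketch below (the key and parameter values are invented for illustration) shows why a single known plaintext is enough to recover the key stream and then enumerate other identifiers:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # A repeating-key XOR is its own inverse: applying it twice
    # with the same key returns the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

STATIC_KEY = b"legacy-key"  # hypothetical stand-in for the platform-wide key

token = xor_crypt(b"JobID=1041&ListSubscriber=778321", STATIC_KEY)

# One decrypted example is enough: XORing ciphertext with known
# plaintext yields the repeating key stream.
known_pt = b"JobID=1041&ListSubscriber=778321"
recovered_key_stream = bytes(a ^ b for a, b in zip(token, known_pt))
assert recovered_key_stream.startswith(STATIC_KEY)

# Enumeration: forging sequential IDs is as cheap as re-encoding them.
forged = xor_crypt(b"JobID=1042&ListSubscriber=778322", STATIC_KEY)
assert xor_crypt(forged, STATIC_KEY) == b"JobID=1042&ListSubscriber=778322"
```

Because no brute force or oracle queries are needed, this is the "high speed" enumeration path the researchers described.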

Impact of the Salesforce Marketing Cloud Vulnerability 

Researchers concluded that the combined vulnerabilities could have enabled attackers to: 
  • Enumerate and exfiltrate subscriber records  
  • Access sent marketing emails and engagement data  
  • Forge cross-tenant QS tokens  
  • Access emails belonging to other organizations  
  • Exploit hard-coded cryptographic material  
  • Abuse argument-injection flaws tied to the MicrositeURL function  
  • Manipulate CloudPages and other SFMC web workflows  
To address the issues, Salesforce assigned multiple CVEs covering several root causes, including insecure cryptographic implementations, hard-coded keys, and argument injection vulnerabilities affecting MicrositeURL and CloudPages components.

According to Salesforce, the vulnerabilities were reported on 16 January 2026. Mitigations were deployed between 21 January and 24 January 2026. The company stated that it had identified no confirmed malicious exploitation at the time of disclosure.

As part of the remediation process, Salesforce migrated Marketing Cloud Engagement encryption to AES-GCM, rotated encryption keys, and disabled the double evaluation behavior tied to AMPScript subject-line rendering.

The company also invalidated all legacy tracking and CloudPages links created before 21 January 2026 at 23:00 UTC. Those links expired globally on 23 January 2026 at 21:00 UTC.

CISA Launches CI Fortify to Defend Critical Infrastructure From Nation-State Cyber Threats

CI Fortify

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has launched a new initiative called “CI Fortify” aimed at helping critical infrastructure operators prepare for disruptive cyberattacks linked to geopolitical conflicts. The initiative comes amid growing concerns over nation-state cyber threats targeting operational technology (OT) systems that support essential services across the United States. The CI Fortify initiative focuses on improving critical infrastructure resilience through two key objectives: isolation and recovery. CISA said the effort is designed to help operators maintain essential operations even if adversaries compromise telecommunications networks, internet services, or industrial control systems. According to the agency, nation-state actors are no longer limiting their activities to espionage. Instead, threat groups have increasingly been pre-positioning themselves inside critical infrastructure environments to potentially disrupt or destroy systems during future geopolitical conflicts.

CI Fortify Initiative Focuses on Isolation and Recovery

Under the CI Fortify initiative, CISA is urging critical infrastructure organizations to assume that third-party communications and service providers may become unreliable during a crisis. Operators are also being asked to plan under the assumption that threat actors may already have some level of access to OT networks. Nick Andersen, Acting Director at CISA, emphasized the need for organizations to prepare for worst-case operational scenarios. “In a geopolitical crisis, the critical infrastructure organizations Americans rely on must be able to continue delivering, at a minimum, crucial services,” Andersen said. “They must be able to isolate vital systems from harm, continue operating in that isolated state, and quickly recover any systems that an adversary may successfully compromise.” The isolation strategy outlined under CI Fortify involves proactively disconnecting operational technology systems from external business networks and third-party connections. CISA said this approach is intended to prevent cyber impacts from spreading into OT environments while allowing organizations to continue delivering essential services in a degraded communications environment. The agency advised operators to identify critical customers, including military infrastructure and other lifeline services, and determine the minimum operational capabilities needed to support them during emergencies. CISA also recommended updating engineering processes and business continuity plans to support safe operations for extended periods while systems remain isolated.

Recovery Planning Central to Critical Infrastructure Resilience

Alongside isolation, the CI Fortify initiative places strong emphasis on recovery planning. CISA urged operators to maintain updated system documentation, create secure backups of critical files, and regularly practice system replacement or manual operational transitions. The agency noted that organizations should also identify communications dependencies that could complicate recovery efforts, such as licensing servers, remote vendor access, or upstream network connections. CISA encouraged operators to work closely with managed service providers, system integrators, and vendors to understand potential failure points and establish alternative recovery pathways. The initiative also highlights broader benefits of emergency planning beyond cybersecurity incidents. According to CISA, the same planning processes can help organizations maintain operations during weather-related disruptions, equipment failures, and safety emergencies. The agency said isolation planning can help cut off command-and-control access to compromised systems, while strong recovery preparation can reduce incident response costs and shorten recovery timelines.

Security Vendors and Service Providers Asked to Support CI Fortify

The CI Fortify initiative extends beyond infrastructure operators and calls on cybersecurity vendors, industrial automation suppliers, and managed service providers to support resilience planning efforts. Industrial control system vendors are being encouraged to identify barriers that could interfere with isolation and recovery procedures, including licensing restrictions and server dependency issues. Managed service providers and integrators are expected to assist organizations in engineering updates, local backup collection, and recovery documentation planning. Meanwhile, security vendors are being asked to support threat monitoring and provide intelligence if nation-state actors shift from espionage-focused activity to destructive cyber operations. CISA also requested vendors share information related to tactics that could undermine recovery or bypass isolation protections, including malicious firmware updates and vulnerabilities affecting software-based data diodes.

Volt Typhoon Cyberattacks Continue to Shape U.S. Cybersecurity Strategy

The launch of CI Fortify is closely tied to ongoing concerns surrounding the Volt Typhoon cyberattacks, which U.S. officials have linked to Chinese state-sponsored threat actors. CISA’s initiative specifically references the Volt Typhoon campaign as an example of how adversaries have attempted to establish long-term access inside U.S. critical infrastructure systems to potentially support disruptive actions during military conflicts. The Volt Typhoon operation first became public in 2023, when U.S. authorities revealed that Chinese hackers had infiltrated multiple sectors of American critical infrastructure. Former CISA Director Jen Easterly stated in 2024 that the agency had identified and removed Volt Typhoon intrusions across several sectors. She later reiterated in 2025 that efforts continued to focus on identifying and evicting Chinese cyber actors from critical infrastructure environments. Despite these operations, cybersecurity researchers and some government officials have warned that Chinese threat actors may still retain access to portions of critical infrastructure networks. Several experts have argued that nation-state groups remain deeply embedded in certain environments despite years of remediation efforts. With the CI Fortify initiative, CISA appears to be shifting focus toward operational resilience, recognizing that prevention alone may not be sufficient against sophisticated nation-state cyber threats targeting U.S. critical infrastructure.

PAN-OS Flaw CVE-2026-0300 Exposes Firewalls to Remote Code Execution

Buffer Overflow Vulnerability

A newly disclosed cybersecurity issue, tracked as CVE-2026-0300, has drawn urgent attention due to its critical severity and active exploitation. The flaw affects PAN-OS, the operating system used in Palo Alto Networks firewalls, and has been categorized as a buffer overflow vulnerability with serious implications for enterprise security environments.

The CVE-2026-0300 PAN-OS vulnerability was officially published on May 6, 2026, and updated the same day after being discovered in real-world production environments. It carries a CVSS score of 9.3, placing it firmly in the “critical” category. The issue stems from a buffer overflow vulnerability in the User-ID Authentication Portal, also known as the Captive Portal service, within PAN-OS.

This flaw allows an unauthenticated attacker to execute arbitrary code with root privileges by sending specially crafted network packets. Because the attack requires no authentication, no user interaction, and can be carried out over the network with low complexity, the exposure risk is considered extremely high.

Technical Details of the Buffer Overflow Vulnerability in PAN-OS 

The root cause of CVE-2026-0300 PAN-OS is classified under CWE-787: Out-of-bounds Write, a common but dangerous type of buffer overflow vulnerability. Attackers can exploit this flaw to overwrite memory and potentially take full control of affected systems.

The vulnerability impacts PA-Series and VM-Series firewalls when the User-ID™ Authentication Portal is enabled. Importantly, Prisma Access, Cloud NGFW, and Panorama appliances are not affected.

Security data associated with the vulnerability highlights the following:
  • Attack Vector: Network  
  • Attack Complexity: Low  
  • Privileges Required: None  
  • User Interaction: None  
  • Confidentiality, Integrity, Availability Impact: High  
Additionally, the vulnerability is automatable and has already reached the “ATTACKED” stage in exploit maturity, indicating that real-world attacks have been observed. 

Active Exploitation and Risk Factors 

Evidence shows limited exploitation of CVE-2026-0300 PAN-OS, particularly targeting systems where the User-ID Authentication Portal is exposed to untrusted networks or the public internet. Environments that allow external access to this portal face the highest level of risk. The severity is further highlighted by the CVSS vector:

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H

This translates to a scenario where attackers can remotely compromise systems without needing credentials or user involvement, leveraging the buffer overflow vulnerability to gain root-level access.
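For readers unfamiliar with the shorthand, the vector can be expanded mechanically. The Python sketch below decodes it using the base-metric abbreviations defined in the CVSS v4.0 specification:

```python
# Metric abbreviations from the CVSS v4.0 specification for the
# base metrics that appear in this vector.
NAMES = {
    "AV": ("Attack Vector", {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity", {"L": "Low", "H": "High"}),
    "AT": ("Attack Requirements", {"N": "None", "P": "Present"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction", {"N": "None", "P": "Passive", "A": "Active"}),
    "VC": ("Confidentiality Impact", {"H": "High", "L": "Low", "N": "None"}),
    "VI": ("Integrity Impact", {"H": "High", "L": "Low", "N": "None"}),
    "VA": ("Availability Impact", {"H": "High", "L": "Low", "N": "None"}),
}

def decode(vector: str) -> dict:
    _, *metrics = vector.split("/")          # drop the "CVSS:4.0" prefix
    out = {}
    for metric in metrics:
        key, value = metric.split(":")
        name, values = NAMES[key]
        out[name] = values[value]
    return out

decoded = decode("CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H")
print(decoded["Attack Vector"])        # Network
print(decoded["Privileges Required"])  # None
```

Every attack-side metric sits at its worst value (network-reachable, low complexity, no prerequisites, no privileges, no user interaction) while all three impact metrics are High, which is what drives the 9.3 score.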

Affected and Unaffected Versions 

Multiple versions of PAN-OS are impacted by CVE-2026-0300, including: 
  • PAN-OS 12.1 versions prior to 12.1.4-h5 and 12.1.7  
  • PAN-OS 11.2 versions prior to 11.2.4-h17, 11.2.7-h13, 11.2.10-h6, and 11.2.12  
  • PAN-OS 11.1 versions prior to 11.1.4-h33, 11.1.6-h32, 11.1.7-h6, 11.1.10-h25, 11.1.13-h5, and 11.1.15  
  • PAN-OS 10.2 versions prior to 10.2.7-h34, 10.2.10-h36, 10.2.13-h21, 10.2.16-h7, and 10.2.18-h6  
Patches are scheduled with estimated availability dates ranging from May 13 to May 28, 2026. Cloud NGFW and Prisma Access deployments remain unaffected. 

Mitigation and Workarounds 

While patches are being rolled out, organizations are advised to take immediate steps to reduce exposure to the buffer overflow vulnerability in PAN-OS.  Recommended mitigations include: 
  • Restricting access to the User-ID Authentication Portal to trusted internal IP addresses only  
  • Preventing any exposure of the portal to the public internet  
  • Disabling the User-ID Authentication Portal entirely if it is not required  
The risk associated with CVE-2026-0300 PAN-OS drops significantly when these best practices are implemented. Systems that already follow strict network segmentation and access control policies are at a much lower risk. 
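As an illustration of the allowlist approach, a source-IP check might look like the sketch below. This is generic Python, not a Palo Alto Networks feature, and the RFC 1918 ranges are placeholders for an organization's actual management networks:

```python
import ipaddress

# Example trusted ranges (RFC 1918 private space); substitute the
# management networks actually authorized to reach the portal.
TRUSTED_NETWORKS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def portal_access_allowed(source_ip: str) -> bool:
    """Allow portal access only from the trusted internal ranges."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in TRUSTED_NETWORKS)

print(portal_access_allowed("10.20.30.40"))   # True  (internal)
print(portal_access_allowed("203.0.113.50"))  # False (public internet)
```

In practice the same policy would be enforced in the firewall's security rules rather than in application code; the point is that the portal should never answer requests from arbitrary internet sources.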

UIDAI, NFSU Sign 5-Year Pact to Boost Cybersecurity and Digital Forensics

UIDAI and NFSU

The collaboration between the Unique Identification Authority of India and the National Forensic Sciences University marks a significant development in India's security landscape and digital forensics. In a move aimed at strengthening the country’s digital infrastructure, UIDAI and NFSU have formalized a five-year partnership to advance research, training, and operational capabilities in cybersecurity and digital forensics. 

According to an official statement, UIDAI and NFSU have established a structured collaboration designed to address emerging challenges in cybersecurity and digital forensics.

UIDAI and NFSU Join Forces on Cybersecurity and Digital Forensics

The agreement, announced on May 5 in Ahmedabad, provides a comprehensive framework to bring together expertise from both institutions. It is intended to reinforce cyber resilience across UIDAI’s systems, which form the backbone of India’s digital identity ecosystem.

The Ministry of Electronics and Information Technology highlighted that this partnership creates an umbrella structure for coordinated efforts in research, technical development, and capacity building. The initiative underscores the growing importance of cybersecurity and digital forensics as critical components of national digital infrastructure.

Six Strategic Pillars Driving UIDAI and NFSU Collaboration 

The UIDAI and NFSU partnership is structured around six key pillars, each targeting specific aspects of cybersecurity and digital forensics. These include academic and professional development, aimed at building skilled talent in the field, as well as strengthening information security and system integrity within UIDAI’s ecosystem.

Another major focus area is the development of advanced forensic infrastructure and laboratory capabilities. This will support deeper investigation and analysis of cyber incidents. Additionally, the agreement outlines provisions for technical support in cybersecurity operations, ensuring that UIDAI benefits from NFSU’s specialized expertise.

The collaboration also emphasizes joint research and technical advisory in emerging technologies. Areas such as artificial intelligence, blockchain, cryptography, and deepfake detection are expected to play a central role. The sixth pillar focuses on strategic placement and outreach, creating pathways for NFSU students to gain hands-on experience and career opportunities within UIDAI-related projects.

Strengthening India’s Digital Backbone

India’s digital identity framework, powered by UIDAI, requires continuous upgrades to counter evolving cyber threats. The UIDAI and NFSU partnership aims to address this need by integrating advanced cybersecurity and digital forensics practices into the system’s core operations. UIDAI Chief Executive Officer Vivek Chandra Verma described the agreement as a crucial step toward enhancing the security architecture of India’s digital public infrastructure. He stated that the collaboration will significantly improve forensic readiness and resilience, ensuring stronger protection against cyber risks. The signing ceremony was attended by senior officials from both institutions, including Deputy Director General Abhishek Kumar Singh and NFSU Gujarat Campus Director S. O. Junare. Their presence highlighted the institutional commitment to advancing cybersecurity and digital forensics through sustained collaboration. 

Expanding Access While Enhancing Security 

Alongside this partnership, UIDAI has also taken steps to improve accessibility to its services. Collaborations with digital platforms like MapmyIndia and Google now allow users to locate authorized Aadhaar centers more easily. These platforms provide information on available services, operating hours, and accessibility features. While these initiatives focus on user convenience, they also align with the broader objective of strengthening the integrity of India’s digital identity system. By combining improved accessibility with robust cybersecurity and digital forensics measures, UIDAI aims to maintain trust in its infrastructure.

Australia Forms Cyber Incident Review Board to Strengthen Defences After Major Breaches

Cyber Incident Review Board

Australia has announced the creation of a Cyber Incident Review Board, a move aimed at strengthening the country’s ability to respond to and learn from major cyberattacks. The initiative places Australia among a small group of jurisdictions globally that have formalised independent review mechanisms to assess significant cyber incidents and improve long-term resilience. The Cyber Incident Review Board will conduct no-fault, post-incident reviews of major cybersecurity events affecting both government and private sector organisations. Rather than assigning blame, the board’s mandate is to identify systemic gaps and generate actionable recommendations to improve how Australia prevents, detects and responds to cyber threats. Established under the Cyber Security Act 2024, the board is a central element of the government’s 2023-2030 Australian Cyber Security Strategy. The broader goal is to position Australia as one of the most cyber secure nations by the end of the decade, supported by resilient infrastructure, prepared communities and stronger industry practices. Officials said the Cyber Incident Review Board will focus on extracting lessons from incidents and translating them into practical steps that can reduce the likelihood and impact of future attacks.

Cyber Incident Review Board Brings Leaders From Cross-Sector 

The government has appointed a panel of senior cybersecurity and industry leaders to the Cyber Incident Review Board. The board will be chaired by Narelle Devine, Global Chief Information Security Officer at Telstra. Other members include Debi Ashenden of the University of New South Wales, Valeska Bloch from Allens, Jessica Burleigh of Boeing Australia, Darren Kane from NBN Co, Berin Lautenbach of Toll Group and Nathan Morelli from SA Power Networks. The group brings experience across cybersecurity operations, legal frameworks, governance, national security and critical infrastructure. Authorities said this mix is designed to ensure independent, credible advice that reflects both technical and policy realities.

Government Emphasises Learning Over Blame

Australia’s Minister for Cyber Security Tony Burke said the Cyber Incident Review Board will play a key role in ensuring continuous improvement in national cyber defence. “We know that cyber attacks are constant. This guarantees we learn from every attack and keep increasing our resilience,” Burke said in a statement. He added that the board will examine major cybersecurity incidents, develop findings and provide recommendations that can be applied across sectors. The no-fault model is intended to encourage cooperation from affected organisations, while still producing insights that can benefit the wider ecosystem.

Response Shaped by Recent High-Profile Cyberattacks

The creation of the Cyber Incident Review Board follows a series of major cyber incidents in Australia, including breaches involving health insurer Medibank and telecom provider Optus. These events exposed sensitive customer data and triggered widespread public concern, increasing pressure on the government to strengthen cybersecurity oversight. By introducing structured post-incident reviews, authorities aim to ensure that lessons from such breaches are not lost and can inform future preparedness efforts.

How Australia’s Approach Compares Globally

Australia’s Cyber Incident Review Board aligns with similar efforts internationally but includes some distinct features. The European Union has established a comparable mechanism under its Cyber Solidarity Act, tasking the EU Agency for Cybersecurity with reviewing significant cross-border incidents. However, that framework has yet to be tested in practice. In the United States, a cyber safety review board has already examined several incidents, including a high-profile breach involving Microsoft. That report pointed to avoidable security failures and called for cultural and leadership changes within the company, prompting CEO Satya Nadella to prioritise security across operations. However, earlier U.S. reviews, such as those into the Log4j vulnerability and the Lapsus$ group, were criticised for lacking focus and impact. Analysts noted that broader, less targeted reviews made it harder to drive accountability or meaningful change.

Stronger Powers to Ensure Participation

One notable difference in Australia’s model is its ability to compel organisations to provide information if they decline to participate voluntarily. This marks a shift from the U.S. approach, which relied on cooperation from affected entities. Experts have argued that such powers could improve the depth and accuracy of findings, ensuring that the Cyber Incident Review Board has access to critical data when analysing incidents. At the same time, the framework stops short of allowing flexible expansion of board membership for specialised cases, an idea that has been suggested in international policy discussions.

Focus on Long-Term Cyber Preparedness

The Cyber Incident Review Board is expected to become a key mechanism in shaping Australia’s cybersecurity posture over the coming years. By systematically reviewing incidents and sharing lessons across sectors, the government hopes to build a more coordinated and resilient defence against evolving cyber threats. With cyberattacks continuing to target critical infrastructure, businesses and public services, the success of the Cyber Incident Review Board will likely depend on its ability to translate insights into measurable improvements across the national ecosystem.

U.S. Will Now Examine National Security Implications of New AI Models, Pre-Release

Claude AI, Anthropic, AI, Artificial Intelligence

In the span of four days, the U.S. government announced two parallel sets of agreements with frontier AI companies that together define the two tracks Washington wants to run simultaneously—test AI for national security risks before the public ever sees it, and deploy AI directly on the military's most classified networks.

The Center for AI Standards and Innovation (CAISI), the entity under the Department of Commerce's National Institute of Standards and Technology that inherited the remit of the former AI Safety Institute, announced new agreements with Google DeepMind, Microsoft, and Elon Musk's xAI. These build on renegotiated agreements with Anthropic and OpenAI that date to 2024, updated to reflect directives from Commerce Secretary Howard Lutnick and America's AI Action Plan.

Under the CAISI agreements, the three companies will hand over their frontier AI models to government evaluators before those models are publicly released. The evaluations probe for national security-relevant capabilities and risks.

To conduct a thorough assessment, developers frequently provide CAISI with models that have reduced or removed safety guardrails — a design choice that allows evaluators to probe what a model can do at its ceiling, not what it will do under commercial safety controls. Evaluators from across the federal government participate, coordinated through the CAISI-convened TRAINS Taskforce, an interagency body focused specifically on AI national security concerns.

CAISI said it has completed more than 40 such evaluations to date. The agreements explicitly support testing in classified environments and were drafted with the flexibility to adapt rapidly as AI capabilities continue advancing.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."

Fall was appointed to lead CAISI after Collin Burns, a former Anthropic researcher, was reportedly removed from the director role only four days into the job. The personnel transition at CAISI's top reflects a broader institutional pivot. Under the Biden administration, the AI Safety Institute focused on safety standards, definitions, and voluntary guardrails. Under Trump, CAISI has shifted its emphasis toward AI acceleration and national security capability assessment. The substance of what the evaluators do, probing powerful models before release, has not changed. The framing of why they do it has.

The latest announcement comes four days after the Department of War (formerly Department of Defense) announced agreements with eight frontier AI companies to deploy their models directly on the military's classified networks for operational use.

The companies cleared are SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle. The networks in question are classified at Impact Level 6, covering secret-level data, and Impact Level 7, which refers to the most highly restricted national-security systems. The stated objectives are data synthesis, situational awareness enhancement, and warfighter decision support.

The Department of War announcement carries one conspicuous absence that has dominated coverage: Anthropic is not on the list. The company that first deployed AI models on Pentagon classified systems, via a Palantir integration under the Maven Smart System contract, is excluded after a dispute over the guardrails governing military and surveillance use of its AI.

The Pentagon had previously branded Anthropic a "supply chain risk," a designation typically reserved for foreign entities posing national security concerns. A March 2026 federal injunction reversed that designation, but it did not restore Anthropic's position as a Pentagon AI vendor. Palantir has pulled its Claude models from its DoD platforms accordingly.

The exclusion has strategic implications that extend beyond one company's contract status. Anthropic's recently released Mythos model — described by Treasury Secretary Scott Bessent as representing a step change in large language model capability — has generated significant attention from U.S. officials and financial sector executives about its potential to supercharge adversarial cyber operations.

The fact that Mythos is not among the models being assessed for classified military use, while simultaneously being cited by senior officials as a capability milestone that warrants concern, creates a gap in the government's stated AI security posture that is difficult to characterize as anything other than a policy contradiction.

New Infostealer Dubbed ‘Pheno’ Hijacks Windows’ Phone Link App to Steal MFA OTPs

Pheno, Infostealer, OTP

Attackers have found a way to intercept SMS-based one-time passwords from a victim's mobile device without deploying a single line of malware on the phone itself. Instead, they go through the Windows PC the phone is already connected to.

Researchers documented an intrusion campaign, active since at least January 2026, that combines a remote access trojan called "CloudZ" with a previously undocumented plugin named "Pheno." Together the two tools are designed to steal credentials and harvest authentication codes that arrive on a victim's phone by abusing Microsoft Phone Link, a legitimate Windows application built into every Windows 10 and 11 system.

Microsoft Phone Link, formerly "Your Phone," is a synchronization tool that bridges a user's Android or iOS device to their Windows PC, mirroring calls, messages, and app notifications directly onto the desktop.

Pheno exploits that bridge. It continuously scans running processes for keywords including "YourPhone," "PhoneExperienceHost," and "Link to Windows" to detect an active phone connection. When one is found, the plugin writes "Maybe connected" to a local staging file and gains access to the Phone Link application's local SQLite database, a file that can contain SMS messages and authenticator app notification content, including OTP codes.
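The same process-name keywords the plugin hunts for can be turned around and used defensively, for example to flag hosts where a Phone Link session is active. The sketch below is illustrative only: the keywords come from the researchers' report, but the function names and the `tasklist`-based enumeration are this article's assumptions, not Pheno's actual code.

```python
import subprocess

# Process-name keywords Pheno reportedly scans for to detect
# an active Phone Link connection (lowercased for matching).
PHONE_LINK_KEYWORDS = ("yourphone", "phoneexperiencehost", "link to windows")

def match_phone_link(process_names):
    """Return the process names that suggest an active Phone Link session."""
    return [
        name for name in process_names
        if any(kw in name.lower() for kw in PHONE_LINK_KEYWORDS)
    ]

def running_processes_windows():
    """Enumerate process names via the built-in tasklist command (Windows only)."""
    out = subprocess.run(
        ["tasklist", "/fo", "csv", "/nh"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each CSV row starts with the quoted image name.
    return [line.split('","')[0].strip('"') for line in out.splitlines() if line]
```

On a Windows host, `match_phone_link(running_processes_windows())` would report whether the bridge Pheno abuses is currently present; an empty result means there is no active Phone Link process to hijack.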

The attack never targets the mobile device directly. It targets the enterprise-managed Windows endpoint the device trusts, bypassing security controls focused on securing smartphones rather than the desktop layer they sync with.

CloudZ is a modular .NET RAT compiled on January 13 and obfuscated with ConfuserEx. Beyond loading Pheno, it supports credential harvesting from web browsers, file operations, remote command execution, and host profiling.

It establishes an encrypted TCP connection to its command-and-control server and rotates between three hardcoded user-agent strings to make its traffic blend with legitimate browser requests. To evade analysis, CloudZ detects .NET debuggers and profilers via environment variable queries and generates its executable functions dynamically in memory — meaning the most sensitive code never sits as a static binary on disk.
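The environment-variable probe is a well-known anti-analysis trick: the .NET runtime itself attaches profilers based on variables such as COR_ENABLE_PROFILING (.NET Framework) and CORECLR_ENABLE_PROFILING (.NET Core). Which variables CloudZ actually queries has not been published, so the following is a minimal sketch of the general technique, with hypothetical names, not a reproduction of the malware:

```python
import os

# Environment variables the .NET runtime reads to decide whether to
# attach a profiler; instrumented sandboxes often set the *_ENABLE_PROFILING
# flag to "1". Which variables CloudZ checks is an assumption here.
PROFILER_VARS = (
    "COR_ENABLE_PROFILING",      # .NET Framework profiling API
    "CORECLR_ENABLE_PROFILING",  # .NET Core / modern .NET
)

def profiler_attached(env=None):
    """Return True if a .NET profiler appears to be enabled in the environment."""
    env = os.environ if env is None else env
    return any(env.get(var) == "1" for var in PROFILER_VARS)
```

A sample using this check would simply exit or idle when `profiler_attached()` returns True, which is why analysts typically scrub these variables before detonating .NET malware in a sandbox.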

The infection chain begins with a fake ScreenConnect application update. ScreenConnect is a legitimate remote support tool commonly used in enterprise environments. Executing the fake update drops a Rust-compiled loader, which in turn deploys a .NET loader that installs CloudZ and establishes persistence via a scheduled task. The .NET loader performs thorough sandbox checks, scanning for analysis tools including Wireshark, Fiddler, Procmon, and Sysmon before proceeding.

Cisco Talos researchers did not attribute the campaign to a known threat actor. The initial access vector also remains unidentified.

Latvian Cybercriminal Jailed for Role in Multi-Million Dollar Ransomware Scheme

Ransomware Organization Sentencing

A ransomware organization sentencing has brought one of the key operatives behind a major cybercrime group to justice, highlighting the global reach of law enforcement in tackling ransomware attacks. A Latvian national, Deniss Zolotarjovs, has been sentenced to 102 months in prison for his role in a Russian-linked ransomware organization responsible for targeting more than 54 companies worldwide. The sentencing marks a significant development in ongoing efforts to dismantle international ransomware networks. According to the U.S. Department of Justice, Zolotarjovs played a central role in extortion operations carried out between June 2021 and August 2023. The group operated under multiple ransomware brands, including Conti, Karakurt, Royal, TommyLeaks, SchoolBoys Ransomware, and Akira, reflecting a complex and evolving cybercrime structure.

Ransomware Organization Sentencing: Role in Extortion and Data Exploitation

Officials said Zolotarjovs was primarily responsible for increasing pressure on victims who hesitated to pay ransom demands. He analyzed stolen data and used sensitive information to intensify extortion tactics. In one case involving a pediatric healthcare provider, Zolotarjovs used children’s health information to pressure the organization into paying. When the ransom demand was not met, he allegedly encouraged co-conspirators to leak or sell the data. Court documents reveal he distributed a bulk set of sensitive records to hundreds of patients, aiming to amplify fear and force compliance. Assistant Attorney General A. Tysen Duva described Zolotarjovs as a “cruel, ruthless, and dangerous international cybercriminal,” noting that his actions included exploiting highly personal data to increase leverage over victims.

Financial and Operational Impact of Attacks

The ransomware organization’s activities caused widespread damage. Of the more than 54 targeted companies, attacks on 13 resulted in losses exceeding $56 million, including approximately $2.8 million paid in ransom. An additional 41 companies are believed to have paid around $13 million, though detailed loss figures are still being compiled. Authorities estimate that the total financial impact could reach hundreds of millions of dollars when factoring in underreported incidents. Beyond financial losses, the attacks led to the exposure of highly sensitive data, including Social Security numbers, addresses, dates of birth, and healthcare records. In one instance, a government entity’s 911 emergency system was forced offline, raising serious concerns about public safety and the broader consequences of ransomware attacks.

Organized Structure and Global Operations

Investigators found that the ransomware organization operated with a structured hierarchy and used a network of companies across Russia, Europe, and the United States to mask its activities. Members were largely based in Russia and reportedly operated from an office in St. Petersburg. The group’s operations also involved corruption and misuse of public resources. Authorities said some members had ties to former Russian law enforcement, allowing them to access databases, intimidate individuals, and identify potential recruits. These connections also enabled members to avoid scrutiny, including evading taxes and military service through bribes.

Arrest, Extradition, and Prosecution

Zolotarjovs was arrested in Georgia in December 2023 and later extradited to the United States in August 2024 after contesting the process. In July 2025, he pleaded guilty to conspiracy charges involving money laundering and wire fraud. The case was investigated by the Federal Bureau of Investigation, with support from multiple field offices and international partners. Special Agent in Charge Jason Cromartie said the case reflects the agency’s continued efforts to track down cybercriminals operating across borders. U.S. Attorney Dominick S. Gerace II added that the prosecution demonstrates that cybercriminals cannot rely on geography or anonymity to evade justice.

Continued Focus on Ransomware Threats

The ransomware organization sentencing highlights the scale and persistence of ransomware threats targeting businesses and public services. Authorities said investigations into related actors and networks remain ongoing as part of broader efforts to disrupt global cybercrime operations.