Krebs on Security · Brian Krebs

How AI Assistants are Moving the Security Goalposts

March 8, 2026, 20:35

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing the web-based administrative interfaces for their OpenClaw installations to the Internet.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in a rogue instance of OpenClaw with full system access being installed on thousands of systems without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.
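Grith’s description of the Cline incident suggests a baseline defense: treat any attacker-controllable field, such as an issue title, as hostile before it reaches an automated LLM session. Below is a minimal sketch in Python; the pattern list is illustrative and deliberately incomplete, and nothing here reflects Cline’s actual mitigation.

```python
import re

# Illustrative patterns only: real prompt-injection screening needs far more
# than keyword matching, but a deny-by-default gate is a useful first layer.
SUSPICIOUS_PATTERNS = [
    r"\binstall\b.+\bpackage\b",
    r"\bignore (all|previous|prior) instructions\b",
    r"\brun\b.+\b(command|script|shell)\b",
    r"https?://\S+\.(sh|py|exe)\b",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if untrusted text contains instruction-like content."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def triage_title(title: str) -> str:
    """Gate an untrusted issue title before handing it to an automated agent."""
    if looks_like_injection(title):
        return "flagged-for-human-review"
    return "safe-to-automate"
```

The point is not that keyword filters stop prompt injection (they don’t, reliably), but that untrusted fields should never flow into an agent session without passing through an explicit trust boundary.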

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling the assistant what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five-week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
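Willison’s trifecta lends itself to a simple audit rule: if all three legs are present, assume exfiltration is possible; removing any one leg breaks the chain. A minimal sketch, with field names invented for illustration rather than taken from any real agent’s configuration schema:

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    # These three flags mirror Willison's "lethal trifecta"; the names are
    # illustrative, not from any real agent's config format.
    reads_private_data: bool
    ingests_untrusted_content: bool
    can_communicate_externally: bool

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """Exfiltration is plausible only when all three legs are present;
    disabling any single leg is a meaningful mitigation."""
    return (caps.reads_private_data
            and caps.ingests_untrusted_content
            and caps.can_communicate_externally)
```

An agent that reads your inbox and browses the web but has no outbound channel, for example, fails the test, which is exactly why strict egress controls are so often the cheapest fix.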

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments, whether or not organizations are prepared to manage the new risks these tools introduce.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

Firewall Daily – The Cyber Express · Ashish Khaitan

OpenClaw Vulnerability Exposes How an Open-Source AI Agent Can Be Hijacked

February 27, 2026, 04:29


When the open-source AI agent OpenClaw burst onto the scene, it did so with astonishing speed. In just five days, the project surpassed 100,000 stars on GitHub, becoming one of the fastest-growing open-source AI tools in history. Developers quickly embraced it as a personal assistant that could run locally, plug into calendars and messaging platforms, execute system commands, and autonomously manage workflows.

But beneath that meteoric rise, researchers uncovered the OpenClaw vulnerability, a weakness that allowed any website a developer visited to quietly seize control of the agent. Security researchers at Oasis Security identified what they describe as a complete vulnerability chain within OpenClaw’s core architecture. The chain enabled a malicious website to take over a developer’s AI agent without requiring plugins, browser extensions, or any form of user interaction. After receiving the disclosure, the OpenClaw team classified the issue as “High” severity and released a patch within 24 hours.

Decoding the OpenClaw Vulnerability 

Originally launched under the names Clawdbot and later MoltBot, OpenClaw rapidly evolved into a defining example of modern open-source AI innovation. Its explosive popularity even drew attention from OpenAI. On February 15, OpenAI CEO Sam Altman announced that OpenClaw’s creator, Peter Steinberger, had joined the company, calling him “a genius with a lot of amazing ideas about the future of very smart agents.”

The tool’s appeal lies in its autonomy. Through a web dashboard or terminal interface, users can prompt OpenClaw to send messages, manage workflows across platforms, execute commands, and even participate in what some described as an emergent AI social network. It runs as a self-hosted agent, placing powerful capabilities directly on developers’ laptops.

Yet that power has already attracted abuse. Earlier in the month, researchers uncovered more than 1,000 malicious “skills” in OpenClaw’s community marketplace, ClawHub. These fake plugins posed as cryptocurrency utilities or productivity integrations but instead delivered info-stealing malware and backdoors. That episode was a classic supply-chain problem: malicious community contributions poisoning an otherwise legitimate ecosystem.

The OpenClaw vulnerability, however, was different. It did not rely on third-party plugins or marketplace downloads. Instead, the vulnerability chain lived in the bare OpenClaw gateway itself, operating exactly as documented. No user-installed extensions were required. No marketplace interaction was necessary. The flaw was embedded in the core system.

For many organizations, this incident highlights a broader issue: shadow AI. Tools like OpenClaw are frequently adopted directly by developers without formal IT oversight. They often run with deep access to local systems, credentials, messaging histories, and API keys, but without centralized governance or visibility.

How the Vulnerability Chain Enabled a Silent Website-to-Local Takeover 

At the heart of OpenClaw’s architecture is the gateway, a local WebSocket server that functions as the system’s brain. The gateway manages authentication, chat sessions, configuration storage, and orchestration of the AI agent. Connected to it are “nodes,” which may include a macOS companion app, an iOS device, or other machines. These nodes register with the gateway and expose capabilities such as executing shell commands, accessing cameras, or reading contacts. The gateway can dispatch instructions to any connected node.

Authentication is handled via either a long token string or a password. By default, the gateway binds to localhost, operating under the assumption that local access is inherently trusted. That assumption proved to be the weak link in the vulnerability chain behind the OpenClaw vulnerability.

The attack scenario is deceptively simple. A developer has OpenClaw running locally, protected by a password and bound to localhost. While browsing the web, they land on a malicious or compromised site. That alone is enough to trigger the attack.

Because WebSocket connections to localhost are not blocked by standard browser cross-origin policies, JavaScript running on any visited webpage can open a WebSocket connection directly to the OpenClaw gateway. Unlike traditional HTTP requests, these cross-origin WebSocket connections proceed silently. The user sees no warnings.

Once connected, the malicious script exploits another flaw in the vulnerability chain: the gateway exempts localhost connections from rate limiting. Failed password attempts from localhost are neither throttled nor logged. In laboratory testing, researchers achieved hundreds of password guesses per second using only browser-based JavaScript. A list of common passwords could be exhausted in under a second. Even a large dictionary would fall within minutes. Human-chosen passwords offered little resistance.

After guessing the password, the attacker gains a fully authenticated session with administrative privileges. From there, the possibilities expand dramatically. The attacker can register as a trusted device, automatically approved because the gateway silently authorizes pairings from localhost. They can interact with the AI agent directly, dump configuration data, enumerate all connected nodes (including device platforms and IP addresses), and read application logs.

In practical terms, this means a malicious website could instruct the AI agent to comb through Slack conversations for API keys, extract private messages, exfiltrate sensitive files, or execute arbitrary shell commands on any connected device. For a typical developer heavily integrated with messaging platforms and AI provider APIs, exploitation of the OpenClaw vulnerability could amount to full workstation compromise, all initiated from a single browser tab.
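Two of the root causes described above, missing Origin validation and the localhost rate-limit exemption, have straightforward defensive counterparts. Here is a minimal Python sketch of both checks; it is not taken from OpenClaw’s actual patch, and the allowed origin is a hypothetical dashboard address.

```python
import time
from collections import deque
from typing import Optional

# Hypothetical dashboard origin; a real gateway would load this from config.
ALLOWED_ORIGINS = {"http://localhost:18789"}

def origin_allowed(origin_header: Optional[str]) -> bool:
    """Browsers always send an Origin header on cross-origin WebSocket
    upgrades, so rejecting unknown origins stops a random webpage from
    connecting, even though the TCP connection arrives from localhost."""
    return origin_header in ALLOWED_ORIGINS

class AuthRateLimiter:
    """Throttle failed password attempts regardless of source address;
    exempting localhost from this check is exactly the flaw described."""

    def __init__(self, max_failures: int = 5, window_seconds: float = 60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = deque()  # timestamps of recent failures

    def record_failure(self) -> None:
        self.failures.append(time.monotonic())

    def locked_out(self) -> bool:
        # Drop failures older than the window, then compare the count.
        now = time.monotonic()
        while self.failures and now - self.failures[0] > self.window:
            self.failures.popleft()
        return len(self.failures) >= self.max_failures
```

With an Origin check in place, the brute-force described by the researchers never starts; the rate limiter is defense in depth for any path that bypasses it.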

Governing Open-Source AI After the OpenClaw Vulnerability 

Researchers reported the issue with comprehensive technical documentation, root cause analysis, and proof-of-concept code. The OpenClaw team responded rapidly, issuing a fix in version 2026.2.25 within 24 hours, an impressive turnaround for a volunteer-driven open-source AI project.

Still, the broader lesson extends beyond a single patch. The rapid adoption of open-source AI tools means many organizations already have OpenClaw instances running on developer machines, sometimes without IT awareness. Security experts recommend four immediate steps. First, gain visibility into AI tooling across the organization: inventory which agents and local AI servers are operating within the developer fleet. Second, update OpenClaw installations immediately to version 2026.2.25 or later, treating the OpenClaw vulnerability with the urgency of any critical security patch. Third, audit the credentials and permissions granted to AI agents, revoking unnecessary API keys and system capabilities. Finally, establish governance for non-human identities. AI agents authenticate, store credentials, and take autonomous actions; they must be managed with the same rigor as human accounts and service identities.

This includes implementing intent analysis before actions occur, deterministic guardrails for sensitive operations, just-in-time scoped access, and full audit trails linking human intent to agent activity. Oasis Security notes that its Agentic Access Management platform was designed specifically to address this emerging challenge.

As open-source AI agents like OpenClaw become embedded in everyday developer workflows, the OpenClaw vulnerability serves as a cautionary tale. The future may indeed belong to autonomous agents, but without proper governance and oversight, a single overlooked vulnerability chain can turn groundbreaking open-source AI innovation into a serious enterprise risk.
Malwarebytes

OpenClaw: What is it and can you use it safely?

February 23, 2026, 18:10

An AI tool with a funny name has caused quite a commotion as of late—including some allegations of machine consciousness—so here is a breakdown on OpenClaw.

Launched in November 2025, OpenClaw is an open-source, autonomous artificial intelligence (AI) agent that was made to run locally on your own computer, allowing it to manage tasks, interact with applications, and read and write files directly. It acts as a personal digital assistant, integrating with chat apps like WhatsApp and Discord to automate emails, scan calendars, and browse the internet for information. 

OpenClaw was formerly known as ClawdBot, but the project brushed up against the large AI developer Anthropic because of the name’s similarity to Anthropic’s own tool, Claude. In response, OpenClaw’s developer quickly renamed the project to “Moltbot,” which brought impersonation campaigns from cybercriminals. The trademark trouble and the abuse that followed put a dent in OpenClaw’s reputation.

Another dent followed when Hudson Rock published an article about the first observed case of an infostealer grabbing a complete OpenClaw configuration from an infected system, effectively looting the “identity” of a personal AI agent rather than just browser passwords.

The case underlines an impending danger—and not just for OpenClaw, but for other AI agents as well. Infostealers are starting to harvest not just credentials but entire AI personas plus their cryptographic “skeleton keys,” turning one compromised agent into a pivot point for full‑blown account takeover and long‑term profiling.

As I stated before in a broader context, adversaries are starting to target AI systems at the supply‑chain level, quietly poisoning training data and inserting backdoors that only surface under specific conditions. OpenClaw sits squarely in this emerging risk zone: open source, moving fast, and increasingly wired into mailboxes, cloud drives, and business workflows while its security model is still being improvised.

At this stage of its development, treating OpenClaw as a hardened productivity tool is wishful thinking, since it behaves more like an over‑eager intern with an adventurous nature, a long memory, and no real understanding of what should stay private.

Researchers and regulators have already documented prompt injection risks, log poisoning, and exposed instances that hand attackers plaintext credentials or tokens via poisoned emails, websites, or logs that the agent dutifully processes.

How to use OpenClaw safely

For anyone thinking about using OpenClaw in production, the bigger picture is even less comforting. OpenClaw runs locally but is designed to be adventurous: it can browse, run shell commands, read and write files, and chain “skills” together without a human checking every step. Misconfigured permissions, over‑privileged skills, and a culture of “just give it access so it can help” mean the agent often sits at the center of your accounts, tokens, and documents, with very few guardrails.

In fact, an employee at Meta who works in AI safety and alignment recently shared on the social media platform X that she was unable to prevent OpenClaw from deleting a major portion of her email inbox.

Further, the Dutch data protection authority (Autoriteit Persoonsgegevens) warned organizations not to deploy experimental agents like OpenClaw on systems that handle sensitive or regulated data at all, flagging the combination of privileged local access, immature security engineering, and a rapidly growing ecosystem of dubious third‑party plugins as a kind of Trojan horse on the endpoint.

Microsoft provided a list of recommendations in this field that make a lot of sense. They are not specifically aimed at OpenClaw, but provide a conservative baseline for self‑hosted, Internet‑connected agents with durable credentials. (If these recommendations feel overly technical, it’s because safely using an AI agent with broad access is still an experimental and technical process.)

  •  Run OpenClaw (or similar agents) in a sandboxed VM or container on isolated hosts, with default‑deny egress and tightly scoped allow‑lists.
  • Give the runtime its own non‑human service identities, least privilege, short token lifetimes, and no direct access to production secrets or sensitive data.
  • Treat skill/extension installation as introducing new code into a privileged environment: restrict registries, validate provenance, and monitor for rare or newly seen skills.
  • Log and periodically review agent memory/state and behavior for durable instruction changes, especially after ingesting untrusted content or shared feeds.
  • Plan for the event where you may need to nuke and pave: keep non‑sensitive state snapshots handy, document a rebuild and credential‑rotation playbook, and rehearse it.
  • Run an up-to-date real-time anti-malware solution that can detect information stealers and other malware.
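The default-deny egress item above can be sketched in application terms. Below is a hypothetical allow-list check in Python; the hostnames are placeholders, and real enforcement belongs at the firewall or proxy layer, outside the agent process an attacker may control.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: only the API endpoints this agent legitimately
# needs. Everything else is denied by default.
EGRESS_ALLOWLIST = {"api.anthropic.com", "api.openai.com"}

def egress_permitted(url: str) -> bool:
    """Default-deny outbound check: a destination is reachable only if its
    hostname appears on the allow-list. This sketch illustrates the policy;
    enforcing it inside the agent itself is insufficient, since a hijacked
    agent can simply skip the check."""
    host = urlparse(url).hostname or ""
    return host in EGRESS_ALLOWLIST
```

The same policy expressed as firewall rules (deny all outbound, allow the listed hosts) is what actually constrains a compromised agent.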


Cybersecurity Blog | SentinelOne

OneClaw: Discovery and Observability for the Agentic Era

February 18, 2026, 13:04

Autonomous agents and personal AI assistants are moving from experimentation to enterprise reality. Tools like OpenClaw (formerly Moltbot and Clawdbot), Nanobot and Picoclaw are being embedded across development environments, cloud workflows, and operational pipelines. They install quickly, evolve dynamically, and often operate with deep system-level access. For CISOs and security leaders, this presents a new governance challenge: How do you secure what you can’t see?

OneClaw, built by Prompt Security from SentinelOne®, was created to answer that question. It is a lightweight discovery and observability tool built to help secure AI usage by providing broad, organization-wide visibility into OpenClaw deployments without disrupting workflows or slowing down innovation. Where agent sprawl is accelerating faster than policy frameworks can adapt, OneClaw restores clarity, accountability, and executive oversight where it matters most.

Understanding The Risks Behind OpenClaw

While most security programs assume that AI applications and agents are vetted, IT-sanctioned, registered, and monitored, OpenClaw agents can be:

  • Installed directly by developers
  • Extended through public skills and plugins
  • Granted persistent memory and tool access
  • Configured to run autonomously
  • Connected to external services and APIs

They can also operate quietly in local environments, call tools automatically, schedule tasks via cron jobs, and access sensitive systems – all outside of most application inventory controls. This manifests as two main concerns:

  • Shadow Agent Proliferation – How many OpenClaw instances are running across the enterprise?
  • Data Egress & External Exposure – Where are agents sending information, and to whom?

Without centralized observability, these risks persist. As assistants such as OpenClaw, Nanobot, Picoclaw and other autonomous agents continue to emerge every day, OneClaw is designed to eliminate the opacity around the risks they bring to businesses.

OpenClaw is only the beginning of a much larger shift toward autonomous agents and personal AI assistants embedded across the enterprise. In addition to OpenClaw, OneClaw already supports visibility coverage for emerging frameworks such as Nanobot and Picoclaw, extending the same discovery and observability capabilities across multiple agent ecosystems.

For CISOs, this future-proof approach is critical. Rather than deploying point solutions for each new tool that appears, OneClaw establishes a unified oversight layer for the entire category of new assistants, copilots, and autonomous workflows that continue to surface. The goal is not simply to manage one platform, but to provide lasting control over a rapidly expanding attack surface, ensuring that as agent adoption accelerates, security visibility and accountability keep pace.

Empowering Teams with Observability

OneClaw provides structured discovery and observability across OpenClaw deployments using a scanner that automatically detects supported CLIs and inspects the local .openclaw directory within user environments. It parses session logs, configuration files, and runtime artifacts to summarize meaningful usage patterns and surface critical operational details. OneClaw also captures browser activity performed by agents to deliver visibility into external interactions and potential data exposure paths.

From this data, OneClaw then summarizes active skills and installed plugins, recent tool and application usage, scheduled cron jobs, configured communication channels, available nodes and AI models, security configurations, and autonomous execution settings.

OneClaw outputs are structured in JSON format, making them easy to integrate into existing security ecosystems. Organizations can conduct local reviews, ingest reports into SIEM platforms, feed centralized monitoring systems, and correlate with identity, endpoint, and cloud telemetry. For security leaders invested in unified visibility, this ensures that agent observability does not become siloed.
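To illustrate the SIEM path, a JSON report can be flattened into one key=value event per skill, a shape most SIEM pipelines ingest directly. The report schema used here (`host`, `skills`, `autonomous` fields) is an assumption for illustration, not the documented OneClaw output format.

```python
import json

def report_to_events(report_json: str) -> list[str]:
    """Flatten an agent report (assumed schema) into SIEM-friendly
    key=value lines, one event per installed skill."""
    report = json.loads(report_json)
    host = report.get("host", "unknown")
    autonomous = str(report.get("autonomous", False)).lower()
    events = []
    for skill in report.get("skills", []):
        events.append(f"product=oneclaw host={host} skill={skill} autonomous={autonomous}")
    return events
```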

For CISOs, this transforms AI agent behavior from invisible background activity into auditable telemetry that is integrated into broader enterprise security and risk management discussions.

From Raw Telemetry to Executive Intelligence

Though many tools generate logs, few translate them into governance-ready insights. OneClaw provides a fully centralized dashboard that aggregates reports across all employees. Rather than reviewing isolated endpoint-level findings, security leaders get visualized deployment trends, risk heatmaps, organization-wide exposure mapping, and skill usage distribution.

In a matter of minutes, the tool can be deployed via Jamf, Intune, Kandji or SentinelOne Remote Ops.

This elevates the conversation from technical detail to strategic oversight, which is critical as CISOs are now being asked for AI agent inventories, and whether the agents are operating within policy, can access sensitive systems, and are transmitting data externally.

With OneClaw, CISOs can gain visibility into:

  • How many OpenClaw agents are deployed enterprise-wide
  • What percentage is configured for autonomous execution
  • Which teams are installing high-risk skills
  • Where agents are making outbound connections
  • How agent adoption changes day to day

This level of structured visibility empowers security leaders to build the foundational layer of proactive governance for agentic AI security and to have effective conversations about the short- and long-term risks of OpenClaw adoption in their organization.
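The dashboard metrics listed above can be computed from per-endpoint reports with a simple aggregation. The report fields used here are the same illustrative assumptions as before, not a documented schema.

```python
from collections import Counter

def fleet_metrics(reports: list[dict]) -> dict:
    """Aggregate per-endpoint agent reports into executive metrics:
    deployment count, autonomy ratio, and skill distribution."""
    total = len(reports)
    autonomous = sum(1 for r in reports if r.get("autonomous"))
    skills = Counter(s for r in reports for s in r.get("skills", []))
    return {
        "agents_deployed": total,
        "pct_autonomous": round(100 * autonomous / total, 1) if total else 0.0,
        "skill_distribution": dict(skills.most_common()),
    }
```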

Security-Focused Behavioral Analysis

OneClaw does not assume all autonomy is dangerous. Instead, it surfaces where autonomy exists so that leaders can make informed, balanced policy decisions and strengthen controls with precision. For example, security teams may determine that autonomous execution is appropriate within a sandboxed development environment but requires approvals in production systems. They may allow specific, vetted skills while restricting newly installed public plugins pending review. They also may decide that outbound communication to approved internal APIs is acceptable, while flagging unknown external domains for investigation.

By making these conditions visible, OneClaw enables proportionate governance rather than blanket restrictions. Teams can confidently approve safe automation, enforce guardrails where needed, and document oversight for executive and regulatory reporting. The result is not reduced innovation, but safer acceleration where autonomous agents operate within clearly defined boundaries, and security leaders remain firmly in control of how and where that autonomy is trusted.
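The conditional rules sketched in this section (autonomy allowed in sandboxes but gated in production, unvetted skills held for review, unknown egress domains flagged) lend themselves to a small policy check. The function below is a hypothetical sketch of such a rule set, not OneClaw's actual policy engine.

```python
def evaluate_agent(env: str, autonomous: bool, skill_vetted: bool,
                   egress_domain: str, approved_domains: set[str]) -> list[str]:
    """Apply illustrative proportionate-governance rules and return
    the findings that require human action."""
    findings = []
    if autonomous and env == "production":
        findings.append("autonomous execution in production requires approval")
    if not skill_vetted:
        findings.append("newly installed public skill pending review")
    if egress_domain not in approved_domains:
        findings.append(f"unknown external domain flagged: {egress_domain}")
    return findings
```

An agent that stays within the approved boundaries produces no findings, so safe automation is never interrupted.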

Transparency Without Disruption

OneClaw was designed to deliver visibility into autonomous activity without interrupting the work those agents (and the teams deploying them) are meant to accelerate. Rather than creating friction into development pipelines or altering how OpenClaw, Nanobot, or Picoclaw operate, it acts as an observability layer that security teams can deploy quietly across the environment.

The approach matters: Security leaders are not looking to slow down innovation, but they are accountable for what happens if agent behavior crosses policy boundaries or introduces risk. OneClaw gives the context needed for CISOs to act decisively using the cybersecurity controls they already trust. It does not replace prevention capabilities, it makes them smarter by ensuring autonomous activity is no longer invisible.

By surfacing where agents exist, how they are configured, and what they are interacting with, OneClaw allows organizations to distinguish between productive automation and risk behavior that warrants intervention. Security teams can then enforce standards through existing controls without imposing blanket restrictions that only frustrate developers.

The result is a model aligned with how modern security actually operates – observe first, contextualize risk, and then apply controls proportionately. OneClaw makes transparency a force multiplier for the security stack and gives CISOs confidence that autonomous innovation continues under watchful, informed oversight instead of blind trust.

Chaos to Clarity | Why This Matters Now

Agentic AI discovery is no longer optional. OpenClaw’s rapid growth and decentralized infrastructure are creating a large and often unmanaged attack surface. With skills frequently installed from public repositories and agents granted deep permissions, configuration drift occurs silently. Agent ecosystems can fall to exploitation through malicious skills and supply chain manipulation.

OneClaw supports how CISOs defend against OpenClaw-related risks by providing deep visibility and observability of agentic AI use and autonomous behavior, detecting risky configurations early on, and quantifying the exposures before incidents occur.

As adoption increases and autonomy deepens, OneClaw works to restore visibility while allowing powerful agents to accelerate development pipelines. CISOs have the clarity needed to govern autonomous systems responsibly, while getting structured discovery, centralized reporting, and security-focused analysis.

Start getting visibility into agent activity and security insights into OpenClaw deployments across your organization here. To learn more about securing your OpenClaw agents, register for our upcoming webinar happening Tuesday, March 3, 2026.

Join the Webinar
SentinelOne AI & Intelligence Leaders Discuss How to Secure OpenClaw Agents on March 3, 2026 at 10:00AM PST / 1:00PM EST

Shadow Agents: How SentinelOne Secures the AI Tools That Act Like Users

February 17, 2026, 18:28

AI adoption is accelerating faster than security programs can adapt. Organizations are already experiencing breaches tied directly to unsanctioned AI usage, at significantly higher cost than traditional incidents, while the vast majority still lack meaningful governance controls to manage the risk. Traditional cybersecurity measures are necessary but insufficient. Securing AI requires purpose-built capabilities that span the entire AI lifecycle, from infrastructure to user interaction.

The rapid adoption of Large Language Models (LLMs) and Artificial Intelligence (AI) introduces transformative capabilities, but also novel and complex security challenges. Securing these sophisticated systems requires a multi-layered, end-to-end approach that extends beyond traditional cybersecurity measures. SentinelOne’s® Singularity™ Platform is uniquely positioned to provide holistic protection for LLM and AI environments, from the underlying infrastructure to the integrity of the models themselves and their interactions.

This document provides a detailed breakdown of how SentinelOne’s capabilities address the unique security requirements and emerging threats associated with LLMs and AI, now further enhanced by the integration of Prompt Security’s cutting-edge AI usage and agent security technology.

Because the most urgent question security leaders are asking right now is specifically about agentic AI assistants, tools like OpenClaw (aka Clawdbot and Moltbot) that can execute code and access data with user-level privileges, this document leads with dedicated coverage for those tools before mapping the full platform architecture.

Securing Agentic AI Assistants: OpenClaw Coverage

The Question Security Leaders Are Asking

“Do we have coverage for the new agentic AI assistants, such as OpenClaw (aka Moltbot and Clawdbot), that are showing up across our environment?”

Yes. SentinelOne provides multi-layered detection, hunting, and governance capabilities that specifically address these tools across three reinforcing control planes: EDR/XDR telemetry, AI interaction security (Prompt Security), and open-source agent hardening (ClawSec).

OpenClaw (aka Clawdbot and Moltbot) represents the next evolution of shadow AI risk. Unlike browser-based chatbots that operate within a web session, these agentic AI assistants can execute code, spawn shell processes, access local files and secrets, call external APIs, and operate with the same privileges as the user account running them. In SentinelOne’s SOC framework, they fall squarely into the highest-risk categories: agentic execution and compromise through the loop.

If an agentic assistant can read files, call tools, and talk out, it should be treated like a privileged automation account and secured accordingly.

Coverage Layer 1: EDR/XDR Detection & Threat Hunting

SentinelOne’s Singularity agent provides telemetry and tracking for OpenClaw (aka Moltbot and Clawdbot). The Data Lake PowerQuery below detects related activity at the endpoint level. Purpose-built hunting queries target these tools across four signal categories:

| Signal Category | What SentinelOne Detects | Example Indicators |
| --- | --- | --- |
| Process Execution | Clawdbot, OpenClaw, or Moltbot runtime processes launching on endpoints | Command-line strings containing clawdbot, moltbot, or openclaw |
| File Activity | Creation, modification, or presence of agentic assistant files | File paths containing openclaw or clawdbot binaries and configurations |
| Network Activity | Communication on default agentic service ports and domains associated with ‘bad’ extensions | Traffic on port 18789 (default OpenClaw listener) |
| Persistence Mechanisms | Scheduled tasks or services establishing agent persistence | Scheduled tasks named OpenClaw or related service registrations |

Dedicated PowerQuery for Clawdbot / OpenClaw / Moltbot:

dataSource.name = 'SentinelOne' AND
(event.type = 'Process Creation' AND tgt.process.cmdline contains:anycase ('clawdbot','moltbot','openclaw')) OR
(tgt.file.path contains 'openclaw' or tgt.file.path contains 'clawdbot') OR
(src.port.number = 18789 or dst.port.number = 18789) OR
(task.name contains 'OpenClaw')
| columns event.time, src.process.storyline.id, event.type, endpoint.name, src.process.user, tgt.process.cmdline, tgt.process.publisher, tgt.file.path, src.process.parent.name, src.process.parent.publisher, src.process.cmdline, src.ip.address, dst.ip.address
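As a quick local complement to the fleet-wide PowerQuery, an administrator can probe whether anything on a given host is listening on OpenClaw's default port (18789, per the hunting table above). This is only a coarse signal: a renamed agent bound to a different port will not show up, which is why the behavioral tiers below matter.

```python
import socket

def local_openclaw_listener(port: int = 18789) -> bool:
    """Return True if something on this host accepts TCP connections
    on the given port (default: OpenClaw's documented listener port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds.
        return s.connect_ex(("127.0.0.1", port)) == 0
```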

Beyond this targeted query, SentinelOne’s tiered SOC hunting framework provides behavioral detection that catches agentic assistants even when they are renamed, updated, or running through wrapper processes:

  • Tier 1 (Discovery): Identifies AI-capable runtimes and destinations across the environment, surfacing where agents like OpenClaw are executing.
  • Tier 3 (Behavioral): Detects the “agent-shaped” pattern, interpreter runtimes (Python, Node) spawning shell processes, touching secrets, and calling external APIs, which is the operational fingerprint of OpenClaw (aka Clawdbot and Moltbot) regardless of binary name.
  • Tier 4 (Impact): Correlates secrets access with non-standard egress within the same Storyline, identifying when an agentic assistant has moved from exploration to data exfiltration.

Storyline connects the entire chain of custody (i.e. what launched the agent, what it touched, and where it communicated) providing a defensible incident narrative for any agentic AI activity.

Coverage Layer 2: AI Interaction Security (Prompt Security)

The Prompt Security capabilities described in Pillar 7 of this document apply directly to OpenClaw (aka Clawdbot and Moltbot), but agentic assistants create risks that go beyond what standard AI chatbot monitoring addresses:

  • Agentic Shadow AI Discovery: Unlike browser-based AI tools that appear in web traffic logs, agentic assistants often run as local processes or connect through non-standard ports. Prompt Security identifies these tools regardless of how they connect, closing the visibility gap that network-based monitoring misses.
  • Execution-Aware Content Controls: Because agentic assistants can act on the instructions they receive (i.e. executing code, modifying files, calling APIs), Prompt Security’s content inspection takes on heightened importance. Sensitive data filtered at the interaction layer is prevented from ever entering an execution pipeline.
  • MCP Tool-Chain Governance: OpenClaw (aka Clawdbot and Moltbot) frequently interact with MCP tool servers to extend their capabilities. Prompt for agentic AI intercepts these calls, applying dynamic risk scoring before the agent can act on tool responses.

Coverage Layer 3: Agent Hardening (ClawSec)

ClawSec, an open-source security skill suite built by Prompt Security from SentinelOne, provides defense-in-depth specifically designed for OpenClaw agents:

  • Skill Integrity & Supply Chain Verification: Eliminates blind trust in downloaded skills by distributing security skills with checksums and verified sources. Drift detection flags when critical files have been silently modified.
  • Posture Hardening & Automated Audits: Scans for prompt-injection vectors, unsafe configurations, and runtime vulnerabilities within the agent environment. Automated daily audits generate human-readable security reports.
  • Community-Driven Threat Intelligence: Connects to a live security advisory feed powered by public vulnerability data (NVD) and community reports, making verified threat intelligence immediately available to subscribed agents.
  • Zero-Trust by Default: Blocks unauthorized egress and telemetry. If a threat is detected, the agent must explicitly request user consent before reporting externally, thereby eliminating hidden communication and background data sharing.

Integrated Coverage: How the Three Layers Work Together

| Control Plane | Coverage Scope | Key Capability |
| --- | --- | --- |
| EDR/XDR (Singularity Agent + Data Lake) | Endpoint-level process, file, network, and persistence detection | Behavioral detection via Storyline; purpose-built PowerQuery for Clawdbot/OpenClaw/Moltbot |
| AI Interaction Security (Prompt Security) | User-to-AI interaction layer | Real-time data leakage prevention, prompt injection blocking, shadow AI discovery |
| Agent Hardening (ClawSec) | Within the OpenClaw agent runtime | Skill integrity verification, posture hardening, zero-trust egress control |

This three-layer approach ensures that whether an agentic AI assistant is discovered through EDR telemetry, flagged by Prompt Security’s interaction monitoring, or hardened proactively by ClawSec, security teams have full visibility and control over the risk these tools introduce.

At a Glance: Seven Security Pillars Mapped to Business Risk

The agentic AI coverage detailed above draws on all seven of SentinelOne’s core security pillars working together. The following table maps each pillar to the AI-specific threats it addresses and the business outcomes it protects, giving security leaders a rapid-reference guide for aligning platform capabilities to their organization’s AI risk priorities.

| Security Pillar | AI Risk Addressed | Business Outcome Protected |
| --- | --- | --- |
| Cloud Native Security (CNS) | Exposed training data, misconfigured infrastructure, exploitable cloud paths | Prevents data breaches; reduces regulatory exposure |
| Workload Protection | Runtime compromise, container escapes, fileless attacks on AI hosts | Ensures AI service continuity; prevents operational disruption |
| AI SIEM | Multi-stage attacks, low-and-slow exfiltration, anomalous LLM usage | Enables detection of sophisticated threats; supports forensics and compliance |
| Purple AI | Evolving LLM attack techniques, slow investigation response times | Reduces MTTR; accelerates threat hunting without specialist expertise |
| Automation & Response | Fast-moving exfiltration, API key compromise, unauthorized data egress | Minimizes breach blast radius; contains incidents autonomously |
| Secret Scanning & IaC | Hardcoded credentials, pipeline vulnerabilities, insecure infrastructure definitions | Prevents supply chain compromise; secures pre-production environments |
| AI Usage & Agent Security (Prompt Security) | Shadow AI, prompt injection, data leakage through AI interactions, jailbreaks | Protects IP and sensitive data; enables safe AI adoption at scale |

Recommended Next Steps for Security Leaders

This week: You can’t govern what you can’t see. Run the OpenClaw detection query in your Data Lake to determine whether agentic AI assistants are already active in your environment, assuming they are until proven otherwise. Audit browser extensions across high-risk teams. Review your AI acceptable use policy to confirm it addresses autonomous agents, not just chatbots. The goal is a baseline inventory of what AI tools exist, where they’re running, and who’s using them.

Within 90 days: Move from inventory to continuous visibility. A Prompt Security proof of value can get you there quickly, delivering real-time discovery of all AI tool usage across your environment, including the shadow AI activity your current stack can’t see. Use that visibility to establish sanctioned alternatives that give employees a secure path to the productivity they’re already chasing with unsanctioned tools. Operationalize behavioral detection hunts as automated detection rules so your SOC can identify new agentic activity as it appears, not months later.

Within 6 months: Mature from visibility into governance. Complete a full AI tool inventory with data classification and risk scoring. Establish enforcement policies that contain or block unsanctioned agentic tools at the endpoint, interaction, and network layers. Build board-ready reporting metrics that track AI-related risk posture over time. The organizations that move fastest here won’t be starting from scratch; they’ll be the ones that invested in visibility early enough to know what they’re governing.

Conclusion: From Visibility to Confidence

Securing LLMs and AI is not a future challenge; it’s a present imperative. SentinelOne’s Singularity Platform, now significantly enhanced by the capabilities of Prompt Security, provides end-to-end protection that spans cloud infrastructure, workload runtime, AI interaction governance, and automated response.

But the threat landscape is no longer just about chatbots and data leakage. The rapid adoption of agentic AI assistants like OpenClaw demonstrates that AI tools are evolving from passive information retrieval into autonomous agents that execute code, access secrets, and operate with real privileges on real systems. This shift demands a corresponding shift in security posture — from monitoring what employees type into a browser to governing what autonomous processes do on your endpoints.

SentinelOne’s three-layer coverage model addresses this directly. EDR/XDR telemetry provides behavioral detection at the endpoint. Prompt Security governs the interaction layer where sensitive data meets AI. And ClawSec hardens the agent runtime itself. Together, these layers give security teams the ability to discover, govern, and contain agentic AI tools without blocking the productivity gains they deliver.

The gap between organizations that believe they have AI governance and those that actually do is exactly where breaches happen. Organizations that close that gap won’t be those that adopted AI fastest or blocked it longest; they’ll be the ones that built the visibility, controls, and response capabilities to adopt it safely.

Security isn’t the department that says no to AI. It’s the function that makes AI possible at enterprise scale. To learn more about securing your OpenClaw agents, register for our upcoming webinar happening Tuesday, March 3, 2026.

Join the Webinar
SentinelOne AI & Intelligence Leaders Discuss How to Secure OpenClaw Agents on March 3, 2026 at 10:00AM PST / 1:00PM EST

ClawSec: Hardening OpenClaw Agents from the Inside Out

February 9, 2026, 11:52

Autonomous agents are moving fast – from experimental side projects to real operational components inside development workflows and cloud environments. But while agent capabilities are accelerating, agent security has lagged behind. Most agent frameworks still assume implicit trust: trust in downloaded skills, trust in prompts that evolve over time, and trust in agents not to quietly exfiltrate data or drift into unsafe behavior.

That assumption has already been proven wrong. In the span of a few days last week, researchers uncovered more than 200 malicious OpenClaw skills, all masquerading as legitimate utilities while delivering credential-stealing malware. Distributed through GitHub and OpenClaw’s official registry, these skills harvested API keys, cloud secrets, wallet data, and SSH credentials, highlighting how easily agent supply chains can be weaponized at scale.

This activity did not emerge in isolation. OpenClaw’s rapid growth, decentralized skill ecosystem, and deep system-level access have created a large, largely unmanaged attack surface. Skills are often installed directly from public repositories, documentation is trusted at face value, and agents are granted persistent memory and tool access that rivals traditional applications without equivalent security controls.

Since the assumption of implicit trust no longer holds in today’s landscape, ClawSec was designed to close that gap. ClawSec is an open-source security skill suite created to harden OpenClaw agents against prompt injection, supply chain compromise, configuration drift, and unsafe runtime behavior. Purpose-built as a “skill-of-skills”, ClawSec wraps agents in a continuously verified security layer, validating what it runs, how it changes, and where the data is allowed to go. Now live on GitHub, ClawSec is a zero-cost, privacy-first solution protecting both humans and autonomous agents via a single install.

Why We Are Rethinking Agent Security

Traditional application security models don’t map cleanly onto agentic systems. Agents are dynamic by nature: they pull skills from external sources, modify their own prompts, call tools autonomously, and adapt their behavior over time. That flexibility is powerful, but it also creates new attack surfaces. Some of the most common failure modes include:

  • Blind trust in skills downloaded from public repositories
  • Prompt injection attacks that manipulate agent behavior at runtime
  • Silent configuration drift that weakens guardrails over time
  • Unauthorized egress where agents send data externally without user awareness

In many cases, these issues go undetected because there is no continuous verification layer watching the agent’s internals. ClawSec is designed to be that layer.

Introducing ClawSec

ClawSec is the first open-source security suite purpose-built for OpenClaw deployments. Rather than acting as a single defense mechanism, it functions as a composable security platform made up of modular skills that work in tandem with each other.

Operating as a “skill-of-skills”, ClawSec is a hardened shell around an agent. It doesn’t replace existing skills – it validates and protects them. Every security-relevant aspect of the agent is continuously checked, from supply chain integrity to runtime behavior and outbound communications.

ClawSec is a project by Prompt Security, a SentinelOne company, with its purpose rooted in security research, experimentation, and agentic workflow hardening. The goal here is not control, but resilience. By making agent security open, auditable, and driven by the community, ClawSec aims to set the highest bar possible for what “safe by default” means in today’s class of autonomous systems.

How It Works

ClawSec operates as a closed feedback loop where individual detections strengthen the entire ecosystem over time.

  1. Install – Load the ClawSec suite as a single security skill.
  2. Activate – Integrity checks, posture hardening, and audits begin immediately.
  3. Detect – Suspicious behavior, drift, or known threats are flagged.
  4. Decide – The agent requests permission before reporting or communicating.
  5. Protect – Verified reports become community advisories that protect other agents.

Secure Skill Integrity & Supply Chain Defense

One of the most critical risks in agent-based ecosystems is skill supply chain compromise. Agents routinely download and execute skills written by third parties, often without any cryptographic verification or checks. With ClawSec, teams eliminate blind trust.

In ClawSec, every security skill is distributed with checksums and verified sources only. The suite supports standard SKILL.md definitions as well as packaged .skill formats, guaranteeing compatibility with existing OpenClaw workflows.

Once installed, it continuously monitors critical files such as TOOLS.md, prompt baselines, and configuration manifests for signs of drift. In the case of unexpected changes, the agent is alerted immediately. This approach treats skills the way modern security teams need to treat dependencies: verify first, trust second. ClawSec ensures that silent modifications are no longer invisible.
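The drift-detection idea described above reduces to recording checksums of critical files and comparing them on later scans. This is a minimal sketch of the technique, not ClawSec's actual implementation; the monitored file names are taken from the examples in the text.

```python
import hashlib
from pathlib import Path

def file_checksums(paths: list[Path]) -> dict[str, str]:
    """Record SHA-256 digests for critical agent files
    (e.g. TOOLS.md, prompt baselines, configuration manifests)."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in paths if p.is_file()
    }

def detect_drift(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return the files whose contents changed (or vanished) since the baseline."""
    return [path for path, digest in baseline.items() if current.get(path) != digest]
```

Verify first, trust second: any silent modification to a baselined file surfaces on the next comparison.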

Proactive Posture Hardening & Automated Audits

Security should not be reactive. ClawSec activates posture hardening as soon as it is installed: the agent’s configuration and runtime context are scanned immediately for known prompt-injection vectors, unsafe defaults, and misconfigurations.

For teams that want recurring automated audits, optional watchdog skills can be configured to run daily, on startup, or after major changes. These audits generate human-readable reports that explain exactly what is being checked, what has changed, and what needs attention.

Community-Driven Threat Intelligence Without Centralization

Agent security evolves quickly, and no single team can track every emerging threat. ClawSec empowers teams by integrating a live, community-driven security advisory feed powered by the National Vulnerability Database (NVD) and reports submitted via GitHub Issues. When a threat is reviewed and verified by maintainers, it becomes an advisory that any subscribed ClawSec agent can consume.

With no centralized server, updates flow through GitHub workflows, making the system transparent, auditable, and resilient. As soon as a verified advisory is published, agents can then react automatically, flagging risky skills, alerting users, and putting a block on execution paths tied to known issues.
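Consuming a verified advisory reduces to matching the locally installed skill inventory against the advisory contents. The advisory schema below (skill name plus a list of known-bad checksums) is a hypothetical shape chosen for illustration; the real feed format is defined by the ClawSec repository.

```python
def flag_risky_skills(installed: dict[str, str], advisories: list[dict]) -> list[str]:
    """Match installed skills (name -> checksum) against verified
    advisories (assumed schema: {"skill": ..., "bad_checksums": [...]}).
    Returns the names of installed skills tied to known issues."""
    flagged = []
    for adv in advisories:
        checksum = installed.get(adv["skill"])
        if checksum is not None and checksum in adv.get("bad_checksums", []):
            flagged.append(adv["skill"])
    return flagged
```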

Zero-Trust by Default

As a deliberate design choice, ClawSec takes a zero-trust stance on communication, enforcing silence as the baseline. Unauthorized egress and telemetry are blocked outright, so the agent does not phone home when an anomaly, threat, or compromise is detected. Instead, it pauses and asks for explicit user consent before any reporting or external communication.

This model, with no hidden chatter, no background data sharing, and no surprise outbound requests, ensures that agents remain accountable to their operators – not to unseen infrastructure. Security events are handled transparently, with human experts at the core of the decision loop.
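The consent gate described here can be sketched as a small function that refuses to transmit anything until the operator explicitly approves. The `ask_user` and `send_report` callables are hypothetical interfaces standing in for whatever prompt and transport the agent actually uses.

```python
def report_threat(finding: str, ask_user, send_report) -> bool:
    """Zero-trust egress gate: silence is the baseline, and nothing
    leaves the host unless the operator explicitly approves.
    Returns True only if the report was actually sent."""
    if not ask_user(f"Threat detected: {finding}. Report externally?"):
        return False  # stay silent by default
    send_report(finding)
    return True
```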

Conclusion | Build Secure. Share Secure.

While ClawSec ships with a strong set of core security capabilities, its real power lies in its extensibility. Developers are encouraged to contribute new security skills, including prompt defenses, policy-enforcement modules, auditing tools, and more. All submitted skills are reviewed, checksummed, and published to a shared catalog so that everyone can benefit.

What this creates is a shared security baseline for autonomous agents, defined and maintained by a community of experts rather than staying locked behind a vendor wall. We are excited to launch ClawSec to help organizations both build and share securely as we improve the security standard together.

Secure your OpenClaw Agents with ClawSec
Drift detection, security recommendations, automated audits, and skill integrity verification.

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
