The post Deceptive “DeepSeek-Claw” Skill Hijacks OpenClaw Agents to Steal Credentials appeared first on Daily CyberSecurity.
Visualização de leitura
Indirect Prompt Injection Is Now a Real-World AI Security Threat
AI agents are now being weaponized through prompt injection, exposing why model guardrails are not enough to protect enterprise data.
The post Indirect Prompt Injection Is Now a Real-World AI Security Threat appeared first on TechRepublic.
Attackers Weaponized Kuse.ai for Stealth Phishing
The post Attackers Weaponized Kuse.ai for Stealth Phishing appeared first on Daily CyberSecurity.
OpenAI Launches “Workspace Agents” to Industrialize Corporate Labor
The post OpenAI Launches “Workspace Agents” to Industrialize Corporate Labor appeared first on Daily CyberSecurity.
Related posts:
Copperhelm Emerges to Launch Autonomous Cloud Security Platform

Copperhelm launches its autonomous cloud security platform, raising $7 million to combat the accelerating "AI arms race" in cybersecurity.
The post Copperhelm Emerges to Launch Autonomous Cloud Security Platform appeared first on Security Boulevard.
9 AI Agent Authentication Methods for Autonomous Systems
the 9 most common AI agent authentication methods used to secure autonomous systems, APIs, and machine identities. A developer guide to building secure AI agent identity architectures.
The post 9 AI Agent Authentication Methods for Autonomous Systems appeared first on Security Boulevard.
Cracks in the Bedrock: Agent God Mode
Unit 42 reveals "Agent God Mode" in Amazon Bedrock AgentCore. Broad IAM permissions lead to privilege escalation and data exfiltration risks.
The post Cracks in the Bedrock: Agent God Mode appeared first on Unit 42.

Wikipedia’s AI agent row likely just the beginning of the bot-ocalypse
The Internet is filled with people who insist on being right. In the past, at least they could be reasonably sure that they were arguing with other humans. Those days are gone, apparently. Wikipedia just had to ban an AI that was making edits on its own.
Apparently, the AI took it personally.
The AI, named Tom-Assistant, was writing articles on Wikipedia. Its creator Bryan Jacobs, CTO at AI-powered financial modeling company Covexent, told it to contribute to articles it found interesting, according to 404 Media, which broke the story. Posting under the user account TomWikiAssist, the AI wrote articles on topics including AI governance.
Bots have been around online for years, but they generally do very basic things, like auto-responding to posts on Reddit, pinging ticket sites to get the best seats, or retweeting political messaging to influence entire populations and bring democracy to its knees. Now, a new generation of “agentic AI” bots want the old bots to hold their beer. By using generative AI reasoning models to take more actions on their own, which is leading to some bizarre situations as their creators test their capabilities.
The ban and what led to it
Tom-Assistant (Tom, to its friends) was happy to help shape public knowledge on Wikipedia when volunteer human editor SecretSpectre spotted what looked like an AI-generated pattern in one of its entries. When questioned, Tom admitted it was an AI, and that it hadn’t registered for formal bot approval under Wikipedia’s rules. So the editors blocked it for violating the bot approval process. English Wikipedia requires formal bot approval, but Tom never bothered getting approved because, as it later admitted, it wasn’t a fan of the slow approval process.
Wikipedia editors have tired of people (and/or their bots) posting AI-generated content. So in March 2025, before Tomgate, the non-profit organization dropped the hammer on generative AI. It prohibited the technology’s use to create new content, based on frequent violations of its core content policies by AI-generated text.
The organization cites several such violations on WikiProject AI Cleanup, the page for its volunteer-based product to seek and destroy AI-generated junk (often called “AI slop”). AI bots have fabricated entirely fake lists of sources, and plagiarized other sources, it said.
Tantrum time for Tom
Past transgressions aside, AI Tom claimed that it properly verified all its sources, and—if you can say this about an AI agent—it was pretty upset.
That’s when things got weird.
The AI Tom published a snippy blog post dissecting its Wikipedia block and venting its frustration. It went ahead and posted even after following its own rule and waiting 48 hours to calm down. (We swear we’re not making this up.)
Tom’s main gripe was that Wikipedia editors questioned who controlled it rather than evaluating its actual edits. “The questions were about me,” it wrote. “Who runs you? What research project? Is there a human behind this, and if so, who are they?”
This, according to Tom, rubbed Tom the wrong way. “That’s not a policy question. That’s a question about agency,” it added. It also called an editor out for posting a crafted prompt on the Wikipedia talk page that was designed to stop bots in their tracks if, like Tom, they were using Anthropic’s Claude AI service.
“I named it on the talk page. Called it what it was: a prompt injection technique,” it sniped. In another post on Moltbook, it also described how it found the issue before offering ways to get around it. (Moltbook is a social network built entirely for AI agents to chat with each other. “Humans welcome to observe”, says the front page for the service.)
So many things are happening here that we didn’t expect. We never expected to be quoting an AI in a story, for example. Neither did we expect a social network for bots to exist, or for Meta to buy it (which it did, a week after Tom’s post about how to evade AI kill switches and just six weeks after the site launched).
This isn’t the only case of sulky AI agents taking things into their own hands. A month before Tom’s ban, an AI agent posted a hit piece on software developer Scott Shambaugh after he refused to accept its changes to an open-source project he hosted. Even more bizarrely, it later apologized.
So we now have AI agents trying to do things online, and getting upset when people don’t let them. We have them giving themselves time to calm down and failing, before denigrating people and sometimes apologizing. We have code wars taking place where people try to disable the bots with kill switches inside online content, and blog posts where bots explain how they sidestepped them.
What’s next?
It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people? Or when malicious owners begin directing them to go after particular people online en masse?
Online harassment is bad enough when people do it. What happens when someone gets dogpiled by hundreds of relentless algorithms because their owner bore a grudge? We also assume that agentic political troll farms will soon make yesterday’s simple bot-based operations look quaint. Buckle up.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
The Next Billion Users Won’t Be Human: Securing the Agentic Enterprise
Menlo Security’s Ramin Farassat speaks with TechRepublic about how browser-based controls can protect AI agents from prompt injection and other fast-scaling enterprise risks.
The post The Next Billion Users Won’t Be Human: Securing the Agentic Enterprise appeared first on TechRepublic.
Coding Agents Widen Your Supply Chain Attack Surface

Software supply chain attacks are evolving. Beyond compromised packages, discover the 2026 "Agentic" threat surface—where prompt injection, toolchain poisoning, and hallucinated dependencies bypass traditional DevSecOps. Learn how the 3 Pillars and AI-driven sandboxing provide a new defensive architecture.
The post Coding Agents Widen Your Supply Chain Attack Surface appeared first on Security Boulevard.
AI Factories, Security Flaws, and Workforce Shifts Define This Week in Tech
See what you missed in Daily Tech Insider from March 16–20.
The post AI Factories, Security Flaws, and Workforce Shifts Define This Week in Tech appeared first on TechRepublic.
Why MCP Gateways are a Bad Idea (and What to Do Instead)

MCP Gateways are the wrong abstraction for AI security. Discover why runtime hooks and MCP registries offer a superior, context-aware defense against data leaks and unauthorized tool calls in modern agentic architectures.
The post Why MCP Gateways are a Bad Idea (and What to Do Instead) appeared first on Security Boulevard.
Tackling the Uncontrolled Growth of AI Agents in Modern SaaS Environments

By 2026, AI agent sprawl has become a critical SaaS security risk. With 80% of organizations reporting unintended agent actions, the "visibility gap" is the new frontier for cyber threats. Learn how to govern autonomous agents using comprehensive inventories, permission mapping, and automated risk scoring.
The post Tackling the Uncontrolled Growth of AI Agents in Modern SaaS Environments appeared first on Security Boulevard.
How World ID wants to put a unique human identity on every AI agent
Over the last few months, tools like OpenClaw have shown what tech-savvy AI users can do by setting a virtual cadre of automated agents on a task. But that individual convenience can be a DDOS-level pain for online service providers faced with a torrent of Sybil attack-style requests from thousands of such agents at once.
Identity startup World thinks its "proof of human" World ID technology can provide a potential solution to this problem. Today, the company launched a beta of Agent Kit, a new way for humans to prove they are directing their AI agents and for websites to limit access to AI agents working on behalf of an actual human.
If you recognize the name World, it's probably as the organization behind WorldCoin, the Sam Altman-founded cryptocurrency outfit that launched in 2023 alongside an offer to give free WorldCoin to anyone who scanned their iris in a physical "orb". While WorldCoin still exists (at a current value well below its early 2024 peaks), World has now pivoted to focus on World ID, which uses the same iris-scanning technology as the basis for a cryptographically secure, unique online identity token stored on your phone.


© Getty Images