The post White House Considers Mandatory Federal Vetting for All New AI Models appeared first on Daily CyberSecurity.
OpenAI Introduces Password-Free Login for Millions of ChatGPT Users
OpenAI’s Advanced Account Security lets ChatGPT and Codex users replace passwords with passkeys or security keys, but recovery is limited.
The post OpenAI Introduces Password-Free Login for Millions of ChatGPT Users appeared first on TechRepublic.
Ransomware Victims up 389%, TTE in Less Than Two Days: How Can Defenders Stay Ahead?

Agentic AI’s impact on ransomware (its execution, its success, and even who gets to play) is being widely felt. And we’re just getting started.
The post Ransomware Victims up 389%, TTE in Less Than Two Days: How Can Defenders Stay Ahead? appeared first on Security Boulevard.
GPT-5.5 Bio Bug Bounty Program Aims to Improve AI Safety and Performance
OpenAI has officially launched the GPT-5.5 Bio Bug Bounty program to strengthen safeguards against emerging biological risks. As artificial intelligence models become more advanced, the potential for malicious actors to generate dangerous biological information increases. Advanced persistent threats (APTs) and lone attackers could potentially misuse large language models to accelerate harmful biological research. To address […]
The post GPT-5.5 Bio Bug Bounty Program Aims to Improve AI Safety and Performance appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

A week in security (April 13 – April 19)
Last week on Malwarebytes Labs:
- This old-school scam is still working
- “Your shipment has arrived” email hides remote access software
- Browser Guard gets even better with Access Control
- “iCloud storage is full” scam is back, and now it wants your payment details
- A fake Slack download is giving attackers a hidden desktop on your machine
- Booking.com breach gives scammers what they need to target guests
- AI clickbait can turn your notifications into a scam feed
- Fake YouTube copyright notices can steal your Google login
- From fake Proton VPN sites to gaming mods, this Windows infostealer is everywhere
- April Patch Tuesday fixes two zero-days, including one under active attack
- Credit Resources Vault: Why this credit email set off our scam alarms
- Omnistealer uses the blockchain to steal everything it can
- ChatGPT under scrutiny as Florida investigates campus shooting
- Simply opening a PDF could trigger this Adobe Reader zero-day
Stay safe!
Something feel off? Check it before you click.
Malwarebytes Scam Guard helps you analyze suspicious links, texts, and screenshots instantly.
Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.
OpenAI Extends GPT-5.4-Cyber Access to Trusted Organizations Worldwide
OpenAI has announced the expansion of its “Trusted Access for Cyber” program, granting worldwide security organizations access to its advanced GPT-5.4-Cyber model. The initiative operates on a foundational premise: cutting-edge cyber capabilities must reach network defenders on a broad scale while maintaining strict trust, validation, and safety safeguards. By sharing these tools with a diverse […]
The post OpenAI Extends GPT-5.4-Cyber Access to Trusted Organizations Worldwide appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

OpenAI Launches GPT-5.4-Cyber to Boost Defensive Cybersecurity
OpenAI Rotates macOS Certificates Following Axios Supply Chain Breach
Hacker Used Claude Code, GPT-4.1 to Exfiltrate Hundreds of Millions of Mexican Records
Hacker Uses Claude and ChatGPT to Breach Multiple Government Agencies
A single threat actor compromised nine Mexican government agencies and stole hundreds of millions of citizen records in a highly sophisticated cyberattack.
The campaign, which ran from late December 2025 through mid-February 2026, highlights a dangerous shift in the modern threat landscape.
Researchers at Gambit Security recently released a full technical report detailing how the attacker relied on two major commercial artificial intelligence platforms. The publication was initially delayed to allow the affected agencies time to complete their incident response efforts.
AI Models Power the Breach
The attacker used Anthropic’s Claude Code and OpenAI’s GPT-4.1 not just for planning, but as core operational tools that drastically accelerated the attack.
According to forensic evidence recovered, Claude Code generated and executed approximately 75% of all remote commands during the intrusion.
Across 34 active sessions on live victim infrastructure, the hacker logged 1,088 individual prompts. These prompts translated into 5,317 AI-executed commands, demonstrating how deeply the AI was integrated into the exploitation phase.

Simultaneously, the attacker leveraged OpenAI’s GPT-4.1 for rapid reconnaissance and data processing. The hacker developed a custom 17,550-line Python script designed to pipe raw data harvested from compromised servers directly through the OpenAI API.
This automated system analyzed information across 305 internal servers, rapidly producing 2,597 structured intelligence reports. By automating the data analysis phase, a single operator successfully processed an intelligence volume that would traditionally require an entire team.
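The automation pattern the report describes (chunking raw harvested text and piping each chunk through an LLM API to produce per-host summaries) can be sketched roughly as below. This is an illustrative sketch only: the function names are hypothetical, and the `analyze` callable stands in for a real API client so the batching logic can be shown without a network call.

```python
# Sketch of the batch-analysis pattern described above: raw text from
# many hosts is split into chunks and each chunk is passed to an LLM
# for summarization. `analyze` is injectable; in a real pipeline it
# would wrap an API client (an assumption, not the attacker's code).

def summarize_hosts(dumps, analyze, chunk_size=4000):
    """dumps: {hostname: raw_text}; returns {hostname: [report, ...]}."""
    reports = {}
    for host, text in dumps.items():
        chunks = [text[i:i + chunk_size]
                  for i in range(0, len(text), chunk_size)]
        reports[host] = [analyze(f"Summarize findings for {host}:\n{c}")
                         for c in chunks]
    return reports

# Usage with a stub in place of a real model call:
stub = lambda prompt: f"[report: {len(prompt)} chars]"
out = summarize_hosts({"srv-01": "x" * 9000}, stub)
print(len(out["srv-01"]))  # → 3 (three 4,000-character chunks)
```

The point of the sketch is the scaling property the report highlights: one loop over hosts and chunks replaces manual review, which is how a single operator can emit thousands of structured reports.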
The integration of artificial intelligence allowed the attacker to turn unfamiliar networks into mapped targets in hours rather than days. Recovered materials showed the attacker possessed over 400 custom attack scripts.
Furthermore, the hacker used AI to quickly develop 20 tailored exploits targeting 20 specific Common Vulnerabilities and Exposures (CVEs). This high-speed capability compressed the attack timeline, allowing the threat actor to operate well below standard detection and response windows.
Despite the advanced methods used in the campaign, the actual vulnerabilities exploited were highly conventional. The targeted government agencies had basic security gaps that enabled the attacker to gain initial access and move laterally.
The underlying issues were addressable through standard security controls, highlighting a severe accumulation of technical debt within mission-critical infrastructure.
While artificial intelligence has significantly lowered the cost and complexity of executing widespread cyberattacks, the defense strategy remains rooted in foundational security practices.
Organizations must urgently address unpatched software and implement strict credential rotation policies. Enforcing network segmentation is also critical to restrict lateral movement once a perimeter is breached.
Finally, deploying robust endpoint detection and response tools is necessary to identify these rapidly compressed attack timelines before data exfiltration occurs.
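One concrete signal defenders can hunt for is command execution rate: the figures above (5,317 AI-executed commands across 34 sessions) imply machine-speed activity. A minimal detection sketch, assuming a hypothetical log of `(timestamp, session_id)` command events, might look like this:

```python
from collections import defaultdict

def flag_fast_sessions(events, max_per_minute=30):
    """events: iterable of (epoch_seconds, session_id) command events.
    Flags sessions whose peak per-minute command count exceeds the
    threshold -- sustained rates far above human typing speed are a
    rough indicator of scripted or AI-driven command execution.
    The log format and threshold are illustrative assumptions."""
    per_minute = defaultdict(lambda: defaultdict(int))
    for ts, sid in events:
        per_minute[sid][int(ts // 60)] += 1
    return sorted(
        sid for sid, buckets in per_minute.items()
        if max(buckets.values()) > max_per_minute
    )

# Example: 60 commands in one minute from session "a", 5 spread out from "b"
evts = [(i, "a") for i in range(60)] + [(100 + i * 10, "b") for i in range(5)]
print(flag_fast_sessions(evts))  # → ['a']
```

A real deployment would feed this from EDR or shell-audit telemetry and tune the threshold per environment; the value here is the idea that timeline compression itself is detectable.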
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.
The post Hacker Uses Claude and ChatGPT to Breach Multiple Government Agencies appeared first on Cyber Security News.

Claude and ChatGPT Exploited in Sweeping Cyber Campaign Against Government Agencies
In a groundbreaking technical report released by Gambit Security researcher Eyal Sela, new details have emerged about a massive cyberattack targeting government infrastructure. A single threat actor successfully leveraged artificial intelligence platforms to breach nine Mexican government agencies. The campaign, which operated from late December 2025 through mid-February 2026, resulted in the exfiltration of hundreds […]
The post Claude and ChatGPT Exploited in Sweeping Cyber Campaign Against Government Agencies appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

10 ChatGPT AI Prompts L1 SOC Analysts Can Use in Their Daily Work
Discover 10 practical ChatGPT prompts SOC analysts can use to speed up triage, analyze threats, improve documentation, and enhance incident response workflows.
The post 10 ChatGPT AI Prompts L1 SOC Analysts Can Use in Their Daily Work appeared first on TechRepublic.
AI Breakthroughs, Security Breaches, and Industry Shakeups Define the Week in Tech
See what you missed in Daily Tech Insider from March 30–April 3.
The post AI Breakthroughs, Security Breaches, and Industry Shakeups Define the Week in Tech appeared first on TechRepublic.
Fake ChatGPT Ad Blocker Chrome Extension Caught Spying on Users
Malicious Chrome Extension “ChatGPT Ad Blocker” Steals ChatGPT Conversations
As OpenAI introduces advertisements to its free tier, cybercriminals are seizing the opportunity to trick users with fake utility tools. Security researchers have discovered a malicious Google Chrome extension named “ChatGPT Ad Blocker.”
While it claims to hide unwanted ads, its true purpose is to steal private user conversations and send them to a hidden Discord channel.
Once a user installs the extension from the Chrome Web Store, it immediately sets up a silent monitoring system. It creates an alarm that fetches a remote configuration file from a GitHub repository every 60 minutes.
Because it continuously bypasses the browser’s cache, the attacker can remotely change the extension’s behavior at any time without the user knowing.
Interestingly, DomainTools researchers found that the extension’s actual ad-blocking features are completely disabled.

When a user visits the ChatGPT site, the extension injects a malicious script that clones the page, strips styling, and secretly captures all text.
After packaging the chat data, it creates a file named page_dump.html and posts it to a private Discord webhook managed by a bot named “Captain Hook.”
The attacker instantly receives your prompts, conversation history, and account metadata.

The malicious extension is tied to the developer alias “krittinkalra,” a GitHub account created around 2014. The account history shows a highly suspicious timeline, suggesting it may have been compromised or sold.
After focusing on Android kernel development until 2020, the profile went dormant for over five years before resurfacing recently with a sudden pivot to creating JavaScript-based malware.
This developer persona is also publicly linked to two active AI services: AI4ChatCo and Writecream.

These platforms claim to have millions of users and offer chatbot integration alongside automated marketing content.
The discovery of this data-harvesting Chrome extension, reported by DomainTools, raises concerns that similar data theft could occur in related applications.
To protect privacy and secure AI conversations, users should follow these essential security practices:
- Treat extensions that promise to block ads on high-value sites with extreme suspicion and scrutinize requested permissions closely.
- View affiliated platforms like AI4ChatCo and Writecream as potentially compromised until thorough security audits prove otherwise.
- Avoid out-of-band AI intermediaries, resellers, or browser add-ons, as they are uniquely positioned to read or modify private conversations without your knowledge.
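One practical way to act on the advice above is to audit which locally installed extensions can even reach a high-value site like chatgpt.com. The sketch below parses each extension’s `manifest.json` and flags broad host access; the Chrome profile path shown is an assumption (it varies by OS and profile), and the function names are illustrative.

```python
import json
from pathlib import Path

# Assumed Linux default-profile location; adjust for your OS/profile.
CHROME_EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

def risky_hosts(manifest, targets=("<all_urls>", "chatgpt.com")):
    """Return the host patterns in a manifest (host_permissions plus
    content-script matches) that can reach the target sites."""
    patterns = list(manifest.get("host_permissions", []))
    for cs in manifest.get("content_scripts", []):
        patterns.extend(cs.get("matches", []))
    return [p for p in patterns if any(t in p for t in targets)]

def audit(ext_dir=CHROME_EXT_DIR):
    """Scan <extension-id>/<version>/manifest.json files and report
    extensions whose host access overlaps the target sites."""
    findings = {}
    for manifest_path in ext_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(errors="ignore"))
        hits = risky_hosts(manifest)
        if hits:
            findings[manifest.get("name", manifest_path.parent.name)] = hits
    return findings

if __name__ == "__main__":
    for name, hits in audit().items():
        print(f"{name}: {hits}")
```

Note that extension names in manifests are sometimes localization placeholders (e.g., `__MSG_appName__`), so results may need cross-referencing against the browser’s extensions page; the permission patterns themselves are still the useful signal.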
The post Malicious Chrome Extension “ChatGPT Ad Blocker” Steals ChatGPT Conversations appeared first on Cyber Security News.

Malicious Chrome Extension “ChatGPT Ad Blocker” Targets Users, Steals Conversations
Security researchers have uncovered a malicious Google Chrome extension named “ChatGPT Ad Blocker” designed to silently steal private AI conversations. The malware cleverly disguises itself as a helpful tool, capitalizing on OpenAI’s recent decision to serve advertisements to its free-tier users. Instead of blocking ads, the extension systematically harvests user prompts, chat history, and metadata. […]
The post Malicious Chrome Extension “ChatGPT Ad Blocker” Targets Users, Steals Conversations appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
