
Google Expands AI Scam Protection to Samsung Galaxy S26

February 26, 2026, 09:30

Google expands Gemini-powered scam detection to Samsung Galaxy S26 and more Android devices, bringing on-device AI fraud protection to calls and messages.


Scammers Use Fake Gemini AI Chatbot for Crypto Scam

February 20, 2026, 12:32

Scammers used a fake Gemini AI chatbot to promote a bogus Google Coin presale, signaling a rise in AI-driven crypto impersonation fraud.



PromptSpy abuses Gemini AI to gain persistent access on Android

February 20, 2026, 04:49

PromptSpy is the first Android malware to abuse Google’s Gemini AI, enabling persistence and advanced spying features.

Security researchers at ESET have uncovered PromptSpy, the first known Android malware to exploit Google’s Gemini AI to maintain persistence. The malware can capture lockscreen data, block uninstallation attempts, collect device information, take screenshots, and record screen activity as video, marking a concerning evolution in AI-assisted mobile threats.

This is the second AI-powered malware discovered by ESET, following PromptLock in August 2025, the first known case of AI-driven ransomware.

Although AI is used only to keep the malicious app pinned in the recent apps list, it allows the malware to adapt to different devices and Android versions.

“Specifically, Gemini is used to analyze the current screen and provide PromptSpy with step-by-step instructions on how to ensure the malicious app remains pinned in the recent apps list, thus preventing it from being easily swiped away or killed by the system.” reads the report published by ESET. “The AI model and prompt are predefined in the code and cannot be changed. Since Android malware often relies on UI navigation, leveraging generative AI enables the threat actors to adapt to more or less any device, layout, or OS version, which can greatly expand the pool of potential victims.”

PromptSpy deploys a VNC module for remote control, abuses Accessibility Services to block removal, captures lockscreen data, records video, and uses encrypted C2 communications. The campaign appears to be driven by financial gain and mainly targets users in Argentina. The malware was likely developed in a Chinese-speaking environment. It is spread through a dedicated website rather than Google Play, and Google Play Protect can block known versions of it.

PromptSpy uses Google’s Gemini AI in a limited but clever way: to stay persistent. Instead of relying on fixed screen taps or coordinates, which often fail across different Android versions and device layouts, the malware sends Gemini a text prompt plus an XML dump of the current screen. This gives the AI a full view of buttons, text, and positions. Gemini then replies with JSON instructions telling the malware where to tap. PromptSpy repeats the process until the app is successfully locked in the recent apps list, preventing easy removal.
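ESET has not published PromptSpy's source, but the loop it describes can be sketched roughly as follows. Everything concrete here is an assumption for illustration: the model name, the prompt wording, the reply format, and the helper callbacks (`dump_screen`, `perform_tap`, `is_pinned`) are invented; only the overall flow (fixed prompt plus UI dump in, tap instructions out, repeat until the app is pinned) comes from the report.

```python
import json
import requests

# Rough sketch of the persistence loop ESET describes. The endpoint is
# Google's public generateContent API; the model name, prompt wording,
# and reply format are invented for illustration.
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/gemini-1.5-flash:generateContent")  # model name assumed

FIXED_PROMPT = (  # PromptSpy hard-codes model and prompt; this wording is invented
    "Given this Android UI dump, return a JSON list of steps, e.g. "
    '[{"action": "tap", "x": 540, "y": 1800}], that pin the current app '
    "in the recent apps list."
)

def ask_gemini_for_steps(ui_xml: str, api_key: str) -> list[dict]:
    """Send the fixed prompt plus an XML dump of the screen; parse the
    step-by-step tap instructions the model returns."""
    body = {"contents": [{"parts": [{"text": FIXED_PROMPT + "\n\n" + ui_xml}]}]}
    resp = requests.post(GEMINI_URL, params={"key": api_key}, json=body, timeout=30)
    resp.raise_for_status()
    text = resp.json()["candidates"][0]["content"]["parts"][0]["text"]
    return json.loads(text)  # real code would also strip markdown fences, retry, etc.

def pin_in_recents(dump_screen, perform_tap, is_pinned, api_key: str) -> None:
    """Repeat dump -> ask -> tap until the app is pinned. The three callbacks
    are placeholders; on-device they would use Android Accessibility APIs."""
    while not is_pinned():
        for step in ask_gemini_for_steps(dump_screen(), api_key):
            if step.get("action") == "tap":
                perform_tap(step["x"], step["y"])
```

On a real device the screen dump and gestures would go through Android's Accessibility APIs, which is why the permission request described below matters.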

ESET discovered the threat in February 2026; PromptSpy evolved from an earlier variant called VNCSpy. Samples were uploaded from Hong Kong and later Argentina, suggesting regional targeting. The malware is distributed through malicious websites impersonating Chase Bank, using branding like “MorganArg.” A related phishing app, likely from the same actor, helps deliver the final payload.

Once installed, PromptSpy abuses Accessibility Services and includes a VNC module, giving attackers full remote control of the device. It can see the screen, perform gestures, and maintain control while staying hidden in the recent apps list.

Analysis of the malicious code revealed debug strings in simplified Chinese, along with functions that handle Accessibility event types labeled in Chinese. A disabled debug method translated Android accessibility events into Chinese, suggesting with medium confidence that the malware was developed in a Chinese-speaking environment.

PromptSpy is delivered through a dropper that installs a hidden payload APK. After installation, it requests Accessibility permissions, shows a fake loading screen, and secretly contacts Gemini to lock itself in the recent apps list for persistence, following the loop sketched above: it continuously sends screen data to Gemini and executes the returned tap or swipe instructions.

[Figure: network communication between the malware and Gemini, with the prompt request and response highlighted in red rectangles]

The malware includes a VNC module for full remote control and communicates with its C2 server using AES-encrypted VNC traffic. It can steal PINs, record screens, take screenshots, and list installed apps. To prevent removal, it overlays invisible elements over uninstall buttons. Victims must reboot into Safe Mode to remove it.
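The exact wire format, cipher mode, and key handling of PromptSpy's C2 channel are not public. As a generic sketch of why AES-wrapped traffic looks opaque to simple payload inspection, an authenticated message frame might be built like this (AES-GCM and the framing are assumptions, not the malware's confirmed scheme):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generic sketch of an AES-wrapped message frame. PromptSpy's real framing,
# key exchange, and cipher mode are not public, so everything here is assumed.
def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                      # fresh 96-bit nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def open_frame(key: bytes, frame: bytes) -> bytes:
    nonce, ciphertext = frame[:12], frame[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)
frame = seal(key, b'{"cmd": "list_apps"}')      # opaque bytes on the wire
assert open_frame(key, frame) == b'{"cmd": "list_apps"}'
```

On the wire, each frame is indistinguishable from random bytes, so network inspection sees only traffic volume and endpoints, not commands.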

PromptSpy shows a new evolution in Android malware. By using generative AI to read and interpret on-screen elements, it can adapt to almost any device or interface. Instead of fixed tap coordinates, it sends a screen snapshot to AI and receives step-by-step instructions, making its persistence more resilient to UI changes.

“More broadly, this campaign shows how generative AI can make malware far more dynamic and capable of real-time decision-making.” concludes the report. “PromptSpy is an early example of generative AI-powered Android malware, and it illustrates how quickly attackers are beginning to misuse AI tools to improve impact.”


Pierluigi Paganini

(SecurityAffairs – hacking, PromptSpy)


Google: state-backed hackers exploit Gemini AI for cyber recon and attacks

February 13, 2026, 07:57

Google says nation-state actors used Gemini AI for reconnaissance and attack support in cyber operations.

Google DeepMind and the Google Threat Intelligence Group (GTIG) report a rise in model extraction, or “distillation,” attacks aimed at stealing AI intellectual property, which Google has detected and blocked. While APT groups have not breached frontier models, private firms and researchers have tried to clone proprietary systems. State-backed actors from North Korea, Iran, China, and Russia use AI for research, targeting, and phishing. Threat actors also test agentic AI, AI-powered malware like HONESTCUE, and underground “jailbreak” services.

Threat actors now use large language models to craft polished, culturally accurate phishing messages that remove common red flags like poor grammar. They also run “rapport-building” phishing, holding realistic multi-step conversations to gain trust before delivering malware.

Google reported that North Korea-linked hacker group UNC2970 used its Gemini AI model to gather intelligence on targets and support cyber operations. The company also said other threat groups now weaponize generative AI to speed up attack stages, run information operations, and even attempt model extraction attacks.

“The North Korean government-backed actor UNC2970 has consistently focused on defense targeting and impersonating corporate recruiters in their campaigns. The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance.” reads the report published by Google. “This actor’s target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.”

Iran-linked group APT42 also used generative AI tools like Gemini to boost reconnaissance and targeted social engineering. The group searched for official email addresses, researched organizations to build believable pretexts, and created tailored personas based on target biographies. The nation-state actor also used AI for language translation and understanding local context. Google disrupted the activity and disabled related assets.

In September 2025, Google tracked new malware called HONESTCUE that uses the Gemini API to generate malicious C# code on demand. Instead of storing full payloads, the malware sends prompts to Gemini, receives source code for a second-stage downloader, compiles it in memory with .NET tools, and executes it without writing files to disk. This fileless approach helps evade detection. Attackers also host payloads on platforms like Discord CDN. Researchers believe a single actor or small group is testing this AI-assisted malware as a proof of concept.
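GTIG describes HONESTCUE in .NET terms: a prompt goes out, C# source comes back, and it is compiled and run in memory. Transposed to Python purely to make the shape of the technique concrete (the source text is hard-coded and benign here, whereas the real malware fetches it from the Gemini API), the fileless step looks like:

```python
# Conceptual shape of the HONESTCUE technique, transposed from .NET/C# to
# Python purely for illustration. In the real malware, `generated_source`
# comes back from a Gemini API call instead of being hard-coded.
generated_source = '''
def stage_two():
    print("second-stage downloader logic would run here")  # benign placeholder
'''

# Compile and execute entirely in memory: no payload file ever touches disk,
# so there is nothing for signature-based file scanners to inspect.
code = compile(generated_source, "<in-memory>", "exec")
namespace: dict = {}
exec(code, namespace)
namespace["stage_two"]()
```

Because the second stage exists only as text in memory, there is no payload file for signature-based scanners to flag, which is the evasion GTIG highlights.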

In November 2025, GTIG found COINBAIT, a phishing kit built with help from AI. It pretends to be a major crypto exchange to steal login details. Some of the activity links to UNC5356, a group known for SMS and phone phishing. The kit was likely created using Lovable AI and built as a complex React website. It includes detailed “Analytics:” logs that show how it tracks and steals data. The attackers hid their systems behind Cloudflare and trusted services to avoid detection. COINBAIT shows a move toward modern web tools and cloud services, may be used by different groups, and also connects to AI-hosted ClickFix scams that trick users into installing malware like ATOMIC.

Underground forums show strong demand for AI tools built for cybercrime. Since most threat actors cannot build their own models, they rely on established services like Gemini. One example, Xanthorox, claimed to be a private custom AI for malware and phishing, but it actually ran on commercial and open-source AI tools layered together.

Attackers need stolen API keys to scale abuse, creating risks for organizations using cloud AI services. Criminals often exploit weak security in open-source AI platforms to steal and resell API keys, fueling a black market.

Google disabled accounts linked to this abuse and continues strengthening safeguards, threat detection, red teaming, and secure AI development through frameworks like SAIF and research projects such as Big Sleep and CodeMender.

“The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.” concludes Google.


Pierluigi Paganini

(SecurityAffairs – hacking, Gemini AI)
