
Bringing Rust to the Pixel Baseband

Posted by Jiacheng Lu, Software Engineer, Google Pixel Team

Google is continuously advancing the security of Pixel devices, and we have been focusing on hardening the cellular baseband modem against exploitation. Recognizing the risks associated with the complex modem firmware, Pixel 9 shipped with mitigations against a range of memory-safety vulnerabilities. For Pixel 10, Google is advancing its proactive security measures further. Following our previous discussion in "Deploying Rust in Existing Firmware Codebases", this post shares a concrete application: integrating a memory-safe Rust DNS (Domain Name System) parser into the modem firmware. The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying the foundation for broader adoption of memory-safe code in other areas.

Here we share our experience working on this project, and we hope it inspires broader use of memory-safe languages in low-level environments.

Why Modem Memory Safety Can’t Wait

In recent years, we have seen increasing interest in the cellular modem from attackers and security researchers. For example, Google's Project Zero gained remote code execution on Pixel modems over the Internet. The Pixel modem contains tens of megabytes of executable code. Given the complexity and remote attack surface of the modem, other critical memory safety vulnerabilities may remain in the predominantly memory-unsafe firmware code.

Why DNS?

The DNS protocol is most commonly known in the context of browsers finding websites. With the evolution of cellular technology, modern cellular communications have migrated to digital data networks; consequently, even basic operations such as call forwarding rely on DNS services.

DNS is a complex protocol and requires parsing of untrusted data, which can lead to vulnerabilities, particularly when implemented in a memory-unsafe language (example: CVE-2024-27227). Implementing the DNS parser in Rust decreases the attack surface associated with memory unsafety.
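To illustrate why Rust helps here, the following is a minimal, hand-rolled sketch of parsing the fixed 12-byte DNS header from untrusted bytes. It is not hickory-proto (which the modem actually uses), and the `DnsHeader`/`parse_header` names are our own; the point is that bounds violations become errors or safe panics instead of memory corruption.

```rust
/// A parsed DNS header (illustrative only; hickory-proto provides the
/// real, full implementation used in the modem).
#[derive(Debug, PartialEq)]
struct DnsHeader {
    id: u16,
    answer_count: u16,
}

/// Parse the fixed 12-byte DNS header from untrusted bytes. Truncated
/// input yields an error instead of an out-of-bounds memory access.
fn parse_header(buf: &[u8]) -> Result<DnsHeader, &'static str> {
    if buf.len() < 12 {
        return Err("truncated DNS header");
    }
    // Slice indexing is bounds-checked; an off-by-one mistake here
    // panics safely rather than corrupting memory.
    let id = u16::from_be_bytes([buf[0], buf[1]]);
    let answer_count = u16::from_be_bytes([buf[6], buf[7]]);
    Ok(DnsHeader { id, answer_count })
}

fn main() {
    // 12-byte header: id = 0xABCD, one answer record (ANCOUNT at bytes 6-7).
    let msg = [0xAB, 0xCD, 0x81, 0x80, 0, 1, 0, 1, 0, 0, 0, 0];
    assert_eq!(
        parse_header(&msg),
        Ok(DnsHeader { id: 0xABCD, answer_count: 1 })
    );
    // A truncated packet is rejected rather than read out of bounds.
    assert!(parse_header(&msg[..4]).is_err());
    println!("ok");
}
```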

Picking a DNS library

DNS already has a level of support in the open-source Rust community. We evaluated multiple open source crates that implement DNS. Based on criteria shared in earlier posts, we identified hickory-proto as the best candidate. It has excellent maintenance, over 75% test coverage, and widespread adoption in the Rust community. Its pervasiveness positions it as the de-facto DNS choice with long-term support. Although hickory-proto initially lacked no_std support, which is needed for bare-metal environments (see our previous post on this topic), we were able to add that support to it and its dependencies.

Adding no_std support

The work to enable no_std for hickory-proto was mostly mechanical. We shared the process in a previous post. We modified hickory-proto and its dependencies to enable no_std support. The upstream no_std work also resulted in a no_std URL parser, which benefits other projects.

The above PRs are great examples of how to extend no_std support to existing std-only crates.
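The core of that mechanical conversion is writing code against `core` and `alloc` rather than `std`, so the crate builds with or without the standard library. The sketch below shows the pattern with an invented `join_labels` helper; in a real crate, the `no_std` attribute is typically gated behind a feature, e.g. `#![cfg_attr(not(feature = "std"), no_std)]` at the crate root.

```rust
// Sketch of the typical no_std conversion pattern: depend on `core` and
// `alloc` instead of `std`, so the same code builds in both environments.
extern crate alloc;

use alloc::string::String;
use alloc::vec::Vec;
use core::fmt::Write;

/// Join name labels into a dotted name using only `core` + `alloc` APIs,
/// so this function works without the standard library.
fn join_labels(labels: &[&str]) -> String {
    let mut out = String::new();
    for (i, label) in labels.iter().enumerate() {
        if i > 0 {
            out.push('.');
        }
        // `write!` comes from core::fmt and works in no_std as well.
        let _ = write!(out, "{label}");
    }
    out
}

fn main() {
    let labels: Vec<&str> = vec!["dns", "google"];
    assert_eq!(join_labels(&labels), "dns.google");
    println!("ok");
}
```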

Code size study

Code size is one of the factors we evaluated when picking the DNS library to use.

Code size by category:

Category                                                         Code size
---------------------------------------------------------------  ---------
Rust-implemented shim that calls hickory-proto on receiving a
DNS response                                                     4 KB
core, alloc, compiler_builtins (reusable, one-time cost)         17 KB
hickory-proto library and dependencies                           350 KB
Sum                                                              371 KB

We built prototypes and measured size with size-optimized settings. As expected, hickory-proto is not designed with embedded use in mind and is not optimized for size. Because the Pixel modem is not tightly memory-constrained, we prioritized community support and code quality, leaving code size optimization as future work.

However, the additional code size may be a blocker for other embedded systems. This could be addressed by adding feature flags to conditionally compile only the required functionality; implementing that modularity would be valuable future work.
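As a sketch of what such modularity could look like, a hypothetical Cargo.toml feature layout might gate each record type behind its own flag (these feature names are illustrative, not actual hickory-proto features):

```toml
# Hypothetical feature layout so embedded users compile only what they
# need; the feature names here are illustrative only.
[features]
default = ["record-a", "record-aaaa"]
record-a = []
record-aaaa = []
record-srv = []
record-naptr = []
```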

Hooking up Rust to the modem firmware

Before building the Rust DNS library, we defined several Rust unit tests to cover basic arithmetic, dynamic allocations, and FFI to verify the integration of Rust with the existing modem firmware code base.
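The following is a sketch of the kind of smoke tests involved, under our own naming (the real tests are part of the firmware tree). The FFI check is omitted here because it needs the actual firmware symbols; these two exercise basic codegen and the dynamic allocation path.

```rust
// Smoke tests validating the Rust integration: basic arithmetic exercises
// codegen, and heap-using code exercises the global allocator hookup.
fn smoke_test_arithmetic() {
    let x: u64 = 6;
    assert_eq!(x * 7, 42);
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
}

fn smoke_test_allocation() {
    // Exercises the `#[global_allocator]` path (allocate, grow, free).
    let mut v = Vec::with_capacity(4);
    for i in 0..100 {
        v.push(i);
    }
    assert_eq!(v.len(), 100);
    assert_eq!(v.iter().sum::<i32>(), 4950);

    let s = String::from("modem") + "-rust";
    assert_eq!(s, "modem-rust");
}

fn main() {
    smoke_test_arithmetic();
    smoke_test_allocation();
    println!("ok");
}
```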

Compile Rust code to staticlib

While using cargo is the default choice for compilation in the Rust ecosystem, it presents challenges when integrating it into existing build systems. We evaluated two options:

  1. Use cargo to build a staticlib before the modem builds, then add the produced staticlib to the linking step.
  2. Work directly with rustc and integrate the Rust compilation steps into the existing modem build system.

Option #1 does not scale if we add more Rust components in the future, as linking multiple staticlibs may cause duplicate symbol errors. We chose option #2 because it scales more easily and allows tighter integration with our existing build system. Our existing C/C++ codebase uses Pigweed to drive the primary build system. Pigweed supports Rust targets (example) with direct calls to rustc through Rust tools defined in GN.

We compiled all of the Rust crates, including hickory-proto, its dependencies, and core, compiler_builtins, and alloc, to rlibs. Then, we created a staticlib target with a single lib.rs file that references all of the rlib crates using the extern crate keyword.

Build core, alloc, and compiler_builtins

Android’s Rust toolchain distributes the source code of core, alloc, and compiler_builtins, and we leveraged this for the modem. They can be included in the build graph by adding a GN target with crate_root pointing to the root lib.rs of each crate.
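As a sketch, such a GN target might look like the following; the template name and source path are illustrative, not the actual Pixel build files:

```gn
# Hypothetical GN target building `core` from the toolchain's vendored
# sources (template name and path are illustrative only).
rust_library("core") {
  crate_root = "//third_party/rust/library/core/src/lib.rs"
  edition = "2021"
}
```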

Pixel modem firmware already has a well-tested, specialized global memory allocation system to support dynamic memory allocation. alloc support was added by implementing GlobalAlloc with FFI calls to the allocator's C APIs:

use core::alloc::{GlobalAlloc, Layout};

extern "C" {
    fn mem_malloc(size: usize, alignment: usize) -> *mut u8;
    fn mem_free(ptr: *mut u8, alignment: usize);
}

struct MemAllocator;

unsafe impl GlobalAlloc for MemAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        mem_malloc(layout.size(), layout.align())
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        mem_free(ptr, layout.align());
    }
}

#[global_allocator]
static ALLOCATOR: MemAllocator = MemAllocator;

Pixel modem firmware already implements a backend for the Pigweed crash facade as the global crash handler. Exposing it to the Rust panic_handler through FFI unifies crash handling for both Rust and C/C++ code.

#![no_std]
use core::panic::PanicInfo;

extern "C" {
    pub fn PwCrashBackend(signature: *const i8, file_name: *const i8, line: u32);
}

#[panic_handler]
fn panic(panic_info: &PanicInfo) -> ! {
    let mut filename = "";
    let mut line_number: u32 = 0;

    if let Some(location) = panic_info.location() {
        filename = location.file();
        line_number = location.line();
    }

    let mut cstr_buffer = [0u8; 128];
    // Never writes to the last byte to make sure `cstr_buffer` is always zero
    // terminated.
    let (_, writer) = cstr_buffer.split_last_mut().unwrap();
    for (place, ch) in writer.iter_mut().zip(filename.bytes()) {
        *place = ch;
    }

    unsafe {
        PwCrashBackend(
            "Rust panic\0".as_ptr() as *const i8,
            cstr_buffer.as_ptr() as *const i8,
            line_number,
        );
    }

    loop {}
}

Link Rust staticlib

The Pixel modem firmware linking has a step that calls the linker to link all the objects generated from C/C++ code. By using llvm-ar -x to extract object files from the Rust combined staticlib and supplying them to the linker, the Rust code appears in the final modem image.

We also experienced a performance issue caused by weak symbols during linking. Including Rust's core and compiler_builtins caused unexpected power and performance regressions in various tests. Upon analysis, we found that the optimized memset and memcpy implementations provided by the modem firmware were accidentally replaced by the ones defined in compiler_builtins. This happens because both the compiler_builtins crate and the existing codebase define these symbols as weak, so the linker has no way to determine which definition to prefer. We fixed the regression by stripping the compiler_builtins object files before linking, using a one-line shell script:

llvm-ar -t <rust staticlib> | grep compiler_builtins | xargs llvm-ar -d <rust staticlib>

Integrating hickory-proto

Exposing the Rust API and calling back to C++

For the DNS parser, we declared the DNS response parsing API in C and then implemented the same API in Rust.

int32_t process_dns_response(uint8_t*, int32_t);

The Rust function returns an integer error code. The DNS answers received in the response must be written back into in-memory data structures that are coupled to the original C implementation, so the Rust implementation dispatches to existing C functions to do this.

#[no_mangle]
pub unsafe extern "C" fn process_dns_response(
    dns_response: *const u8,
    response_len: i32,
) -> i32 {
    // ... validate inputs `dns_response` and `response_len`.

    // SAFETY:
    // This is safe because `dns_response` is null-checked above, and
    // `response_len` is safe as long as it is set correctly by vendor code.
    match process_response(unsafe {
        slice::from_raw_parts(dns_response, response_len as usize)
    }) {
        Ok(()) => 0,
        Err(err) => err.into(),
    }
}

fn process_response(response: &[u8]) -> Result<()> {
    let response = hickory_proto::op::Message::from_bytes(response)?;
    let response = hickory_proto::xfer::DnsResponse::from_message(response)?;

    for answer in response.answers() {
        match answer.record_type() {
            hickory_proto::rr::RecordType::... => {
                // SAFETY:
                // This is safe because the callback function does not store
                // references to the inputs or their members.
                unsafe {
                    callback_to_c_function(...)?;
                }
            }

            // ... more match arms omitted.
        }
    }

    Ok(())
}

In our case, the DNS response parsing API is simple enough to hand-write, while the callbacks into the C functions that handle the response involve complex data type conversions. Therefore, we leveraged bindgen to generate FFI code for the callbacks.

Build third-party crates

Even with all features disabled, hickory-proto pulls in more than 30 dependent crates. Manually written build rules are hard to keep correct and scale poorly when upgrading dependencies to new versions.

Fuchsia has developed cargo-gnaw to support building its third-party Rust crates. Cargo-gnaw works by invoking cargo metadata to resolve dependencies, then parsing the result and generating GN build rules. This ensures correctness and eases maintenance.

Conclusion

The Pixel 10 series of phones marks a pivotal moment, being the first Pixel device to integrate a memory-safe language into its modem.

While replacing one piece of risky attack surface is itself valuable, this project lays the foundation for future integration of memory-safe parsers and code into the cellular baseband, ensuring the baseband’s security posture will continue to improve as development continues.

Special thanks to Armando Montanez, Bjorn Mellem, Boky Chen, Cheng-Yu Tsai, Dominik Maier, Erik Gilling, Ever Rosales, Hungyen Weng, Ivan Lozano, James Farrell, Jeffrey Vander Stoep, Jiacheng Lu, Jingjing Bu, Min Xu, Murphy Stein, Ray Weng, Shawn Yang, Sherk Chung, Stephan Chen, Stephen Hines.

Security for the Quantum Era: Implementing Post-Quantum Cryptography in Android

Posted by Eric Lynch, Product Manager, Android and Dom Elliott, Group Product Manager, Google Play

Modern digital security is at a turning point. We are on the threshold of using quantum computers to solve "impossible" problems in drug discovery, materials science, and energy—tasks that even the most powerful classical supercomputers cannot handle. However, the same unique ability to consider different options simultaneously also allows these machines to bypass our current digital locks. This puts the public-key cryptography we’ve relied on for decades at risk, potentially compromising everything from bank transfers to trade secrets. To secure our future, it is vital to adopt the new Post-Quantum Cryptography (PQC) standards from the National Institute of Standards and Technology (NIST) before large-scale, fault-tolerant quantum computers become a reality.

To stay ahead of the curve, the technology industry must undertake a proactive, multi-year migration to Post-Quantum Cryptography (PQC). We have been preparing for a post-quantum world since 2016, conducting pioneering experiments with post-quantum cryptography, rolling out post-quantum capabilities in our products, and sharing our expertise through threat models and technical papers. For Android, the objective extends beyond patching individual applications or transport protocols. The imperative is to ensure that the entire platform architecture is resilient for the decades to come.

We are beginning tests of PQC enhancements starting in the next Android 17 beta, followed by general availability in the Android 17 production release. This deployment introduces a comprehensive architectural upgrade that is being rolled out across the operating system. By integrating the recently finalized NIST PQC standards deep into the platform, we’re establishing a new, quantum-resistant chain of trust. This chain of trust secures the platform continuously—from the moment the OS powers on, to the execution of applications distributed globally. Android is swapping today’s digital locks for advanced encryption to help enhance the security of every app you download—no matter how powerful future supercomputers get.

Securing the foundation: Verified boot and hardware trust

Security on any computing device begins when the hardware starts; if the underlying operating system is compromised, all subsequent software protections fail. As quantum computing advances, adversaries could potentially forge digital signatures to bypass these foundational integrity checks. To secure the platform against this looming threat, Android 17 introduces two major post-quantum cryptographic (PQC) upgrades:

  1. Upgrading Android Verified Boot (AVB): The AVB library is integrating the Module-Lattice-Based Digital Signature Algorithm (ML-DSA). This provides quantum-resistant digital signatures, ensuring the software loaded during the boot sequence remains highly resistant to unauthorized modification.
  2. Migrating Remote Attestation: Android 17 begins the transition of Remote Attestation to a fully PQC-compliant architecture under the current standards. By updating KeyMint's certificate chains to support quantum-resistant algorithms, devices can securely prove their state to relying parties, maintaining trust in a post-quantum environment.

Empowering developers: Android Keystore updates

Protecting the underlying operating system is only the first layer of defense; developers must be equipped with the cryptographic primitives necessary to leverage PQC keys and establish robust identity verification.

Implementing lattice-based cryptography, which requires significantly larger key sizes and memory footprints than classical elliptic curve cryptography, within the severely resource-constrained Trusted Execution Environment (TEE) represents a major engineering achievement. This capability supports the hardware roots of trust, which can now generate and verify post-quantum signatures.

Building on this hardware foundation, Android 17 updates Android Keystore to natively support ML-DSA. This allows applications to leverage quantum-safe signatures entirely within the device’s secure hardware, isolating sensitive key material from the main operating system. The SDK exposes both ML-DSA-65 and ML-DSA-87, enabling developers to seamlessly integrate these using the standard KeyPairGenerator API. This establishes a new era of identity and authentication for the app ecosystem without requiring developers to engineer proprietary cryptographic implementations.

Ecosystem scale: Bringing hybrid signing to Google Play apps and games

Android is committed to ensuring the platform is quantum-resistant and to extending that chain of resistance to application signatures. The mechanisms used to verify the authenticity of applications are being upgraded to ensure that app installations and subsequent updates are strictly tamper-proof against quantum-enabled signature forgery. The platform will verify PQC signatures over APKs to enable this chain of trust.

To bring these critical protections to the wider developer community with minimal friction, the transition will be supported through Play App Signing. This approach provides an immediate bridge to quantum safety for the majority of active installs. Google Play will let developers automatically generate 'hybrid' signature blocks that combine classical and PQC keys.

Updating keys across billions of active devices is a complex operational endeavor. Play App Signing leverages Google Cloud KMS, which helps ensure industry-leading compliance standards, to secure signing keys. By managing signing keys securely in the cloud, Google Play enables developers to seamlessly upgrade their app security to PQC standards without the burden of complex, manual key management.

During the Android 17 release cycle, Google Play will handle the generation of quantum-safe ML-DSA signing keys for new apps and existing apps that opt in, independent of the application's target API level. Later, developers will be able to choose their own classical and ML-DSA signing keys and delegate them to Google Play for their hybrid key upgrade. To promote security best practices, Google Play will also start prompting developers to upgrade their signing keys at least every two years.

The cryptographic roadmap: From authenticity to privacy

Google’s post-quantum work began in 2016, and Android 17 marks the first phase of Android’s own transition:

  • Securing the foundation: We are upholding the integrity of our attestation and chain of trust by incorporating ML-DSA into Android Verified Boot.
  • Empowering developers: The inclusion of ML-DSA support within Android Keystore and Play App Signing allows developers to safeguard their users and applications.
  • Ecosystem scale: By using hybrid signatures for APKs, developers can create a protected transition that preserves current trust while adding post-quantum defenses to block unauthorized updates.

Our roadmap further integrates post-quantum key encapsulation into KeyMint, Key Attestation and Remote Key Provisioning. This evolution is intended to bolster the security of the entire identity lifecycle—from hardware-level DICE measurements to our remote attestation servers—ensuring the Android ecosystem remains resilient and private against the quantum threats of tomorrow.

Google Expands AI Scam Protection to Samsung Galaxy S26

Google expands Gemini-powered scam detection to Samsung Galaxy S26 and more Android devices, bringing on-device AI fraud protection to calls and messages.


Millions at Risk as Android Mental Health Apps Expose Sensitive Data

Oversecured flagged 1,575 flaws in 10 Android health apps with 14.7M installs, putting chats, CBT notes, and mood logs at risk, per BleepingComputer.


Staying One Step Ahead: Strengthening Android’s Lead in Scam Protection

Posted by Lyubov Farafonova, Product Manager, Phone by Google; Alberto Pastor Nieto, Sr. Product Manager, Google Messages and RCS Spam and Abuse

We’ve shared how Android’s proactive, multi-layered scam defenses utilize Google AI to protect users around the world from over 10 billion suspected malicious calls and messages every month1. While that scale is significant, the true impact of these protections is best understood through the stories of the individuals they help keep safe every day. This includes people like Majik B., an IT professional in Sunnyvale, California.

Despite his technical background, Majik recently found himself on a call that felt dangerously legitimate. While using his Pixel, he received a call that appeared to be from his bank. The number looked correct, the caller knew his name and his address, and the story about a "suspicious charge" made perfect sense. "I’m usually pretty careful about this stuff," Majik recalled, "but I stayed on the line longer than I normally would. Even knowing how these scams work, it was convincing in the moment."

The turning point came when his phone displayed a Scam Detection warning during the call, which provided a critical moment to pause and reflect. Majik hung up, checked his bank app directly, and confirmed there was no fraudulent charge. For Majik, Scam Detection was the intervention he needed: “The warning is what made me pause and avoid a bad situation.”

While stories like Majik’s show how our existing protections provide a robust shield against scams, our work isn't done. As scammers evolve their tactics and create more convincing and personalized threats, we’re using the best of Google AI to stay one step ahead.

A recent evaluation by Counterpoint Research found that Android smartphones provide the most comprehensive AI-powered protections of any mobile platform. We are committed to building on this foundation by expanding our AI-powered protections to more users and devices, while rolling out new features that utilize on-device AI to defend against increasingly sophisticated threats.

Expanding Scam Detection for Calls to Samsung Devices

To help protect you during phone calls, Scam Detection alerts you if a caller uses speech patterns commonly associated with fraud. We are bringing these protections to more of our users through new regional expansion and availability on new devices. Scam Detection for phone calls on Google Pixel devices is available in the US, UK, Australia, Canada, France, Germany, India, Ireland, Italy, Japan, Mexico, and Spain.

Scam Detection is already helping millions of users to stay safe from scammers, and we are expanding this feature to more manufacturers, starting with the Samsung Galaxy S26 series in the U.S. We are continuing to work with our partners to bring these industry-leading protections to even more users.

Powered by Gemini’s on-device model, Scam Detection provides intelligent protection against scam calls while ensuring that the processing stays on your device. This keeps your conversations private while delivering warnings in real-time. To preserve your privacy, the phone conversation processed by Scam Detection is neither stored on your device, nor shared outside of the device. To ensure you stay in total control of your experience, Scam Detection is turned off by default. When enabled, the feature only applies to calls identified as potential scams and is never used in calls with your contacts. You can easily manage these preferences in your phone settings whenever you choose.

Enhanced Protection Against Messaging Scams

We want everyone to feel secure when they open their messages, no matter where they are or what language they speak. To make this possible, we’ve now expanded Scam Detection for Google Messages to more than 20 countries. This includes support for several languages including English, Arabic, French, German, Portuguese, and Spanish.

Beyond reaching more people, we are also making our protections more intelligent. We are enhancing how Google Messages identifies fraudulent texts by utilizing our Gemini on-device model on the latest Android flagship devices in the US, Canada, and the UK. The added power of Gemini’s on-device model allows for a much more nuanced analysis of complex conversational threats.

For example, it can better detect the subtle conversational patterns used in job offer scams or sophisticated romance baiting scams (also known as “pig butchering”), a deceptive tactic where a scammer builds a long-term "relationship" with a potential victim to gain their trust, before tricking them into a fraudulent investment. Because these methods rely on gradual manipulation and don’t present typical warning signs, they need more advanced capabilities to catch them at scale. These advanced protections are now rolling out on Google Pixel 10 series and other select devices, and will be available on the Samsung Galaxy S26 series.

Gemini-powered Scam Detection alerts a user to a job offer scam

Using the Best of Google AI to Set the Standard in Mobile Scam Protection

Android continues to set the standard in mobile scam protections by leveraging advanced AI to identify and intercept threats as they happen. As scammers’ strategies shift, we remain committed to developing equally adaptive and intelligent defenses. Our goal is to provide you with peace of mind so you can continue to connect and communicate with confidence, knowing our multi-layered defenses are there to help protect your financial and personal data against mobile scams.


Disclaimers

1: This total comprises all instances where a message or call was proactively blocked or where a user was alerted to potential spam or scam activity.

Keeping Google Play & Android app ecosystems safe in 2025

Posted by Vijaya Kaza, VP and GM, App & Ecosystem Trust

The Android ecosystem is a thriving global community built on trust, giving billions of users the confidence to download the latest apps. In order to maintain that trust, we’re focused on ensuring that apps do not cause real-world harm, such as malware, financial fraud, hidden subscriptions, and privacy invasions. As bad actors leverage AI to change their tactics and launch increasingly sophisticated attacks, we’ve deepened our investments in AI and real-time defenses over the last year to maintain the upper hand and stop these threats before they reach users.

Upgrading Google Play’s AI-powered, multi-layered user protections

We’ve seen a clear impact from these safety efforts on Google Play. In 2025, we prevented over 1.75 million policy-violating apps from being published on Google Play and banned more than 80,000 bad developer accounts that attempted to publish harmful apps. These figures demonstrate how our proactive protections and push for a more accountable ecosystem are discouraging bad actors from publishing malicious apps, while our new tools help honest developers build compliant apps more easily. Initiatives like developer verification, mandatory pre-review checks, and testing requirements have raised the bar for the Google Play ecosystem, significantly reducing the paths for bad actors to enter.

User safety is at the core of everything we build. Over the years, we’ve continually introduced ways to help users stay safe and make informed app choices — from parental controls to data safety transparency and app badges. We’re constantly improving our policies and protections to encourage safe, high-quality apps on Google Play and stop bad actors before they cause harm.

Apps on Google Play undergo rigorous reviews for safety and compliance with our policies. Last year, we shared that Google Play runs over 10,000 safety checks on every app we publish, and we continue to check and recheck apps after they’ve been published. In 2025, we continued scaling our defenses even further by:

  • Boosting AI-enhanced app detection: We integrated Google’s latest generative AI models into our review process, helping our human review team continue to find complex malicious patterns faster.
  • Preventing unnecessary access to sensitive data: We prevented over 255,000 apps from getting excessive access to sensitive user data and continued to strengthen our privacy policies. Our commitment to privacy-forward app development, supported by tools like Play Policy Insights in Android Studio and the Data safety section, has empowered developers to minimize privacy-sensitive permission requests and prioritize the user in their design choices.
  • Blocking spam ratings and reviews: Whether they lead to review inflation or deflation, spam ratings and reviews can negatively impact our users’ trust and our developers’ growth. We’re continually evolving our detection models to help ensure app reviews are accurate. Our anti-spam protections blocked 160 million spam ratings and reviews last year, including inflated and deflated reviews. We also prevented an average 0.5-star rating drop for apps targeted by review bombing, protecting our users and developers from unhelpful reviews.
  • Safeguarding kids and families: Our approach to kids and families is built on the core belief that children deserve a safe, enriching digital environment. Our commitment is to empower parents with robust tools while providing children with access to high-quality, age-appropriate content. Last year, we announced new layers of protection, in addition to our existing safeguards, to prevent younger audiences from discovering or downloading apps involving activities like gambling or dating.

Enhancing Google Play Protect to help keep the entire Android ecosystem safe

We also continued to improve our protections for the broader Android ecosystem, by expanding Google Play Protect and real-time security measures like in-call scam protections to help keep users safe from scams, fraud, and other threats.

As Android’s built-in defense against malware and unwanted software, Google Play Protect now scans over 350 billion Android apps daily. This proactive protection constantly checks both Play apps and those from other sources to ensure they are not potentially harmful. And, last year, its real-time scanning capability identified more than 27 million new malicious apps from outside Google Play, warning users or blocking the app to neutralize the threat. To benefit from these protections, we recommend that users always keep Google Play Protect on.

While fraudsters are constantly evolving their tactics, Google Play Protect is evolving faster. Last year, we expanded:

  • Enhanced fraud protection: Google Play Protect’s enhanced fraud protection analyzes and automatically blocks the installation of apps that may abuse sensitive permissions to commit financial fraud. This protection is triggered when a user attempts to install an app from an "Internet-sideloading source" — such as a web browser or messaging app — that requests a sensitive permission. Building on the success of our initial pilot in Singapore, we expanded enhanced fraud protection to 185 markets, now covering more than 2.8 billion Android devices. In 2025, we blocked 266 million risky installation attempts and helped protect users from 872,000 unique, high-risk applications.
  • In-call scam protection: We also introduced new protections to combat social engineering attacks during phone calls. This feature preemptively disables the ability to turn off Google Play Protect during phone calls, stopping bad actors from being able to trick users into disabling their device's built-in defenses to download a malicious app while on a call.

Partnering with developers for a more secure, privacy-friendly future

Keeping Android and Google Play safe requires deep collaboration. We want to thank our global developer community for their partnership and for sharing their feedback on the tools and support they need to succeed.

In 2025, we focused on reducing friction for developers and providing them with tools to safeguard their businesses:

  • Building safer apps more easily: We’re helping developers streamline their work by bringing insights directly into their natural workflows. It starts with Play Policy Insights in Android Studio, which gives developers real-time feedback as they code. We focused first on permissions and APIs that grant deeper system access or handle personal data, like location or photos. This gives developers a head start on policy requirements, including prominent disclosures or usage declarations, while they’re still building. When developers move to Play Console to prepare their apps for submission, our expanded pre-review checks help catch common reasons for rejection, like improper usage of credentials or permissions and broken privacy policy links, ensuring smoother, faster reviews.
  • Stronger threat detection with Play Integrity API: Every day, apps and games make over 20 billion checks with Play Integrity API to protect against abuse and unauthorized access. In 2025, we added hardware-backed signals to make it even harder for bad actors to spoof devices and introduced new in-app prompts that let users fix common issues like network errors without leaving the app. We also launched device recall in beta to help developers identify repeat bad actors even after a device has been reset, all while protecting user privacy.
  • Building trust through developer verification: We’ve seen how effective developer verification is on Google Play, and now we’re applying those lessons to the broader Android ecosystem. By ensuring there is a real, accountable identity behind every app, verification helps legitimize authentic developers and prevents bad actors from hiding behind anonymity to repeatedly cause harm. After gathering feedback during our early access period, we’ll open up verification to all developers this year. We’ve also added a dedicated account type for students and hobbyists, which will allow them to distribute these apps to a limited number of devices without the full verification requirements.
  • Greater security with every Android release: In Android 16, developers can protect users’ most private information, like bank logins, with just one line of code. We’ve automatically integrated this feature into certain apps for an instant security boost against “tapjacking,” a trick where bad apps use hidden layers to steal clicks for ad fraud.

Looking ahead

Our top priority remains making Google Play and Android the most trusted app ecosystems for everyone. This year, we’ll continue to invest in AI-driven defenses to stay ahead of emerging threats and equip Android developers with the tools they need to build apps safely. To empower developers who distribute their apps on Google Play, we’ll maintain our focus on embedding checks to help build apps that are compliant by design, while providing guidance to help proactively avoid policy violations before an app is published. We’ll also roll out Android developer verifications to hold bad actors accountable and prevent them from hiding behind anonymity to cause repeated harm.

Thank you for being part of the Google Play and Android community as we work together to build a safer app ecosystem.

Google Warns Over 1 Billion Android Phones Are Now at Risk

Google warns that over 40% of Android devices no longer receive security updates, leaving more than 1 billion devices exposed to malware and spyware attacks.


New Android Theft Protection Feature Updates: Smarter, Stronger

Posted by Nataliya Stanetsky, Fabricio Ferracioli, Elliot Sisteron, Irene Ang of the Android Security Team

Phone theft is more than just losing a device; it's a form of financial fraud that can leave you suddenly vulnerable to the theft of your personal data and finances. That’s why we're committed to providing multi-layered defenses that help protect you before, during, and after a theft attempt.

Today, we're announcing a powerful set of theft protection feature updates that build on our existing protections, designed to give you greater peace of mind by making your device a much harder target for criminals.

Stronger Authentication Safeguards

We've expanded our security to protect you against an even wider range of threats. These updates are now available for Android devices running Android 16+.

More User Control for Failed Authentications: In Android 15, we launched Failed Authentication Lock, a feature that automatically locks the device's screen after excessive failed authentication attempts. This feature is now getting a new dedicated enable/disable toggle in settings, giving you more granular control over your device's security.

Expanding Identity Check to cover more: Early in 2025, we enabled Identity Check for Android 15+, which requires the user to utilize biometrics when performing certain actions outside of trusted places. Later in the year, we extended this safeguard to cover all features and apps that use the Android Biometric Prompt. This means that critical tools that utilize Biometric Prompt, like third-party banking apps and Google Password Manager, now automatically benefit from the additional security of Identity Check.

Stronger Protection Against Screen Lock Guessing: We’re making it much harder for a thief to guess your PIN, pattern, or password by increasing the lockout time after failed attempts. To ensure you aren’t locked out by mistake (by a curious child, for instance), identical incorrect guesses no longer count toward your retry limit.

Further Hardening Android GPUs

Posted by Liz Prucka, Hamzeh Zawawy, Rishika Hooda, Android Security and Privacy Team

Last year, Google's Android Red Team partnered with Arm to conduct an in-depth security analysis of the Mali GPU, a component used in billions of Android devices worldwide. This collaboration was a significant step in proactively identifying and fixing vulnerabilities in the GPU software and firmware stack.

While finding and fixing individual bugs is crucial, and progress continues toward eliminating them entirely, making bugs unreachable by restricting the attack surface is another effective, and often faster, way to improve security. This post details our efforts in partnership with Arm to further harden the GPU by reducing the driver's attack surface.

The Growing Threat: Why GPU Security Matters

The Graphics Processing Unit (GPU) has become a critical and attractive target for attackers due to its complexity and privileged access to the system. The scale of this threat is significant: since 2021, the majority of Android kernel driver-based exploits have targeted the GPU. These exploits primarily target the interface between the User-Mode Driver (UMD) and the highly privileged Kernel-Mode Driver (KMD), where flaws can be exploited by malicious input to trigger memory corruption.

Partnership with Arm

Our goal is to raise the bar on GPU security, ensuring the Mali GPU driver and firmware remain highly resilient against potential threats. We partnered with Arm to conduct an analysis of the Mali driver, used on approximately 45% of Android devices. This collaboration was crucial for understanding the driver’s attack surface and identifying areas that posed a security risk, but were not necessary for production use.

The Right Tool for the Job: Hardening with SELinux

One of the key findings of our investigation was the opportunity to restrict access to certain GPU IOCTLs. IOCTLs are the interface through which user space sends input to and receives output from the GPU kernel driver, and they therefore form its primary attack surface. This approach builds on earlier kernel hardening efforts, such as those described in the 2016 post Protecting Android with More Linux Security. Mali IOCTLs can be broadly categorized as:

  • Unprivileged: Necessary for normal operation.
  • Instrumentation: Used by developers for profiling and debugging.
  • Restricted: Should not be used by applications in production. This includes IOCTLs which are intended only for GPU development, as well as IOCTLs which have been deprecated and are no longer used by a device’s current User-Mode Driver (UMD) version.

Our goal is to block access to deprecated and debug IOCTLs in production. Instrumentation IOCTLs are intended for use by profiling tools to monitor system GPU performance and are not meant to be used directly by applications in production; access to them is therefore restricted to shell or applications marked as debuggable. Unprivileged IOCTLs remain accessible to regular applications.
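To make the categorization concrete, the restriction can be expressed with SELinux extended-permission rules. The sketch below is illustrative only: the type labels and hexadecimal ioctl numbers are placeholders, not the shipped Mali policy.

```
# Illustrative sketch (placeholder labels and ioctl numbers).
# Once an allowxperm rule exists for a (source, target, class) triple,
# any ioctl not explicitly listed is denied for that triple.

# Regular apps may issue only the unprivileged ioctls:
allowxperm appdomain gpu_device:chr_file ioctl { 0x0000 0x0001 0x0002 };

# Restricted (deprecated/debug) ioctls must never be granted to apps:
neverallowxperm appdomain gpu_device:chr_file ioctl { 0x1110 0x1111 0x1112 };
```

The neverallowxperm rule acts as a compile-time guard: any device policy that tries to grant a restricted ioctl to an app domain will fail the policy build.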

A Staged Rollout

We took an iterative, staged approach to rolling out the policy on devices using the Mali GPU. This allowed us to carefully monitor real-world usage and collect data to validate the policy, minimizing the risk of breaking legitimate applications before moving to broader adoption:

  1. Opt-In Policy: We started with an "opt-in" policy. We created a new SELinux attribute, gpu_harden, that disallowed instrumentation ioctls. We then selectively applied this attribute to certain system apps to test the impact. We used the allowxperm rule to audit, but not deny, access to the intended resource, and monitored the denial logs to ensure no breakage.
  2. Opt-Out Policy: Once we were confident that our approach was sound, we moved to an "opt-out" policy. We created a gpu_debug domain that would allow access to instrumentation ioctls. All applications were hardened by default, but developers could opt out by:
  • Running on a rooted device.
  • Setting the android:debuggable="true" attribute in their app's manifest.
  • Requesting a permanent exception in the SELinux policy for their application.

This approach allowed us to roll out the new security policy broadly while minimizing the impact on developers.
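The opt-out stage can be sketched in policy terms as follows. The attribute name mirrors the description above, but the rules themselves are an illustration under assumed labels and ioctl numbers, not the shipped policy.

```
# Attribute for domains still permitted to use instrumentation ioctls
# (shell, debuggable apps, and apps with a policy exception).
attribute gpu_debug;
allowxperm gpu_debug gpu_device:chr_file ioctl { 0x2220 0x2221 0x2222 };

# All other app domains are hardened by default: with no matching
# allowxperm rule, the instrumentation ioctls are simply denied.
neverallowxperm { appdomain -gpu_debug } gpu_device:chr_file ioctl { 0x2220 0x2221 0x2222 };
```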

Step-by-Step Instructions for Adding Your SELinux Policy

To help our partners and the broader ecosystem adopt similar hardening measures, this section provides a practical, step-by-step guide for implementing a robust SELinux policy to filter GPU ioctls. This example is based on the policy we implemented for the Mali GPU on Android devices.

The core principle is to create a flexible, platform-level macro that allows each device to define its own specific lists of GPU ioctl commands to be restricted. This approach separates the general policy logic from the device-specific implementation.

Official documentation detailing the added macro and GPU security policy is available at:

SELinux Hardening Macro: GPU Syscall Filtering

Android Security Change: Android 16 Behavior Changes

Step 1: Utilize the Platform-Level Hardening Macro

The first step is to use a generic macro, built into the platform's system/sepolicy, that can be used by any device. This macro establishes the framework for filtering different categories of ioctls.

In the file system/sepolicy/public/te_macros, a new macro is created. This macro allows device-specific policies to supply their own lists of ioctls to be filtered. The macro is designed to:

  • Allow all applications (appdomain) access to a defined list of unprivileged ioctls.
  • Restrict access to sensitive "instrumentation" ioctls, only permitting them for debugging tools like shell or runas_app when the application is debuggable.
  • Block access to privileged ioctls based on the application's target SDK version, maintaining compatibility for older applications.
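A minimal sketch of what such a macro can look like is below. The macro name and argument layout are hypothetical; the documentation linked above describes the actual macro added to system/sepolicy.

```
# Hypothetical te_macros entry:
#   gpu_harden(device_label, unpriv_ioctls, instrumentation_ioctls)
define(`gpu_harden', `
  # Every app may use the unprivileged (production) ioctls.
  allowxperm appdomain $1:chr_file ioctl { $2 };
  # Instrumentation ioctls only for shell and debuggable-app tooling.
  allowxperm { shell runas_app } $1:chr_file ioctl { $3 };
')
```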

Step 2: Define Device-Specific IOCTL Lists

With the platform macro in place, you can now create a device-specific implementation. This involves defining the exact ioctl commands used by your particular GPU driver.

  1. Create an ioctl_macros file in your device's sepolicy directory (e.g., device/your_company/your_device/sepolicy/ioctl_macros).
  2. Define the ioctl lists inside this file, categorizing them as needed. Based on our analysis, we recommend at least mali_production_ioctls, mali_instrumentation_ioctls, and mali_debug_ioctls. These lists will contain the hexadecimal ioctl numbers specific to your driver.

    For example, you can define your IOCTL lists as follows:

    define(`mali_production_ioctls', `0x0000, 0x0001, 0x0002')
    define(`mali_instrumentation_ioctls', `0x2220, 0x2221, 0x2222')
    define(`mali_debug_ioctls', `0x1110, 0x1111, 0x1112')

Arm has provided official categorization of their IOCTLs in Documentation/ioctl-categories.rst of their r54p2 release. This list will continue to be maintained in future driver releases.
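One practical detail when building these lists: SELinux extended permissions match on the lower 16 bits of an ioctl command (the 8-bit driver type and 8-bit function number), not on the full 32-bit value that also encodes direction and size bits. A small helper, illustrative rather than part of any official tooling, for deriving the value to put in a list:

```python
def ioctl_xperm(cmd: int) -> int:
    """Return the 16-bit value (type << 8 | nr) that SELinux ioctl
    extended permissions match; direction/size bits are ignored."""
    return cmd & 0xFFFF

# A full (hypothetical) ioctl command such as 0xC0188000 is listed
# in the policy as its low 16 bits:
print(hex(ioctl_xperm(0xC0188000)))  # -> 0x8000
```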

Step 3: Apply the Policy to the GPU Device

Now, you apply the policy to the GPU device node using the macro you created.

  1. Create a gpu.te file in your device's sepolicy directory.
  2. Call the platform macro from within this file, passing in the device label and the ioctl lists you just defined.
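Using the hypothetical macro and list names from the previous steps, the resulting gpu.te could look like:

```
# device/your_company/your_device/sepolicy/gpu.te
gpu_harden(gpu_device, mali_production_ioctls, mali_instrumentation_ioctls)
```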

Step 4: Test, Refine, and Enforce

As with any SELinux policy development, the process should be iterative: begin in an audit-only mode, monitor denial logs, refine your ioctl lists, and only then move to enforcement. This is consistent with the best practices for SELinux policy development outlined in the Android Open Source Project documentation.
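During the audit phase, avc messages collected from the device (e.g. via adb logcat or the audit log) carry an ioctlcmd field that can be checked against your lists. A small, illustrative parser:

```python
import re

def ioctl_denials(log_text: str) -> list[int]:
    """Extract ioctlcmd values from avc messages so they can be
    compared against the device's ioctl category lists."""
    return [int(m, 16) for m in re.findall(r"ioctlcmd=(0x[0-9a-fA-F]+)", log_text)]

# Roughly what an avc message for an ioctl denial looks like:
sample = ('avc: denied { ioctl } for path="/dev/mali0" ioctlcmd=0x2220 '
          'scontext=u:r:untrusted_app:s0 tcontext=u:object_r:gpu_device:s0 '
          'tclass=chr_file')
print([hex(c) for c in ioctl_denials(sample)])  # -> ['0x2220']
```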

Conclusion

Attack surface reduction is an effective approach to security hardening because it renders vulnerabilities unreachable: it protects users not only against existing vulnerabilities, but also against those not yet discovered and those that might be introduced in the future. This effort spans Android and Android OEMs, and required close collaboration with Arm. The Android security team is committed to collaborating with ecosystem partners to drive broader adoption of this approach and further harden the GPU.

Acknowledgments

Thank you to Jeffrey Vander Stoep for his valuable suggestions and extensive feedback on this post.

Android expands pilot for in-call scam protection for financial apps

Posted by Aden Haussmann, Associate Product Manager and Sumeet Sharma, Play Partnerships Trust & Safety Lead

Android uses the best of Google AI and our advanced security expertise to tackle mobile scams from every angle. Over the last few years, we’ve launched industry-leading features to detect scams and protect users across phone calls, text messages and messaging app chat notifications.

These efforts are making a real difference in the lives of Android users. According to a recent YouGov survey[1] commissioned by Google, Android users were 58% more likely than iOS users to report they had not received any scam texts in the prior week[2].

But our work doesn’t stop there. Scammers are continuously evolving, using more sophisticated social engineering tactics to trick users, while on the phone, into sharing their screen, visiting malicious websites, revealing sensitive information, sending funds, or downloading harmful apps. One popular scam involves criminals impersonating banks or other trusted institutions on the phone to try to manipulate victims into sharing their screen in order to reveal banking information or make a financial transfer.

To help combat these types of financial scams, we launched a pilot earlier this year in the UK focused on in-call protections for financial apps.

How the in-call scam protection works on Android

When you launch a participating financial app while screen sharing and on a phone call with a number that is not saved in your contacts, your Android device[3] will automatically warn you about the potential dangers and give you the option to end the call and to stop screen sharing with just one tap. The warning includes a 30-second pause period before you’re able to continue, which helps break the ‘spell’ of the scammer's social engineering, disrupting the false sense of urgency and panic commonly used to manipulate you into a scam.

Bringing in-call scam protections to more users on Android

The UK pilot of Android’s in-call scam protections has already helped thousands of users end calls that could have cost them a significant amount of money. Following this success, and alongside recently launched pilots with financial apps in Brazil and India, we’ve now expanded this protection to most major UK banks.

We’ve also started to pilot this protection with more app types, including peer-to-peer (P2P) payment apps. Today, we’re taking the next step in our expansion by rolling out a pilot of this protection in the United States[4] with a number of popular fintechs like Cash App and banks, including JPMorganChase.

We are committed to collaborating across the ecosystem to help keep people safe from scams. We look forward to learning from these pilots and bringing these critical safeguards to even more users in the future.

Notes


  1. Google/YouGov survey, July-August, n=5,100 (1,700 each in the US, Brazil and India), with adults who use their smartphones daily and who have been exposed to a scam or fraud attempt on their smartphone. Survey data were weighted to the adult smartphone population in each country.

  2. Among users who use the default texting app on their smartphone.  

  3. Compatible with Android 11+ devices 

  4. US users of the US versions of the apps; rollout begins Dec. 2025 
