
Child Safety at Risk as EU CSAM Detection Law Lapses, Reporting Concerns Rise


A surge in CSAM (Child Sexual Abuse Material) circulating online has become an urgent concern for authorities and child protection organizations across the EU. As digital platforms continue to play a central role in communication, the challenge of tackling child sexual exploitation has intensified. At the core of the issue is the expiration of a temporary EU legal framework that allowed online service providers to voluntarily scan private communications for CSAM. This legislation, originally introduced in 2021 as a derogation from ePrivacy rules, officially lapsed on April 3, 2026. With lawmakers failing to agree on an extension, technology companies now face an uncertain legal environment that could undermine years of progress in combating child sexual exploitation online.

Expiry of EU Law Leaves CSAM Detection in Limbo 

The now-expired framework had enabled major technology firms to proactively identify and report Child Sexual Abuse Material using tools such as hash-matching technology. This method relies on digital fingerprints to detect known abusive content with high accuracy while still maintaining user privacy.

Law enforcement agencies have consistently described such detection systems as “vital” in identifying perpetrators and rescuing victims. Without a clear legal basis, however, companies risk operating in a grey area where continuing these practices may expose them to legal challenges.

Despite this uncertainty, several major firms, including Google, Meta, Microsoft, and Snap, have stated they will continue voluntary efforts to detect CSAM. In a joint statement, they emphasized the urgency for EU institutions to establish a stable regulatory framework, noting that child safety cannot be compromised due to political delays.
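As a rough illustration of the hash-matching approach described above, the sketch below compares file fingerprints against a list of known digests. It is a simplified, assumption-laden example: production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas this sketch uses plain SHA-256 and a hypothetical known_hashes.txt list.

```python
# Minimal sketch of hash-matching detection, for illustration only.
# Real deployments use perceptual fingerprints; this example uses exact
# SHA-256 digests checked against a hypothetical known-hash list.
import hashlib
from pathlib import Path


def file_fingerprint(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_known_hashes(hash_list: Path) -> set[str]:
    """Load one lowercase hex digest per line from a hash-list file."""
    return {line.strip().lower() for line in hash_list.read_text().splitlines() if line.strip()}


def scan(paths: list[Path], known: set[str]) -> list[Path]:
    """Return the files whose fingerprints appear in the known-hash set."""
    return [p for p in paths if file_fingerprint(p) in known]
```

Exact-hash matching only flags byte-identical copies, which is why real systems rely on perceptual fingerprints and curated hash lists maintained by bodies such as NCMEC rather than a plain digest comparison like the one above.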

Sharp Decline in CSAM Reports Expected 

Authorities warn that the absence of legal clarity could lead to a dramatic drop in reports related to child sexual exploitation. Data from previous years highlights the scale of the issue. In 2025 alone, Europol processed approximately 1.1 million CyberTips received from the U.S.-based National Center for Missing & Exploited Children (NCMEC). These reports included files, videos, and images linked to Child Sexual Abuse Material, and were relevant to investigations across 24 European countries.

Officials have warned that this scenario is not hypothetical. A similar lapse in legal provisions in 2021 led to a noticeable decline in reporting, demonstrating how dependent investigations are on cooperation from digital platforms.

Widespread Criticism of EU Inaction 

The failure of EU lawmakers to renew the legislation has sparked strong reactions from policymakers, advocacy groups, and industry leaders alike. European Home Affairs Commissioner Magnus Brunner described the situation as “hard to understand,” while child protection organizations labeled it an “abject political failure.”

A coalition of 247 organizations dedicated to children’s rights issued a joint statement condemning the lapse. They argued that the inability to maintain detection mechanisms creates a “deeply alarming and irresponsible gap” in efforts to combat Child Sexual Abuse Material. According to the coalition, detection at scale is foundational in addressing child sexual exploitation. It enables companies to remove harmful content, report cases to authorities, and prevent the redistribution of abusive material. Without it, millions of illegal files could continue circulating unchecked, prolonging the suffering of victims.

Real-World Consequences for Victims 

Behind every instance of CSAM is a real child subjected to abuse. The continued circulation of such material forces victims to relive their trauma repeatedly. Advocacy groups stress that failing to detect and remove this content effectively denies children their fundamental rights, including privacy and protection.

The absence of robust detection tools also means that many victims may remain unidentified and trapped in abusive environments. Law enforcement agencies rely heavily on digital evidence to locate and rescue affected individuals. Any disruption in this process directly impacts their ability to intervene.

Commitment Amid Uncertainty 

Despite the legal ambiguity, technology companies have reaffirmed their commitment to tackling Child Sexual Abuse Material. They argue that voluntary detection practices have been in place for nearly two decades and remain a cornerstone of online safety.

These companies maintain that tools like hash-matching are essential for identifying known CSAM and preventing its spread. They also emphasize that such systems are designed to balance safety with privacy, countering concerns about overreach.

However, industry leaders have made it clear that a long-term solution must come from policymakers. Without a consistent legal framework in the EU, even well-intentioned efforts are at risk of becoming unsustainable.

Apple Introduces Age Checks for iPhone Users in the UK


Apple has introduced Apple age verification UK measures that will require iPhone and iPad users to confirm they are adults before accessing certain services, including 18-plus apps. The change comes with the iOS 26.4 update and is being implemented in response to legal requirements in certain regions, including the UK. According to Apple, users may be prompted to confirm that they are adults when creating a new Apple Account or while using specific services. This requirement applies to actions such as downloading apps or changing certain settings linked to their Apple Account.

Apple Age Verification UK: How Users Confirm Age

As part of the Apple age verification UK rollout, users can confirm their age through multiple methods. Apple may use existing account information, such as whether a credit card is already linked to the account or how long the account has been active, to help determine if a user is an adult. Users also have the option to add a credit card to confirm their age or scan a government-issued ID, such as a driver’s license or national ID. Apple has stated that credit card details or ID documents are not stored unless users choose to save them for other purposes, such as adding a payment method. To complete the process, users must update their device to the latest software version and follow prompts in the Settings app. If they choose not to confirm immediately, they will continue to see a notification in Settings prompting them to complete the process later. If verification cannot be completed on the device, Apple requires users to use approved methods such as a driver’s license, national ID, or a credit card. Debit cards, gift cards, and passports are not supported, although a Digital ID in Apple Wallet created using a U.S. passport may be accepted in some cases.
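The fallback order described above can be summarized as a simple decision flow. The sketch below is purely illustrative and is not Apple's actual API; the type names, the account-age threshold, and the document list are assumptions made for this example.

```python
# Hypothetical sketch of the verification order described in the article --
# not Apple's actual API. Names, thresholds, and the document list are
# assumptions made for illustration only.
from dataclasses import dataclass
from typing import Optional

# Per the article: driver's licenses and national IDs are accepted;
# debit cards, gift cards, and passports are not.
APPROVED_DOCUMENTS = {"drivers_license", "national_id"}


@dataclass
class AccountSignals:
    has_credit_card_on_file: bool  # an existing payment card can signal an adult account
    account_age_years: float       # how long the account has been active


def is_adult_confirmed(signals: AccountSignals,
                       added_credit_card: bool = False,
                       scanned_document: Optional[str] = None,
                       min_account_age_years: float = 10.0) -> bool:
    """Follow the order described above: existing account signals first,
    then a newly added credit card, then an approved government ID.
    The account-age threshold is an arbitrary placeholder value."""
    if signals.has_credit_card_on_file or signals.account_age_years >= min_account_age_years:
        return True
    if added_credit_card:
        return True
    return scanned_document in APPROVED_DOCUMENTS
```

Under this reading, a user who declines all three paths simply remains unverified and continues to see the Settings prompt, as the article describes.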

Impact on Child Online Accounts

The Apple age verification UK changes also affect how minors use Apple services. In the UK, children under 13 cannot create an Apple Account without parental consent and must be part of a Family Sharing group. In such cases, a parent or guardian who has confirmed their age may be required to approve certain actions, including app downloads or changes to safety settings. Depending on the region, some features may not be available to users until they turn 18. Apple has also noted that age requirements for child accounts vary across countries, with thresholds ranging from under 13 in most regions to higher limits in others.

Regulatory Push on Child Online Safety

The rollout of Apple age verification UK comes as UK regulators increase scrutiny on how platforms enforce age restrictions. The Information Commissioner’s Office (ICO) and Ofcom have asked major platforms to outline how they plan to strengthen child safety protections, particularly in preventing children under 13 from accessing services meant for older users. The UK government is also considering additional measures, including potential restrictions on social media use for younger users and pilot programs to test new regulatory approaches. Several European countries have announced or are considering similar steps.

Ofcom has stated that many platforms are not effectively enforcing minimum age requirements, with children continuing to access services despite age restrictions. The regulator has called on companies to implement stronger measures, including effective age checks, improved protections against grooming, safer content feeds, and proper assessment of new product features before they are introduced.

Dame Melanie Dawes, Ofcom Chief Executive, said: “These online services are household names, but they’re failing to put children’s safety at the heart of their products. There is a gap between what tech companies promise in private, and what they’re doing publicly to keep children safe on their platforms.

“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”

Growing Focus on Enforcement

The Apple age verification measures align with broader enforcement efforts under the UK’s online safety framework. Ofcom has written to major platforms, including Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube, requiring them to demonstrate how they will enforce minimum age rules and improve child safety protections. Platforms have been given deadlines to respond, after which Ofcom will assess their actions and determine whether further regulatory steps are necessary. The regulator has also indicated it is prepared to take enforcement action if companies fail to meet expectations. The introduction of age verification at the device and account level reflects increasing emphasis on ensuring that age restrictions are applied more consistently across digital services, particularly where children may be exposed to adult content or features.

Kids Internet and Digital Safety Act Gains Momentum in U.S. House


The debate over how to protect children online is once again at the center of U.S. policymaking. The Kids Internet and Digital Safety Act has moved forward in Congress, but not without controversy. While lawmakers backing the bill argue it will strengthen protections for children and empower parents, critics say the legislation may fall short when it comes to holding technology companies accountable. The House Energy and Commerce Committee advanced the Kids Internet and Digital Safety Act, alongside several related bills aimed at addressing online risks facing children. The vote followed a sharp divide along party lines, reflecting broader disagreements about how aggressively the government should regulate Big Tech in matters of online child safety. For many policymakers, the growing influence of social media and digital platforms on young users makes some form of legislation unavoidable. But the question remains: does the Kids Internet and Digital Safety Act truly tackle the problem, or does it leave major loopholes in place?

Kids Internet and Digital Safety Act Advances in Congress

Supporters of the Kids Internet and Digital Safety Act say the legislation represents a meaningful step toward creating a safer digital environment for children and teenagers. House Energy and Commerce Committee Chairman Brett Guthrie framed the bill as part of a broader responsibility to address digital threats affecting younger generations. “As people, as a Committee, and as a Congress, there are few things that are more essential than our responsibility to protect our nation’s children,” said Chairman Guthrie. He added, “We are taking the meaningful steps forward to empower parents and protect children and teens online. We owe it to parents. We owe it to communities. And most importantly, we owe it to the kids who are counting on us to get this right.” Supporters argue the kids online safety bill is designed to give parents better tools to monitor and protect their children online while pushing platforms toward greater transparency about how their systems affect young users. Representative Gus Bilirakis echoed that view while speaking about the need for stronger digital safety legislation. “Empowering parents to better protect their children—especially amid the near-constant barrage of digital threats—remains one of our most solemn and important responsibilities,” he said. “Today, we took meaningful action to advance that mission by moving forward several key measures, including the Kids Online Safety Act, designed to strengthen safeguards and increase transparency in the online space.”

Critics Warn of Weak Rules for Big Tech

Despite the push forward, the Kids Internet and Digital Safety Act has drawn strong criticism from Democratic lawmakers who argue that the bill’s provisions may be too weak to effectively regulate large technology platforms. One major concern raised during the committee markup was the bill’s “knowledge standard.” Critics argue this provision allows tech companies to avoid liability by claiming they were unaware that children were using their platforms. In practical terms, this could create a loophole where platforms escape accountability for harms linked to social media safety for kids simply by arguing they did not know minors were present. Another key issue is the absence of what policymakers call a “duty of care.” Such a requirement would compel platforms to actively prevent the most severe harms associated with online platforms, including exploitation, addiction-driven design, and exposure to harmful content. Without that requirement, critics say the kids online safety bill may place more responsibility on parents than on the technology companies operating the platforms themselves. The legislation also includes language that could preempt certain state-level regulations on Big Tech. Opponents argue that this provision could limit the ability of state attorneys general to pursue legal action against platforms and weaken stricter online child safety laws already passed in some states.

Additional Bills Target Social Media and AI Risks

The Kids Internet and Digital Safety Act was not the only proposal discussed during the committee session. Several related bills aimed at protecting children from emerging digital threats also advanced. Congressman Buddy Carter spoke about Sammy’s Law, named after a child who died following online exploitation. “This is absolutely necessary because the harms that our children are confronting on social media are severe, and our children simply do not yet have the development skills to protect themselves alone,” Carter said. “If this bill helps even one family avoid what happened to Sammy Chapman, then it will be worth it.” Other legislation addressed risks linked to app stores and artificial intelligence. Congressman John James introduced the App Store Accountability Act, which seeks to hold technology companies responsible for protecting young users. “The App Store Accountability Act holds big tech companies to the same standard as local corner stores,” he said. Meanwhile, Congresswoman Erin Houchin raised concerns about the psychological impact of AI chatbots on children while discussing the SAFE BOTs Act. “We're in the middle of a chatbot revolution. Children are on the front lines,” she said. “Kids today aren't just scrolling feeds, they're forming emotional bonds with AI companions that simulate empathy, mimic authority figures, and are available at any hour.”

The Bigger Question: Are Current Laws Enough?

The debate surrounding the Kids Internet and Digital Safety Act highlights a deeper issue: policymakers agree that children face growing risks online, but they remain divided on how to regulate the tech industry effectively. Supporters see the bill as a necessary first step toward improving social media safety for kids. Critics, however, argue that without stronger accountability measures, the legislation may struggle to deliver meaningful protections. As digital platforms continue to shape how children learn, communicate, and socialize, the challenge for lawmakers is not simply passing legislation—but ensuring that online child safety laws keep pace with the technology they aim to regulate.