
Yesterday, May 8, 2026 (Main stream)
  • Firewall Daily – The Cyber Express
  • Fake Moustache Trick Raises Questions Over UK Online Safety Act Age Checks, by Ashish Khaitan

Fake Moustache Trick Raises Questions Over UK Online Safety Act Age Checks


The rollout of the UK’s Online Safety Act in July 2025 was intended to create a safer digital environment for children through stricter age verification rules, tighter moderation standards, and stronger protections against harmful online content. However, early evidence suggests that many of the safeguards introduced under the legislation can still be bypassed with surprisingly simple tactics, including a fake moustache drawn with makeup.

Recent findings have raised concerns among parents, researchers, and digital safety experts about the effectiveness of current age verification systems. While the Online Safety Act has led to some improvements in children’s online experiences, critics argue that enforcement remains inconsistent and that many platforms are still vulnerable to manipulation.

One of the most widely discussed examples involved a 12-year-old boy who reportedly used an eyebrow pencil to create a fake moustache before facing a facial age estimation check. According to the report, the altered appearance convinced the system that he was 15 years old, allowing him to bypass restrictions designed for younger users. The incident has become a symbol of broader concerns about the reliability of AI-driven age-verification technologies.

Online Safety Act Faces Early Challenges 

The Online Safety Act was introduced to strengthen online child protection measures by requiring platforms to implement stricter checks and reduce children’s exposure to harmful material. The legislation also aimed to improve reporting tools and create safer digital spaces for younger users.

Despite those goals, the report suggests that loopholes remain widespread. Children have reportedly been bypassing protections through several methods, including entering false birthdates, borrowing adult credentials, sharing accounts, and using VPN services. More advanced attempts have also involved spoofing facial recognition systems used in age verification processes.

Survey data cited in the findings revealed that nearly half of children believe current age verification systems are easy to evade. Around one-third admitted to bypassing these systems in recent months.

The fake moustache example particularly highlighted weaknesses in facial age estimation tools that rely heavily on visual indicators rather than stronger forms of identity confirmation. Experts argue that systems based primarily on appearance can be vulnerable to minor cosmetic changes, lighting adjustments, or camera manipulation.

Mixed Results Following Online Safety Act Rollout 

Although concerns over age verification remain significant, the report noted that the Online Safety Act has produced some positive outcomes. Approximately half of the surveyed children said they were now seeing more age-appropriate content online. In addition, around 40% of both children and parents stated that the internet feels somewhat safer since the legislation came into effect.

Many children also appeared supportive of increased online protections. The findings showed that younger users generally approved of stricter platform rules, reduced interaction with strangers, and limitations placed on high-risk platform features.

Around 90% of children who noticed stronger moderation systems and improved reporting tools viewed those changes positively. Researchers said this indicates that many younger users are willing to engage with safer digital environments when protections are implemented effectively.

Still, the improvements have not been universal. Within just one month of new child protection codes being introduced under the Online Safety Act, nearly half of the children surveyed reported encountering harmful content online. This included violent material, hate speech, and body image-related content, all categories the legislation specifically aims to regulate.

Privacy Concerns Grow Around Age Verification 

The expansion of age verification requirements has also triggered growing concerns over privacy and data security. More than half of the children surveyed said they had been asked to verify their age within a recent two-month period. These checks were reportedly common across major platforms, including TikTok, YouTube, Google services, and Roblox.

Many platforms now rely on technologies such as facial age estimation, government-issued identification checks, and third-party age assurance providers to comply with the Online Safety Act. While users generally described the systems as easy to complete, concerns remain about how sensitive data is collected, stored, and potentially reused.

Parents expressed unease about whether biometric information and identity documents submitted during age verification could later be retained by companies or accessed by government agencies. Those concerns have intensified calls for more centralized and privacy-focused verification systems instead of fragmented checks spread across multiple online services.

Experts argue that current approaches may not strike the right balance between child safety and personal privacy. They warn that if the weaknesses exposed by tactics like the fake moustache incident are not addressed, public trust in these systems could continue to decline.
Before yesterday (Main stream)
  • Firewall Daily – The Cyber Express
  • UK’s Online Age Checks Are Failing—Kids are Beating Them with AI, Fake Beards, by Mihir Bagwe

UK’s Online Age Checks Are Failing—Kids are Beating Them with AI, Fake Beards


When governments introduced stricter online age checks under the UK’s Online Safety Act, the goal was to keep children away from harmful content. But in practice, the system is already showing cracks—and the most telling insight comes from the very users it’s meant to protect.

Children aren’t just countering age checks, they’re actively bypassing them—and often with surprising ease.

According to a new report from the Internet Matters foundation, nearly half of children (46%) believe age verification systems are easy to get around, while only 17% think they are difficult. That perception isn’t theoretical. It’s grounded in real behavior, shared knowledge, and increasingly creative workarounds.

From simply entering a fake birthdate to using someone else’s ID, children have developed a toolkit of bypass techniques. Some methods are almost trivial—changing a date of birth or borrowing a parent’s login—while others reflect a growing sophistication. Kids reported submitting altered images, using AI-generated faces, or even drawing facial hair on themselves to trick facial recognition systems.

In one striking example, a parent described catching their child using makeup to appear older—successfully fooling the system.

I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old. – Mum of boy, 12

But the problem goes deeper than perception. It’s systemic.

Also read: UK Regulator Ofcom Launches Probe into Telegram, Teen Chat Platforms

Bypassing Is the Norm, Not the Exception

The report reveals that nearly one in three children (32%) admitted to bypassing age restrictions in just the past two months. Older children are even more likely to do so, which shows how digital literacy often translates into evasion capability.

The most common methods?

  • Entering a fake birthdate (13%)
  • Using someone else’s login credentials (9%)
  • Accessing platforms via another person’s device (8%)

Despite widespread concerns about VPNs, they play a relatively minor role. Only 7% of children reported using them to bypass restrictions, suggesting that simpler, low-effort tactics remain the preferred route.

In other words, the barrier to entry is not just low—it’s practically optional.


Even When It Works, It Doesn’t Work

Ironically, even when children attempt to follow the rules, the technology doesn’t always cooperate.

Some reported being incorrectly identified as older—or younger—by facial recognition systems. In cases where they were flagged as underage, enforcement was often inconsistent or temporary. One child described being blocked from going live on a platform for just 10 minutes before being allowed to try again.

This inconsistency creates a loophole where persistence pays. If at first you’re denied, simply try again.

A Risky Side Effect

Perhaps the most concerning finding isn’t that children can bypass age checks—it’s that adults can too.

The report raises fears that adults may exploit these same weaknesses to access spaces intended for younger users. In some cases, this involves using images or videos of children to trick verification systems. There are even reports of adults acquiring child-registered accounts to blend into youth platforms.

This flips the entire premise of age verification on its head. Instead of protecting children, flawed systems may inadvertently expose them to greater risk.

Parents, Part of the Problem—or the Solution?

Adding another layer of complexity, parents themselves are sometimes complicit.

About 26% of parents admitted to allowing their children to bypass age checks, with 17% actively helping them do so. The reasoning is often pragmatic. Parents feel they understand the risks and trust their child’s judgment.

I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it. – Mum of non-binary child, 13

But this undermines the consistency of enforcement. If rules vary from household to household, platform-level protections lose their impact.

Interestingly, the data also suggests that communication matters. Children who regularly discuss their online activity with parents are less likely to bypass restrictions than those who don’t.

Why Kids Are Bypassing in the First Place

The motivations aren’t always malicious. In many cases, children are simply trying to access social media (34%), gaming communities (30%), or messaging apps (29%) that their peers are already using.

What this reveals is a fundamental tension: age verification systems are trying to enforce boundaries in environments where social participation is the norm.

Age verification is often positioned as a cornerstone of online safety. But in practice, it’s proving to be more of a speed bump than a safeguard.

Children understand the systems. They share methods. They adapt quickly. And until the technology—and its enforcement—becomes significantly more robust, age checks may offer more reassurance than real protection.

  • Firewall Daily – The Cyber Express
  • UK Regulator Ofcom Launches Probe into Telegram, Teen Chat Platforms, by Samiksha Jain

UK Regulator Ofcom Launches Probe into Telegram, Teen Chat Platforms


The Ofcom investigation into major online platforms has widened as the UK regulator examines whether services such as Telegram, Teen Chat, and Chat Avenue are doing enough to prevent child sexual abuse and online grooming. The action comes under the Online Safety Act, which requires platforms to assess and reduce risks related to illegal content, including child sexual abuse material (CSAM). The UK’s communications watchdog said the Ofcom investigation was launched after receiving evidence suggesting that harmful content and predatory behavior may be occurring across these platforms, raising serious concerns about user safety, especially for children.

Ofcom Investigation Into Telegram over CSAM Risks

A key part of the Ofcom investigation focuses on Telegram and its potential exposure to child sexual abuse material. Authorities confirmed they received intelligence from the Canadian Centre for Child Protection, which indicated the alleged presence and sharing of CSAM on the platform. Following this, Ofcom conducted its own assessment and decided to formally investigate whether Telegram has failed to meet its legal obligations under the Online Safety Act.

In the UK, both the possession and distribution of such material are criminal offenses, placing significant responsibility on platforms to actively detect and remove it. Regulators stated that platforms offering user-to-user communication must implement systems to identify and mitigate risks. The Ofcom investigation will assess whether Telegram has adequate safeguards in place or if gaps in enforcement have allowed illegal content to circulate.

Teen Chat Platforms Under Scrutiny for Grooming Risks

The Ofcom investigation also extends to Teen Chat and Chat Avenue, which are being examined for their potential role in enabling online grooming. These platforms offer features such as open chatrooms, private messaging, and media sharing, which regulators say can be misused by predators. Online grooming can involve coercing minors into sharing explicit content, engaging in sexual conversations, or arranging offline meetings.

Ofcom said it has been working with child protection agencies to identify services where such risks are higher. Despite prior engagement with the companies, the regulator said it remains unconvinced that sufficient protections are in place. The Ofcom investigation will determine whether these platforms are properly assessing risks and taking steps to prevent children from being exposed to harmful or illegal activity. In the case of Chat Avenue, the probe will also examine whether adequate safeguards exist to block minors from accessing explicit content.

File-Sharing Platforms Show Mixed Progress

Alongside messaging and chat services, the Ofcom investigation has reviewed file-sharing platforms, which have historically been used to distribute CSAM. Regulators noted some progress in this area. For instance, Pixeldrain has implemented perceptual hash-matching technology, allowing automated detection and removal of known abusive content. This came after Ofcom raised concerns about the platform’s initial lack of safeguards. Another service, Yolobit, has restricted access to users in the UK, leading Ofcom to close its investigation. Several other file-sharing providers have taken similar steps, either blocking UK access or deploying detection technologies following enforcement action. These developments suggest that regulatory pressure is pushing some platforms to improve, though the Ofcom investigation indicates that broader risks remain across different types of online services.
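The perceptual hash-matching approach Pixeldrain reportedly adopted works by reducing an image to a compact visual fingerprint, so near-copies of known abusive material can be flagged automatically even after resizing or re-encoding. As a rough sketch of the underlying idea only (the actual algorithms, hash databases, and thresholds used in production systems are deliberately not public, and all names below are illustrative), a minimal difference-hash over a grayscale pixel grid might look like this:

```python
# Illustrative difference-hash (dHash) sketch: fingerprint a grayscale image
# (a 2D list of 0-255 values) and match it against known hashes by Hamming
# distance. Real deployments use vetted hash sets and carefully tuned thresholds.

def resize(pixels, width, height):
    """Nearest-neighbour downscale of a 2D grayscale grid."""
    src_h, src_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * src_h // height][x * src_w // width] for x in range(width)]
        for y in range(height)
    ]

def dhash(pixels, hash_size=8):
    """64-bit hash: one bit per pixel, set when it is brighter than its right neighbour."""
    small = resize(pixels, hash_size + 1, hash_size)
    bits = 0
    for row in small:
        for x in range(hash_size):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(hash_val, known_hashes, threshold=10):
    """Flag an image whose hash is within `threshold` bits of any known hash."""
    return any(hamming(hash_val, k) <= threshold for k in known_hashes)

# Demo: a gradient image and a slightly brightened copy produce near-identical
# fingerprints, so the perturbed copy still matches the known hash.
original = [[(x * 16 + y) % 256 for x in range(32)] for y in range(32)]
perturbed = [[min(255, v + 3) for v in row] for row in original]

h1, h2 = dhash(original), dhash(perturbed)
print(hamming(h1, h2))           # small distance despite the pixel changes
print(matches_known(h2, {h1}))   # True
```

Because only brightness *gradients* are encoded, uniform edits such as re-compression or mild brightening barely move the hash, which is what makes this family of techniques suitable for detecting known material at scale while keeping the comparison itself a cheap bitwise operation.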

Enforcement Powers and Next Steps

Under the Online Safety Act, the Ofcom investigation follows a structured process. Regulators will gather and analyze evidence before determining whether a platform has breached its legal duties. Companies will be given a chance to respond before any final decision is made. If violations are confirmed, Ofcom has the authority to impose strict penalties. These include fines of up to £18 million or 10 percent of global annual revenue. In more serious cases, courts can enforce business disruption measures, such as requiring internet providers to block access to a platform in the UK or cutting off payment and advertising services.

Suzanne Cater, Director of Enforcement at Ofcom, emphasized that tackling child exploitation remains a top priority. She noted that while some progress has been made, especially among file-sharing services, risks persist across larger platforms and youth-focused chat services.

Growing Pressure on Platforms to Comply

The Ofcom investigation highlights increasing regulatory scrutiny on online platforms operating in the UK. Under the Online Safety Act, any service accessible to UK users must comply with local laws, regardless of where the company is based. With investigations now underway across messaging apps, chat platforms, and file-sharing services, the regulator is signaling that failure to protect users, particularly children, will carry serious consequences. As the Ofcom investigation continues, further updates are expected on whether these platforms will face enforcement action or be required to strengthen their safety measures.