✇ Malwarebytes
If a fake moustache can fool age checks, is the Online Safety Act working?

7 May 2026, 07:21

A report based on a survey by the UK’s Internet Matters shows that much of the responsibility for managing the online safety of children still falls on families.

The Online Safety Act came into effect in July, 2025, and the report explores what has changed in the online lives of UK families since then.

We discussed in December 2025 whether the privacy risks of age verification outweighed the enhanced child protection. While the report shows some progress, it mostly provides “an early view of how the online landscape is changing, and crucially, where it is not.”

Around half of children say they now see more age-appropriate content, and roughly four in ten parents and children feel the online world has become somewhat safer.

The online world is as much a part of a child’s environment as the physical world is. And blocking the view to parts of that world is not taken lightly. Almost half of children think age checks are easy to bypass. About a third admit to doing so recently, using tactics from fake birthdates and borrowed logins to spoofed faces and, less commonly, VPNs.

“I did catch my son [12] using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.”

Yet 90% of children who noticed improved blocking and reporting saw this as a good thing. Their support for these safety features is pragmatic. They point to:

  • clearer rules
  • restricted contact with strangers
  • limits on high-risk functions

They also rate these features as helpful in reducing exposure to harmful content and interactions.

But the system is not perfect. In the month after the child protection codes came into force, almost half of children reported some online harm, including violent, hateful, and body image-related content that should be covered by the Act’s protections.

The survey also revealed that age checks are now commonplace. Over half of children said they were asked to verify their age within a recent two-month window, often on major platforms like TikTok, YouTube/Google, and Roblox, on both new and existing accounts.

The technology is improving. Platforms use facial age estimation, government ID, and third-party age assurance apps, and these are usually easy for children to complete.

However, gains in protection come with unresolved and, in some cases, growing concerns around privacy and data use, especially around age verification and AI.

Parents are worried not just about what data is collected for age checks, but whether it will be stored or reused by government or industry. This has fueled calls for central, privacy-protective solutions rather than fragmented data collection across platforms.

Because age assurance systems are both intrusive (in terms of data) and often ineffective (easy workarounds, weak enforcement), the report suggests they may not yet provide a good safety-to-privacy trade-off from a family perspective.

Naturally, the survey also did not capture input from adults pretending to be children to gain access to child-only spaces, a risk that parents link directly to predatory behavior.

The authors conclude that the Online Safety Act has started to reshape children’s online environments, making safety features more visible and enabling more age‑appropriate experiences in some areas.

However, the Act has not yet produced a “step change.” Harmful content remains widespread, age‑assurance is patchy and easy to circumvent, and key concerns such as time spent online, AI risks, and persuasive design remain under‑regulated.



✇ Schneier on Security (by Bruce Schneier)

New Mexico’s Meta Ruling and Encryption

6 April 2026, 16:09

Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors: choices made by people, not by the platform’s design.
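The “inert” point is easy to illustrate with a toy sketch (deliberately not real cryptography, and not any platform’s actual protocol): in an end-to-end design, the relay in the middle only ever handles opaque ciphertext, whoever the endpoints are and whatever they say.

```python
# Toy illustration of the end-to-end idea (NOT real cryptography):
# the relay server sees only ciphertext; only the endpoints, which
# hold the shared key, can recover the plaintext.
import secrets

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad-style XOR; the key must be as long as the message.
    return bytes(k ^ p for k, p in zip(key, plaintext))

xor_decrypt = xor_encrypt  # XOR is its own inverse

# The two endpoints share a key; the relay never sees it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_encrypt(key, message)          # all the relay ever handles
assert ciphertext != message                    # the relay sees only gibberish
assert xor_decrypt(key, ciphertext) == message  # the other endpoint recovers it
```

The relay’s code is identical whether the endpoints are abusers or abuse survivors, which is exactly why treating the encryption itself as the harmful “design choice” proves too much.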

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.

✇ Malwarebytes

Roblox gives predators “powerful tools” to target children, says LA County

24 February 2026, 12:22

Los Angeles County has sued online gaming company Roblox, adding to a series of suits that accuse the virtual worlds platform of misleading parents into thinking it’s safe while leaving children exposed to predators and sexually explicit content. The February 19 filing makes LA County the first California government body to take the company to court over child safety.

Roblox claims over 151 million daily users, most of whom are kids. The company said it disputes the claims and will defend itself vigorously.

What the suit tells us about how predators operate

According to the complaint, Roblox violated California’s Unfair Competition Law and False Advertising Law. County Counsel Dawyn R. Harrison, who filed the lawsuit, said that the gaming platform has repeatedly exposed kids to sexually explicit material, grooming, and exploitation because it has chosen profit over safety.

“This is not about a minor lapse in safety,” Harrison said in a prepared press release. “It is about a company that gives pedophiles powerful tools to prey on innocent and unsuspecting children.”

Until November 2024, anyone could friend and message a child on the platform, the suit said. When Roblox changed those rules, it was allegedly still possible for accounts registered with ages over 13 to message each other without having previously been connected, meaning that adults could still message teens who didn’t know them.

The suit also alleged that it’s easy for predators to masquerade as children on the site, because age has historically been self-reported with no enforcement of parental approval when kids sign up.
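The weakness of a purely self-reported age gate is easy to see in a sketch (hypothetical code, not Roblox’s actual implementation): the check can only validate whatever birthdate the client chooses to send, so lying is a one-field edit.

```python
# A naive age gate that trusts a client-supplied birthdate.
from datetime import date

def is_old_enough(birthdate: date, minimum_age: int = 13) -> bool:
    """Returns True if the claimed birthdate implies the user is old enough."""
    today = date.today()
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= minimum_age

# An honest 10-year-old is blocked...
assert not is_old_enough(date(date.today().year - 10, 1, 1))
# ...but typing an earlier year sails straight through the same check.
assert is_old_enough(date(2000, 1, 1))
```

Nothing in the gate can distinguish the two cases, which is why platforms have been moving toward facial age estimation and document checks, with the privacy trade-offs described below.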

But Roblox’s approach to age verification changed last September, when the company announced plans to use age estimation on all users who wanted to use the platform’s communication features. It then introduced the third-party Persona system, which requires a facial age check to use chat features. But Persona itself has become a problem.

Researchers recently discovered an exposed frontend revealing the tool does far more than check ages, including running facial recognition against watchlists. It can also hold on to personal data including government IDs, device fingerprints, and biometric information for up to three years. Discord has already walked away from Persona, but Roblox hasn’t.

Even setting the vendor aside, the safeguards aren’t working as advertised. When Malwarebytes researchers created an account for a child under 13 on Roblox in December 2025, they found that the account could find communities linked to cybercrime and fraud-related keywords.

The complaint contains many allegations about the type of behavior that has occurred on Roblox, including:

  • The simulated rape of a seven-year-old’s avatar in a digital playground environment
  • “Diddy” games that recreated some events from the imprisoned rap star’s parties
  • The creation of Jeffrey Epstein-themed accounts, and the operation of a game called “Escape to Epstein Island”
  • Virtual strip clubs where avatars can disrobe and give lap dances

The LA County complaint also cited a report from financial forensic research firm Hindenburg Research, published in October 2024. Hindenburg, a short seller that profits when the share price of the companies it investigates falls, said it had found multiple groups on the site trading child sexual abuse material and soliciting sexual favors. The report also alleged that Roblox was cutting safety spending even as problems mounted.

A former senior product designer allegedly told Hindenburg the trade-off was deliberate. “If you’re limiting users’ engagement, it’s hurting your metrics…in a lot of cases, the leadership doesn’t want that,” the product designer allegedly said, according to the lawsuit.

A cacophony of cases

This is far from the only case Roblox has had to defend. In 2022, the Social Media Victims Law Center filed suit against the company for allegedly touting child safety while allowing the exploitation of a young girl. The following year, multiple families filed suit against the gaming company for allegedly misleading them about content harmful to children. Last year, the mother of a 15-year-old boy from Texas sued Roblox after he took his own life. The complaint alleged that he was groomed and subsequently blackmailed over nude pictures he’d been persuaded to send a predator on the site.

Another lawsuit filed against the company in San Mateo in February 2025 claimed that a 27-year-old predator reached a 13-year-old boy through the platform’s “whisper” messaging system. That case described the platform as “a digital and real-life nightmare for children.”

The California suit joins an expanding pile of government cases against Roblox. Louisiana sued the company in August 2025, followed by Kentucky (October 2025), Texas (November 2025), and Florida (December 2025). Georgia’s Attorney General is also investigating the company. And a collection of separate private suits against the company have been consolidated into a single multi-district litigation.

What parents can do

So, what can parents do? Interestingly, one potential answer came last year when the company’s CEO Dave Baszucki spoke with the BBC:

“My first message would be, if you’re not comfortable, don’t let your kids be on Roblox.”

If you do want to let your children use Roblox (or any other site), then close monitoring is important. Restrict friend requests and disable open chat to the extent that the platform allows. Anonymize your children’s profiles to potentially avoid what one family claimed happened to them in an earlier lawsuit, in which they had to move across the country after a predator reportedly tracked down their child’s address via Roblox.

Child education is key. Tell your children not to reveal personal information and not to take conversations off-platform, because that’s where exploitation escalates. And keep the conversation going, not as a one-time lecture, but as a regular part of talking about their day.

For more information about child safety, check out Malwarebytes’ research on the topic, which also offers useful advice.

LA County is seeking civil penalties of up to $2,500 per violation per day, plus injunctive relief that could force structural changes to how the platform operates.



Hackers reportedly steal pictures of 8,000 children from Kido nursery chain

Firm, which has 18 sites around London and more in the US, India and China, has received ransom demand, say reports

The names, pictures and addresses of about 8,000 children have reportedly been stolen from the Kido nursery chain by a gang of cybercriminals.

The criminals have demanded a ransom from the company – which has 18 sites around London, with more in the US, India and China – according to the BBC.


© Photograph: solarseven/Getty Images/iStockphoto

‘Hacking is assumed now’: experts raise the alarm about added risk of surveillance cameras in childcare centres

As governments consider mandatory CCTV in early education, one big provider with cameras already installed is yet to formalise guidelines for how the footage will be stored and used

In the wake of horrifying reports last week alleging that eight children had been sexually abused by a worker in a Melbourne childcare centre, politicians and providers have scrambled to offer a response.

One option emerged from the fray as something concrete and immediate: the installation of CCTV cameras in childcare centres.


© Composite: Getty
