Smashing Security podcast #466: Meta sees everything, Copy Fail, and a deepfake gets hired

Meta's smart glasses promise privacy "designed for you" - but everything they record was being beamed off to workers in Nairobi to label by hand. When those workers blew the whistle, Meta sacked all 1,108 of them. Meanwhile, the IT press is in a frenzy over a new Linux bug called "Copy Fail" - complete with logo, dedicated website, and a marketing-friendly name. But is it really the disaster everyone's making it out to be? And in our featured interview, Jake Moore of ESET explains how he tricked a company into offering his deepfake clone a job - after a perfectly normal-looking video interview. All this and more in episode 466 of the "Smashing Security" podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Paul Ducklin.

U.S. Consumers Lost $2.1 Billion in Social Media Scams in 2025, FTC Says

An FTC report says that Americans lost $2.1 billion last year to social media scams, such as shopping and investment schemes. Social media sites have become the starting point for most of these scams, and more than half of that money was stolen in scams that began on Facebook, WhatsApp, and Instagram.
Meta accused of violating DSA by failing to safeguard minors

The European Commission accuses Meta of failing to protect children by allowing users under 13 onto Instagram and Facebook, in breach of DSA rules.

The European Commission has accused Meta of violating child safety rules. Instagram and Facebook allegedly failed to prevent children under 13 from accessing their platforms. According to the Commission, Meta did not properly assess and mitigate risks to minors, breaching obligations under the Digital Services Act (DSA).

“The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act (DSA) for failing to diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.” reads the press release. “Despite Meta’s own terms and conditions setting the minimum age to access Instagram and Facebook safely at 13, the measures put in place by the company to enforce these restrictions do not seem to be effective. The measures do not adequately prevent minors under the age of 13 from accessing their services nor promptly identify and remove them, if they already gained access.”

Minors under 13 can easily bypass age rules on Instagram and Facebook by entering false birth dates, as Meta lacks effective verification checks. Reporting tools are also weak: they require multiple steps, are not user-friendly, and often fail to trigger proper action, allowing underage users to remain active. The European Commission says Meta’s risk assessment is incomplete and ignores evidence that 10–12% of under-13s use these platforms, as well as research showing younger children are more vulnerable to harm. As a result, Meta is urged to revise its risk evaluation methods and strengthen measures to detect, prevent, and remove underage users, ensuring better privacy, safety, and protection for minors.

“At this stage, the Commission considers that Instagram and Facebook must change their risk assessment methodology, in order to evaluate which risks arise on Instagram and Facebook in the European Union, and how they manifest.” continues the press release. “Moreover, Instagram and Facebook need to strengthen their measures to prevent, detect and remove minors under the age of 13 from their service.”

Meta can now review the Commission’s evidence and respond to the preliminary findings, while also taking steps to address the issues under the 2025 DSA Guidelines. The European Board for Digital Services will be consulted. If breaches are confirmed, Meta could face fines of up to 6% of its global annual turnover, along with periodic penalties to enforce compliance. These findings are not final.

The case stems from formal proceedings launched in May 2024, based on extensive analysis of internal data, risk reports, and input from experts and civil society. The Commission used DSA guidelines as a benchmark, stressing the need for effective age verification tools that are accurate, reliable, and privacy-friendly, and has proposed an EU age verification app as a reference model.

“The Commission continues its investigation into other potential breaches that are part of these ongoing proceedings, including Meta’s compliance with DSA obligations to protect minors and the physical and mental well-being of users of all ages.” concludes the press release. “This investigation covers also the assessment and mitigation of risks arising from the design of Facebook’s and Instagram’s online interfaces, which may exploit the vulnerabilities and inexperience of minors, leading to addictive behaviour and reinforcing the so-called ‘rabbit hole’ effects.”


Pierluigi Paganini
New Mexico’s Meta Ruling and Encryption

Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors - choices made by people, not by the platform’s design.

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.
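
For readers who want the technical crux made concrete: in an end-to-end encrypted design, the decryption keys exist only on the two endpoints, so the platform relaying a message - and anyone who subpoenas the platform - sees nothing but ciphertext. Below is a minimal sketch using the PyNaCl library. It is illustrative only: Messenger's production implementation is built on the Signal protocol and is far more elaborate, but the key-placement point is the same.

# pip install pynacl
# Sketch: why E2EE keeps the relaying platform out of the loop.
# Private keys live only on the endpoints; the server never holds one.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays `ciphertext` and can store or disclose it, but
# without an endpoint's private key it is opaque bytes.
# Only Bob can decrypt, using his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"

This is the design choice at issue in the case: once keys are placed only at the endpoints, there is no switch the platform can flip to read a particular conversation without weakening the scheme for everyone.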

Meta & YouTube Found Negligent: A Turning Point for Big Tech?

A landmark jury verdict has found Meta and YouTube negligent in a social media addiction case, raising major questions about platform accountability and legal protections under Section 230. This episode covers the details of the case, why the ruling is significant, and what it could mean for the future of social media, privacy, and cybersecurity. […]