Networks of Browser Extensions Are Spyware in Disguise 

Modern browser extensions and ad blockers are legally collecting and reselling user data, including streaming habits and B2B sales intelligence, under the guise of "analytics." This unregulated "legal spyware" creates massive security gaps as employees unwittingly leak corporate URLs, SaaS dashboards, and research activity to third-party databases. With the rise of AI-native browsers and personal device syncing, security leaders must evolve beyond simple permission checks to implement rigorous extension governance and privacy policy reviews to prevent targeted attacks and corporate data leakage.
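One piece of the extension governance the paragraph above calls for is auditing what data access each installed extension requests. A minimal sketch of that idea, in Python: a function that takes a parsed extension manifest (following Chrome's Manifest V3 `permissions` / `host_permissions` layout) and flags broad data-access grants. The sample manifest and the specific risk list here are illustrative assumptions, not a vetted corporate policy.

```python
# Sketch: flag browser-extension manifests that request broad data access.
# The risk list below is an illustrative assumption, not an official taxonomy.

RISKY = {
    "<all_urls>", "history", "tabs", "webRequest",
    "cookies", "browsingData", "management",
}

def flag_risky_permissions(manifest: dict) -> list[str]:
    """Return the sorted subset of requested permissions considered high-risk."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    # Any host pattern covering every site counts as broad access.
    broad_hosts = {p for p in requested if p in ("<all_urls>", "*://*/*")}
    return sorted((requested & RISKY) | broad_hosts)

if __name__ == "__main__":
    sample = {
        "name": "Example Ad Blocker",  # hypothetical extension
        "permissions": ["storage", "tabs", "webRequest"],
        "host_permissions": ["<all_urls>"],
    }
    print(flag_risky_permissions(sample))
```

In practice a review process would run a check like this over the manifests of every extension deployed in the fleet and pair the output with a read of each vendor's privacy policy, since a permission grant alone does not show where the collected data is resold.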

The post Networks of Browser Extensions Are Spyware in Disguise appeared first on Security Boulevard.

High Court Backs UK Police Use of Live Facial Recognition Technology

Facial Recognition Policy

A Live Facial Recognition Policy used by the Metropolitan Police Service has been upheld by the High Court of Justice, marking a significant legal development in the use of surveillance technology in the UK. The ruling, delivered on April 21, 2026, dismissed a legal challenge that questioned whether the policy allows excessive discretion in how facial recognition is deployed. The case, brought by civil liberties campaigners, focused on whether the Live Facial Recognition Policy complies with protections under the European Convention on Human Rights, particularly rights related to privacy, expression, and assembly.

Challenge to Live Facial Recognition Policy and Legal Grounds

The judicial review was filed by Shaun Thompson and Silkie Carlo, director of Big Brother Watch. The claimants argued that the Live Facial Recognition Policy gives police officers too much freedom to decide where and how the technology is used, potentially leading to arbitrary surveillance. Their case relied on Articles 8, 10, and 11 of the ECHR, which protect the rights to privacy, freedom of expression, and freedom of assembly. They argued that the policy lacked sufficient clarity and safeguards, making it incompatible with legal standards that require laws to be foreseeable and constrained. However, the court clarified that the case was not about whether facial recognition technology itself is appropriate, but whether the policy governing its use meets legal requirements.

Court Finds Safeguards and Structure in Live Facial Recognition Policy

In its judgment, the court ruled that the Live Facial Recognition Policy contains clear rules and does not grant unchecked powers to police officers. Judges highlighted that the policy limits deployment to three defined scenarios: crime hotspots, protective security operations, and cases involving specific intelligence about a suspect’s presence. The court noted that each deployment must undergo a proportionality assessment, ensuring that potential impacts on privacy and civil liberties are considered. It also emphasized that decisions are subject to oversight and follow a structured chain of command. According to the ruling, these safeguards distinguish the current policy from earlier concerns raised in previous cases. The judges concluded that the Live Facial Recognition Policy meets the legal requirement of being “in accordance with the law.”

Evidence and Concerns Around Misuse Rejected

The claimants pointed to concerns about wrongful identification and potential misuse of facial recognition technology. One claimant described being mistakenly stopped after being incorrectly matched to a suspect. Despite these concerns, the court found that much of the supporting evidence did not directly address the legality of the policy. Some submissions were dismissed as opinion rather than factual or expert evidence relevant to the legal issues being considered. The court also rejected arguments that the policy enables widespread surveillance in crowded areas. It clarified that deployment decisions are based on crime data and intelligence, not simply on the number of people in a location.

Discrimination Concerns and Broader Debate

Concerns about bias in facial recognition systems were raised during the proceedings, particularly following earlier findings by the National Physical Laboratory. However, the court stated that no substantial legal challenge on discrimination grounds had been properly presented. As a result, it did not find evidence that the Live Facial Recognition Policy is unlawful on those grounds. Separately, the UK government has signaled plans to expand the use of facial recognition technology. The Home Office has proposed increasing its deployment and is consulting on a stronger legal framework to support wider use.

Operational Impact and Future of Facial Recognition

The Metropolitan Police has defended the use of facial recognition, stating that the technology has supported thousands of arrests and helped identify suspects in serious crimes, including violent and sexual offenses. Officials also highlighted improvements in accuracy and safeguards, including the immediate deletion of non-matching data and human review of alerts. Commissioner Mark Rowley described the ruling as a major step forward for public safety, emphasizing that the technology is carefully controlled and effective. With the court confirming that the Live Facial Recognition Policy meets legal standards, the decision is likely to influence how surveillance tools are used and regulated in the UK. It also sets a precedent for future legal challenges as governments and law enforcement agencies continue to expand the use of biometric technologies.

On Microsoft’s Lousy Cloud Security

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.

[…]

The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.

US Bans All Foreign-Made Consumer Routers

This is for new routers; you don’t have to throw away your existing ones:

The Executive Branch determination noted that foreign-produced routers (1) introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”

More information:

Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country.

In order to get that approval, companies manufacturing routers outside the US must apply for conditional approval in a process that will require the disclosure of the firm’s foreign investors or influence, as well as a plan to bring the manufacturing of the routers to the US.

Certain routers may be exempted from the list if they are deemed acceptable by the Department of Defense or the Department of Homeland Security, the FCC said. Neither agency has yet added any specific routers to its list of equipment exceptions.

[…]

Popular router brands in the US include Netgear, a US company that manufactures all of its products abroad.

One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk’s company SpaceX.

Presumably US companies will start making home routers, if they think this policy is stable enough to plan around. But they will be more expensive than routers made in China or Taiwan. Security is never free, but policy determines who pays for it.

Is “Hackback” Official US Cybersecurity Strategy?

The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.

But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.

The Economist noticed this, too.

I think this is an incredibly dumb idea:

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and preemptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.

We don’t issue letters of marque on the high seas anymore; we shouldn’t do it in cyberspace.