Fake Moustache Trick Raises Questions Over UK Online Safety Act Age Checks


The rollout of the UK’s Online Safety Act in July 2025 was intended to create a safer digital environment for children through stricter age verification rules, tighter moderation standards, and stronger protections against harmful online content. However, early evidence suggests that many of the safeguards introduced under the legislation can still be bypassed with surprisingly simple tactics, including a fake moustache drawn with makeup.

Recent findings have raised concerns among parents, researchers, and digital safety experts about the effectiveness of current age verification systems. While the Online Safety Act has led to some improvements in children’s online experiences, critics argue that enforcement remains inconsistent and that many platforms are still vulnerable to manipulation.

One of the most widely discussed examples involved a 12-year-old boy who reportedly used an eyebrow pencil to create a fake moustache before facing a facial age estimation check. According to the report, the altered appearance convinced the system that he was 15 years old, allowing him to bypass restrictions designed for younger users. The incident has become a symbol of broader concerns about the reliability of AI-driven age-verification technologies.

Online Safety Act Faces Early Challenges 

The Online Safety Act was introduced to strengthen online child protection measures by requiring platforms to implement stricter checks and reduce children’s exposure to harmful material. The legislation also aimed to improve reporting tools and create safer digital spaces for younger users.

Despite those goals, the report suggests that loopholes remain widespread. Children have reportedly been bypassing protections through several methods, including entering false birthdates, borrowing adult credentials, sharing accounts, and using VPN services. More advanced attempts have also involved spoofing facial recognition systems used in age verification processes.

Survey data cited in the findings revealed that nearly half of children believe current age verification systems are easy to evade. Around one-third admitted to bypassing these systems in recent months.

The fake moustache example particularly highlighted weaknesses in facial age estimation tools that rely heavily on visual indicators rather than stronger forms of identity confirmation. Experts argue that systems based primarily on appearance can be vulnerable to minor cosmetic changes, lighting adjustments, or camera manipulation.

Mixed Results Following Online Safety Act Rollout 

Although concerns over age verification remain significant, the report noted that the Online Safety Act has produced some positive outcomes. Approximately half of the surveyed children said they were now seeing more age-appropriate content online. In addition, around 40% of both children and parents stated that the internet feels somewhat safer since the legislation came into effect.

Many children also appeared supportive of increased online protections. The findings showed that younger users generally approved of stricter platform rules, reduced interaction with strangers, and limitations placed on high-risk platform features.

Around 90% of children who noticed stronger moderation systems and improved reporting tools viewed those changes positively. Researchers said this indicates that many younger users are willing to engage with safer digital environments when protections are implemented effectively.

Still, the improvements have not been universal. Within just one month of new child protection codes being introduced under the Online Safety Act, nearly half of the children surveyed reported encountering harmful content online. This included violent material, hate speech, and body image-related content, all categories the legislation specifically aims to regulate.

Privacy Concerns Grow Around Age Verification 

The expansion of age verification requirements has also triggered growing concerns over privacy and data security. More than half of the children surveyed said they had been asked to verify their age within a recent two-month period. These checks were reportedly common across major platforms, including TikTok, YouTube, Google services, and Roblox.

Many platforms now rely on technologies such as facial age estimation, government-issued identification checks, and third-party age assurance providers to comply with the Online Safety Act. While users generally described the systems as easy to complete, concerns remain about how sensitive data is collected, stored, and potentially reused.

Parents expressed unease about whether biometric information and identity documents submitted during age verification could later be retained by companies or accessed by government agencies. Those concerns have intensified calls for more centralized and privacy-focused verification systems instead of fragmented checks spread across multiple online services.

Experts argue that current approaches may not strike the right balance between child safety and personal privacy. They warn that if the weaknesses exposed by tactics like the fake moustache incident are not addressed, public trust in these systems could continue to decline.

AI is spreading decision-making, but not accountability

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls or, worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears, and the question of who is responsible becomes immediate.

As AI moves from experimentation into production, accountability is no longer just a technical concern; it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, points out that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.


Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making. What changes isn’t who’s accountable, but how difficult it becomes to demonstrate that appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”


Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.


Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”


Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”


Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates, often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail or regulators intervene, scrutiny typically falls on enterprise leadership and, in operational terms, on the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

White House weighs pre-release reviews for high-risk AI models

The Trump administration is in early discussions about whether advanced AI models should be vetted before public release, according to reporting from the New York Times, the Wall Street Journal, and Axios.

The conversations center on systems capable of facilitating cyberattacks, particularly models that could help users identify and exploit software vulnerabilities. Officials are considering several options, including formal pre-release review processes and government-led testing for higher-risk systems. No proposal has been finalized, and no timeline has been set.

What has changed

The discussions mark a shift in tone, if not yet in policy. On Jan. 20, 2025, Donald Trump’s first day back in office during his second term, he revoked Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Three days later, he issued his own order, “Removing Barriers to American Leadership in Artificial Intelligence,” signaling a significant shift away from the Biden administration’s emphasis on oversight and risk mitigation toward a framework centered on deregulation and the promotion of AI innovation.

Among the provisions that order effectively ended were the Biden framework’s mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols, and monitoring requirements for AI used in critical infrastructure. The new discussions suggest certain security risks — particularly those tied to offensive cyber capabilities — warrant a more interventionist posture, even as the administration remains broadly opposed to sweeping AI regulation.

The Mythos factor

The discussion follows Anthropic’s recent introduction of Mythos, a model the company has described as representing a watershed moment for cybersecurity.

Anthropic has said Mythos Preview has found thousands of high-severity vulnerabilities, including some in every major operating system and web browser, and that AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. In one benchmark, the company reported significantly higher success rates compared to earlier models.

Anthropic has not released the model publicly. Instead, it launched Project Glasswing, committing up to $100 million in usage credits to a select group of technology and cybersecurity companies to use Mythos for defensive purposes — finding and patching vulnerabilities before malicious actors can exploit them.

Anthropic has also been briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and other stakeholders on the potential risks and benefits of Mythos Preview. OpenAI has developed a comparable model and has released it to a small set of companies through an existing trusted-access program.

What a review might mean

Pre-release evaluation of AI models is not a new idea, but it remains poorly defined in the US policy context. The Biden executive order Trump revoked had required developers of the largest AI systems to notify the government and share safety test results before deployment — one of several provisions the Trump administration characterized as burdensome obstacles to innovation.

The institutional picture has also shifted. The US AI Safety Institute, created under the Biden order to conduct pre-deployment evaluation and housed within the National Institute of Standards and Technology, was substantially reorganized after Trump took office. In June 2025, the agency was renamed the Center for AI Standards and Innovation, and its mission was revised.

Commerce Secretary Howard Lutnick framed the change as a repudiation of what he called the use of safety as a pretext for censorship and regulation. The renamed center’s mandate now includes leading unclassified evaluations of AI capabilities that may pose risks to national security, with a stated focus on demonstrable risks such as cybersecurity, biosecurity, and chemical weapons, potentially positioning it to play a role in any future review process.

Other governments have moved further and faster. The UK’s AI Security Institute has conducted pre-deployment evaluations of several frontier models, working directly with labs, including Anthropic and OpenAI, to assess risk thresholds before release. The EU AI Act, which began phasing in last year, establishes mandatory conformity assessments for high-risk AI applications.

The US has not established a comparable framework or legal authority to require such reviews.

UK’s Online Age Checks Are Failing—Kids are Beating Them with AI, Fake Beards

When the UK government introduced stricter online age checks under the Online Safety Act, the goal was to keep children away from harmful content. But in practice, the system is already showing cracks—and the most telling insight comes from the very users it’s meant to protect.

Children aren’t just encountering age checks; they’re actively bypassing them—and often with surprising ease.

According to a new report from the Internet Matters foundation, nearly half of children (46%) believe age verification systems are easy to get around, while only 17% think they are difficult. That perception isn’t theoretical. It’s grounded in real behavior, shared knowledge, and increasingly creative workarounds.

From simply entering a fake birthdate to using someone else’s ID, children have developed a toolkit of bypass techniques. Some methods are almost trivial—changing a date of birth or borrowing a parent’s login—while others reflect growing sophistication. Kids reported submitting altered images, using AI-generated faces, or even drawing facial hair on themselves to trick facial recognition systems.

In one striking example, a parent described catching their child using makeup to appear older—successfully fooling the system.

I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old. – Mum of boy, 12

But the problem goes deeper than perception. It’s systemic.

Also read: UK Regulator Ofcom Launches Probe into Telegram, Teen Chat Platforms

Bypassing Is the Norm, Not the Exception

The report reveals that nearly one in three children (32%) admitted to bypassing age restrictions in just the past two months. Older children are even more likely to do so, which shows how digital literacy often translates into evasion capability.

The most common methods?

  • Entering a fake birthdate (13%)
  • Using someone else’s login credentials (9%)
  • Accessing platforms via another person’s device (8%)

Despite widespread concerns about VPNs, they play a relatively minor role. Only 7% of children reported using them to bypass restrictions, suggesting that simpler, low-effort tactics remain the preferred route.

In other words, the barrier to entry is not just low—it’s practically optional.


Even When It Works, It Doesn’t Work

Ironically, even when children attempt to follow the rules, the technology doesn’t always cooperate.

Some reported being incorrectly identified as older—or younger—by facial recognition systems. In cases where they were flagged as underage, enforcement was often inconsistent or temporary. One child described being blocked from going live on a platform for just 10 minutes before being allowed to try again.

This inconsistency creates a loophole where persistence pays. If at first you’re denied, simply try again.

A Risky Side Effect

Perhaps the most concerning finding isn’t that children can bypass age checks—it’s that adults can too.

The report raises fears that adults may exploit these same weaknesses to access spaces intended for younger users. In some cases, this involves using images or videos of children to trick verification systems. There are even reports of adults acquiring child-registered accounts to blend into youth platforms.

This flips the entire premise of age verification on its head. Instead of protecting children, flawed systems may inadvertently expose them to greater risk.

Parents, Part of the Problem—or the Solution?

Adding another layer of complexity, parents themselves are sometimes complicit.

About 26% of parents admitted to allowing their children to bypass age checks, with 17% actively helping them do so. The reasoning is often pragmatic. Parents feel they understand the risks and trust their child’s judgment.

I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it. – Mum of non-binary child, 13

But this undermines the consistency of enforcement. If rules vary from household to household, platform-level protections lose their impact.

Interestingly, the data also suggests that communication matters. Children who regularly discuss their online activity with parents are less likely to bypass restrictions than those who don’t.

Why Kids Are Bypassing in the First Place

The motivations aren’t always malicious. In many cases, children are simply trying to access social media (34%), gaming communities (30%), or messaging apps (29%) that their peers are already using.

What this reveals is a fundamental tension: age verification systems are trying to enforce boundaries in environments where social participation is the norm.

Age verification is often positioned as a cornerstone of online safety. But in practice, it’s proving to be more of a speed bump than a safeguard.

Children understand the systems. They share methods. They adapt quickly. And until the technology—and its enforcement—becomes significantly more robust, age checks may offer more reassurance than real protection.

Meta accused of violating DSA by failing to safeguard minors

The European Commission accuses Meta of failing to protect children, allowing users under 13 on Instagram and Facebook, in breach of the DSA rules.

The European Commission has accused Meta of violating child safety rules. Instagram and Facebook allegedly failed to prevent children under 13 from accessing their platforms. According to the Commission, Meta did not properly assess and mitigate risks to minors, breaching obligations under the Digital Services Act (DSA).

“The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act (DSA) for failing to diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.” reads the press release. “Despite Meta’s own terms and conditions setting the minimum age to access Instagram and Facebook safely at 13, the measures put in place by the company to enforce these restrictions do not seem to be effective. The measures do not adequately prevent minors under the age of 13 from accessing their services nor promptly identify and remove them, if they already gained access.”

Minors under 13 can easily bypass age rules on Instagram and Facebook by entering false birth dates, as Meta lacks effective verification checks. Reporting tools are also weak: they require multiple steps, are not user-friendly, and often fail to trigger proper action, allowing underage users to remain active. The European Commission says Meta’s risk assessment is incomplete and ignores evidence that 10–12% of under-13s use these platforms, as well as research showing younger children are more vulnerable to harm. As a result, Meta is urged to revise its risk evaluation methods and strengthen measures to detect, prevent, and remove underage users, ensuring better privacy, safety, and protection for minors.

“At this stage, the Commission considers that Instagram and Facebook must change their risk assessment methodology, in order to evaluate which risks arise on Instagram and Facebook in the European Union, and how they manifest.” continues the press release. “Moreover, Instagram and Facebook need to strengthen their measures to prevent, detect and remove minors under the age of 13 from their service.”

Instagram and Facebook can now review the Commission’s evidence and respond to the preliminary findings, while also taking steps to address the issues under the 2025 DSA Guidelines. The European Board for Digital Services will be consulted. If breaches are confirmed, Meta could face fines of up to 6% of its global annual turnover, along with periodic penalties to enforce compliance. These findings are not final.

The case stems from formal proceedings launched in May 2024, based on extensive analysis of internal data, risk reports, and input from experts and civil society. The Commission used DSA guidelines as a benchmark, stressing the need for effective age verification tools that are accurate, reliable, and privacy-friendly, and has proposed an EU age verification app as a reference model.

“The Commission continues its investigation into other potential breaches that are part of these ongoing proceedings, including Meta’s compliance with DSA obligations to protect minors and the physical and mental well-being of users of all ages.” concludes the press release. “This investigation covers also the assessment and mitigation of risks arising from the design of Facebook’s and Instagram’s online interfaces, which may exploit the vulnerabilities and inexperience of minors, leading to addictive behaviour and reinforcing the so-called ‘rabbit hole’ effects.”


Pierluigi Paganini


Australia’s APRA Issues AI Risk Warning to Banks and Insurers


The Australian Prudential Regulation Authority (APRA) has placed banks, insurers, and superannuation trustees on alert, calling for a significant uplift in how artificial intelligence is governed across the financial sector. The regulator has stated that current governance, risk management, and operational resilience practices are not keeping pace with the rapid adoption of AI. In a letter to regulated entities, APRA said the warning follows a targeted supervisory review conducted late last year across major financial institutions. The review assessed how AI is being deployed and governed across the industry and found widening gaps between technology adoption and risk control frameworks.

APRA AI Risk Warning on Governance and Operational Gaps

The warning highlights that AI is increasingly being embedded into operational systems, customer services, and decision-making tools across regulated entities. While adoption is accelerating, APRA observed that governance structures have not matured at the same speed. According to the regulator, assurance practices remain fragmented, particularly in areas involving cyber security, data protection, procurement, and operational resilience, and many organisations are still relying on traditional risk management approaches that were not designed for AI-driven systems. Another key concern is the limited visibility over how AI models are trained, updated, or modified when embedded within third-party platforms. This lack of transparency, APRA said, reduces the ability of institutions to fully assess risks linked to model behaviour and system dependencies.

Board Oversight Gaps Highlighted in APRA Warning

The warning also draws attention to board-level oversight challenges. While boards show strong interest in AI-driven productivity and customer service improvements, many still lack sufficient technical understanding to effectively challenge management decisions. APRA observed that some boards rely heavily on vendor summaries and presentations rather than detailed internal assessments of AI risk exposure, creating blind spots in governance, particularly when dealing with unpredictable model outputs and operational risks.

AI Risk Warning Flags Cyber and Concentration Risks

Cybersecurity is a major focus of the warning, with APRA noting that advanced AI models could significantly increase the speed and scale of cyberattacks. The regulator specifically referenced frontier AI models that may help malicious actors identify system vulnerabilities more efficiently. APRA also highlighted growing concentration risk, where institutions depend heavily on a single AI provider across multiple use cases, and cautioned that insufficient contingency planning in such scenarios could create operational vulnerabilities if service disruptions occur.

Fragmented Risk Management Systems

A key theme in the APRA AI risk warning is the fragmented nature of current risk management frameworks. AI-related risks often cut across multiple domains, including cyber security, privacy, procurement, and operational risk. However, APRA found that existing systems are not always integrated enough to manage these overlaps effectively. The regulator said this fragmentation limits the ability of financial institutions to gain a complete view of AI-related exposure and weakens overall assurance mechanisms.

Expectations for Stronger Controls

APRA Member Therese McCarthy Hockey stated that financial institutions must adapt quickly to manage emerging risks while continuing to leverage AI for efficiency and service improvements. She noted that while AI presents significant opportunities, organisations must ensure their systems are capable of identifying and responding to vulnerabilities at a pace matching AI-driven threats. The APRA AI risk warning outlines expectations for boards to maintain sufficient understanding of AI systems, set clear risk appetite frameworks, and ensure stronger oversight of third-party dependencies. APRA also expects clearer triggers for intervention when systems do not operate as intended.

Ongoing Supervisory Focus

The APRA AI risk warning confirms that while no new regulatory requirements are being introduced at this stage, APRA expects immediate improvements in how institutions manage AI-related risks. The regulator has indicated that it will continue to monitor AI adoption closely and may consider further policy action if necessary. APRA also stated it will continue engaging with domestic and international regulators to assess emerging risks linked to AI technologies and their impact on financial system stability.

How the EU’s NIS2 directive is changing how CIOs think about digital infrastructure

In conversations I’ve had with CIOs over the past year, there’s been a noticeable shift in how NIS2 (Network and Information Security Directive 2) is being discussed. It used to be filed away as another regulatory hurdle to clear, but now it’s prompting CIOs and their teams to think a little deeper about how well they understand the systems they depend on. For a long time, risk has been largely framed within the boundaries of the organization — something that could be managed through internal controls, policies and audits. But that no longer reflects how digital services are built or delivered. Most organizations I encounter rely on a web of providers spanning cloud platforms, data centers, network operators and software vendors, all working together to create a “patchwork” ecosystem. NIS2 is different because it acknowledges that reality and, in doing so, it’s forcing a broader and sometimes more uncomfortable reassessment of where risk really sits.

What stands out to me is that NIS2 doesn’t just focus on individual accountability, but on the very definition of resilience itself. It recognizes that disruption rarely originates within a single process, or even a single organization. More often, it emerges from the connections between them; from unseen dependencies, indirect relationships and assumptions about how systems will behave under pressure. That’s novel, because it moves the conversation away from whether individual systems are secure, and toward whether the overall architecture those systems sit within can continue to function when something inevitably goes wrong. In that sense, NIS2 is less about tightening cybersecurity controls and more about encouraging a different way of thinking, where resilience is shaped as much by how infrastructure is designed and connected as it is by how it is protected.

NIS2 expands the definition of risk beyond the enterprise

One of the most immediate impacts I’m seeing from NIS2 is how it challenges long-held assumptions about control. Speak to any CIO, and they’ll usually talk about securing what sits within their own environments — their applications, services and data. But in practice, very little of today’s digital estate is fully owned because it’s so distributed among third parties with countless links and dependencies. Virtually all business services depend on layers of external providers, each with its own dependencies, architectures and risk profiles. According to the World Economic Forum, the top supply chain risk in 2026 is the inheritance risk — the inability to ensure the integrity of third-party software, hardware or services. NIS2 brings that into sharp focus by extending accountability beyond direct suppliers to include the wider ecosystem that supports them. In essence, it prompts businesses to shift from asking “are we secure?” to “how secure is everything we rely on to operate?”

That’s quite a challenge, because it’s not enough for businesses to simply know their suppliers — they need to understand how deeply interconnected those relationships are. In many cases, the real exposure sits several steps removed, in the providers behind your providers or in shared infrastructure that underpins multiple services at once. The “uncomfortable reassessment” I mentioned earlier is the squaring of this circle — how many organizations have full visibility into that sprawling landscape, let alone the means to control it?

NIS2 is compelling organizations to map dependencies more rigorously, to ask harder questions of their partners and network infrastructure, and to recognize that resilience is only as strong as the most fragile link in the chain. The WEF shows that in 2026, only 33% of organizations map their entire IT supply chain to gain this visibility. And even then, the added risk of unknown service providers, such as when using the public Internet, where data pathways are neither visible nor controllable, is difficult to quantify.
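The "providers behind your providers" problem is essentially a transitive-closure question over a dependency graph. As a toy illustration only (the supplier names and graph below are invented, not drawn from any real organization), a few lines of traversal show how quickly indirect exposure grows beyond the direct supplier list:

```python
from collections import deque

# Hypothetical supplier graph: who each party directly depends on.
DEPENDS_ON = {
    "us":         ["cloud-a", "saas-b"],
    "cloud-a":    ["dc-operator", "network-x"],
    "saas-b":     ["cloud-a", "payments-c"],
    "payments-c": ["network-x"],
}

def transitive_suppliers(root: str) -> set[str]:
    """Breadth-first walk collecting every direct and indirect supplier."""
    seen: set[str] = set()
    queue = deque(DEPENDS_ON.get(root, []))
    while queue:
        supplier = queue.popleft()
        if supplier not in seen:
            seen.add(supplier)
            queue.extend(DEPENDS_ON.get(supplier, []))
    return seen

# Two direct suppliers fan out into five parties to assess.
print(sorted(transitive_suppliers("us")))
```

Even in this tiny example, two contractual relationships imply five parties whose failures can propagate upward, which is exactly the visibility gap the 33% figure points at.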

Compliance is the trigger, but architecture is the challenge

What I find interesting about NIS2 is that it goes deeper than compliance — it’s trying to trigger a shift in culture. It’s relatively straightforward to introduce new policies, expand reporting requirements or formalize supplier assessments. But what happens when those requirements collide with the reality of how modern IT environments are built? Many organizations simply don’t have a clear, end-to-end view of how their services are delivered, how data flows between providers or how incidents might spread like wildfire across the ecosystem they depend on. NIS2 asks CIOs to look beyond governance frameworks and examine whether their operating models support the level of oversight and responsiveness the directive expects.

And that is where the architecture question becomes essential. It’s one thing to require suppliers to report incidents or meet certain security standards; it’s another thing entirely to ensure that the underlying infrastructure is designed to absorb disruption without cascading failure. In my experience, this is where many organizations begin to realize that resilience cannot be layered on afterwards. It must be built into how systems are structured, how dependencies are managed and how connectivity is established between environments. NIS2 may define what needs to be done, but it doesn’t prescribe how to do it. That responsibility sits with CIOs, who now have to translate regulatory intent into practical design decisions about where workloads run, how services interconnect and how failure is contained when it occurs.

Infrastructure design is now resilience design

What this ultimately leads to is a big infrastructure rethink. I’m privileged to have had some interesting discussions with CIOs and other executives about this very topic, so I know that resilience is beginning to be understood as more than a set of security controls. Connectivity is now at the heart of resilience, and in that sense, NIS2 has succeeded in getting organizations to think differently about what resilience really means. If a service depends on a single cloud region, a single network path or a tightly coupled set of providers, then no amount of policy or monitoring will prevent disruption when one of those elements fails. I’m pleased to see organizations starting to question these assumptions — not just asking whether systems are secure, but whether they are structured in a way that allows them to continue operating under stress. That shift in thinking does away with the abstract theory of resilience and defines it as something that can be designed and architected.

From a connectivity perspective, this means building in diversity at every level. Distributing workloads across geographically separate locations, establishing multiple, independent network paths and avoiding unnecessary concentration of critical services all contribute to a more resilient architecture. Interconnection plays a starring role here as the mechanism that allows different parts of the digital ecosystem to communicate in controlled, redundant and predictable ways. When designed properly, this kind of architecture limits the blast radius of any single point of failure and makes it easier to maintain service continuity even when parts of the system are down or under strain. The real takeaway here is that resilience is not something any single organization can achieve in isolation. It emerges from the collective design of the entire ecosystem, where each participant contributes to the overall stability of the services they all depend on.

When regulatory pressure gives way to strategic opportunity

The building blocks are already there. Practices like supplier due diligence, security certifications and business continuity planning are not new. What NIS2 does is raise the bar on how consistently and how deeply they are applied. It also brings a level of structure to conversations that were previously fragmented, particularly when it comes to expectations between partners. And therein lies the strategic upside. Organizations that can clearly demonstrate how they manage risk across their supply chains, how they design for resilience and how they respond to disruption are in a stronger position, not just from a regulatory standpoint, but in how they engage with customers and partners. In some sectors, we’re already seeing this play out through increased requests for transparency, self-assessments and proof of compliance. That trend is only going to accelerate. For CIOs, it’s a golden opportunity to move beyond a defensive posture and position resilience as a key competitive differentiator. It becomes a way to build trust, strengthen relationships and support more sustainable growth, rather than simply a requirement to satisfy regulators.

NIS2 may be the catalyst, but the underlying change runs deeper. It’s pushing CIOs to think beyond compliance and toward a more structural understanding of risk that reflects how digital services operate today.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

A Cybersecurity Lifeline for Lean IT Teams: Introducing C.R.E.W.

“Too small to target” is a dangerous cybersecurity myth, while “Where do I start?” is a legitimate cyber defense question.

Imagine leaving your office unlocked overnight—not because you don’t have anything valuable, but because you assume no one would bother breaking in.

The post A Cybersecurity Lifeline for Lean IT Teams: Introducing C.R.E.W. appeared first on Security Boulevard.

FCC Proposes Tougher KYC Rules to Crack Down on Illegal Robocalls

KYC Rules for Robocalls

The Federal Communications Commission (FCC) is proposing stricter Know-Your-Customer (KYC) rules for robocalls as part of a broader effort to curb illegal calls and protect consumers. In a newly released Further Notice of Proposed Rulemaking, the agency outlined plans to tighten requirements for originating voice service providers, which are considered the first line of defense against unlawful robocalls. The proposal reflects growing concern that existing KYC rules for robocalls are not being consistently enforced, allowing bad actors to exploit gaps in the system. The FCC emphasized that stopping illegal calls before they enter the network remains the most effective way to reduce fraud and abuse.

Why the FCC Is Expanding KYC Rules for Robocalls

Under current FCC robocall regulations, voice service providers are required to take “affirmative, effective” steps to know their customers. However, regulators say some providers are failing to carry out adequate checks, resulting in a surge of illegal robocalls that defraud consumers and expose telecom networks to misuse. “Combatting illegal calls is our top consumer protection priority, and we are taking a holistic approach by attacking them at every point in their lifecycle,” the agency said. The FCC noted that weak KYC rules for robocalls not only enable scams but also make it harder for law enforcement to track criminal activities, including drug trafficking and human exploitation that rely on anonymous communication channels.

Proposed Changes to KYC Rules for Robocalls

The FCC is seeking public comment on several measures aimed at strengthening KYC rules for robocalls and improving telecom KYC compliance. One key proposal is to require providers to collect more detailed customer information before granting access to calling services. This includes name, physical address, government-issued identification number, and an alternate contact number for all new and renewing customers. For high-volume callers, such as businesses or bulk calling services, the FCC is considering additional requirements. These may include collecting information on how the service will be used—such as marketing or political campaigns—as well as technical data like IP addresses used to place calls. The Commission believes these enhanced Know-Your-Customer rules for robocalls could deter fraudsters from entering the network and make it easier to identify them if illegal activity occurs.
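As an illustration of the data involved (the field names and validation logic here are my own sketch; the FNPRM does not prescribe a schema), the proposed baseline and high-volume collection requirements might map to a record like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KycRecord:
    """Hypothetical sketch of the customer data the FCC proposal describes."""
    name: str
    physical_address: str
    government_id_number: str
    alternate_contact_number: str
    # Additional fields the FCC is considering for high-volume callers:
    high_volume: bool = False
    stated_use: Optional[str] = None  # e.g. "marketing" or "political campaign"
    originating_ip_addresses: list[str] = field(default_factory=list)

    def missing_required_fields(self) -> list[str]:
        """Return the names of baseline fields that are empty."""
        required = {
            "name": self.name,
            "physical_address": self.physical_address,
            "government_id_number": self.government_id_number,
            "alternate_contact_number": self.alternate_contact_number,
        }
        return [k for k, v in required.items() if not v]

record = KycRecord(
    name="Example Corp",
    physical_address="1 Example Way, Springfield",
    government_id_number="",
    alternate_contact_number="+1-555-0100",
    high_volume=True,
    stated_use="marketing",
)
print(record.missing_required_fields())  # → ['government_id_number']
```

A provider would presumably refuse onboarding (or trigger manual review) whenever the required baseline fields are incomplete; that enforcement step is left out of this sketch.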

Verification, Monitoring, and Data Retention

Beyond data collection, the FCC is also proposing stricter verification and monitoring under its updated KYC rules for robocalls. Providers may be required to verify customer identities using supporting documents such as government-issued IDs or business registration records. The agency is also exploring whether companies should retain KYC records for up to four years after a customer relationship ends, allowing time for investigations into illegal robocalls. Another key focus is ongoing monitoring. The FCC is considering whether providers should re-verify customer information when unusual activity is detected, such as sudden spikes in call volume or changes in traffic patterns. These measures aim to ensure that telecom networks are not continuously exploited by bad actors using false or stolen identities.
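The "sudden spikes in call volume" trigger could be implemented many ways; as a minimal sketch (the thresholds, function name, and heuristic are assumptions of mine, not anything the FCC has specified), a provider might compare a customer's daily call count against their trailing average:

```python
from statistics import mean

def needs_reverification(daily_call_counts: list[int],
                         today_count: int,
                         spike_factor: float = 5.0,
                         min_baseline: int = 100) -> bool:
    """Flag a customer for re-verification when today's call volume is a
    large multiple of their recent daily average (illustrative heuristic).
    """
    if not daily_call_counts:
        # No history yet: flag any heavy traffic from a brand-new customer.
        return today_count >= min_baseline
    baseline = max(mean(daily_call_counts), 1)
    return today_count >= spike_factor * baseline

history = [120, 130, 110, 140, 100]          # ~120 calls/day on average
print(needs_reverification(history, 125))    # False: normal volume
print(needs_reverification(history, 2400))   # True: ~20x the baseline
```

A real deployment would look at richer signals (traffic patterns, destinations, answer rates), but the shape of the check — baseline, threshold, escalation — would be similar.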

Tougher Penalties to Enforce Compliance

To strengthen enforcement, the FCC has proposed financial penalties tied directly to violations of KYC rules for robocalls. The agency is considering a base fine of $2,500 per illegal call, aligning penalties with the scale of harm caused. This per-call penalty structure is designed to discourage large-scale robocall operations, where millions of fraudulent calls can generate significant profits. The FCC believes that stronger enforcement will push providers to take telecom KYC compliance more seriously and close existing loopholes.
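The per-call structure scales quickly. A back-of-the-envelope calculation (the $2,500 base fine is from the proposal; the call volume is an arbitrary example of mine) shows why it targets large-scale operations:

```python
BASE_FINE_PER_CALL = 2_500  # proposed base forfeiture per illegal call, USD

def total_penalty(illegal_calls: int) -> int:
    """Base forfeiture for a robocall campaign under the proposed structure."""
    return illegal_calls * BASE_FINE_PER_CALL

# A campaign of one million illegal robocalls would face, on paper,
# $2.5 billion in base forfeitures:
print(f"${total_penalty(1_000_000):,}")  # → $2,500,000,000
```

Actual forfeitures would depend on adjustments the Commission applies in enforcement, but the base arithmetic makes clear that bulk operations, not one-off violations, bear the brunt.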

Recent Enforcement Highlights Gaps

The push for stronger KYC rules for robocalls comes amid ongoing enforcement challenges. In a recent case, the FCC proposed a $4.5 million fine against Voxbeam Telecommunications for allegedly routing illegal robocalls into U.S. networks. The investigation found that Voxbeam accepted traffic from Axfone, a Czech-based provider not listed in the FCC’s Robocall Mitigation Database. Under existing rules, such traffic should have been blocked, raising concerns about gaps in compliance and oversight. If adopted, the new rules could significantly reshape how voice service providers onboard and monitor customers, bringing telecom practices closer to the stricter identity verification standards already seen in the financial sector.

FCC targets foreign router imports amid rising cybersecurity concerns

The FCC will ban new foreign-made routers in the U.S. over security risks, unless approved by DHS or defense authorities.

The U.S. FCC announced a ban on importing new foreign-made consumer routers, citing unacceptable cyber and national security risks. The decision, backed by Executive Branch assessments, means such devices can no longer be sold or marketed in the U.S. unless they receive special approval.

Routers will be added to the Covered List, with exceptions only for devices that the Department of Homeland Security or defense authorities verify pose no threat to communications networks.

“Today, the Federal Communications Commission updated its Covered List to include all consumer-grade routers produced in foreign countries. Routers are the boxes in every home that connect computers, phones, and smart devices to the internet.” reads the announcement published by the FCC. “This followed a determination by a White House-convened Executive Branch interagency body with appropriate national security expertise that such routers “pose unacceptable risks to the national security of the United States or the safety and security of United States persons.””

The U.S. “Covered List” is a security list maintained by the Federal Communications Commission under the Secure and Trusted Communications Networks Act.

It identifies communications equipment and services that pose national security risks to U.S. networks. Anything placed on this list is effectively banned from being authorized, marketed, or sold in the United States.

U.S. authorities warn that foreign-made routers create serious supply chain and cybersecurity risks, potentially disrupting the economy, critical infrastructure, and national defense. Policy guidance stresses reducing dependence on foreign components for essential technologies.

These routers have already been exploited by threat actors for hacking, espionage, and intellectual property theft, and were linked to major cyber espionage campaigns like Volt Typhoon, Flax Typhoon, and Salt Typhoon targeting U.S. infrastructure.

Manufacturers can still request Conditional Approval if their devices are proven safe. The rules apply only to new models, meaning existing routers already in use or previously approved can still be sold and used without restrictions.

Currently, only a few products, like drones and software-defined radios from SiFly Aviation, Mobilicom, ScoutDI, and Verge Aero, are approved. Router manufacturers can seek Conditional Approval, while U.S.-made devices such as Starlink routers are exempt.

The FCC warns foreign routers pose major supply chain and cybersecurity risks, potentially disrupting infrastructure and the economy. Weak security in home and small office routers has already been exploited for hacking, espionage, and data theft, and can also turn devices into botnets for large-scale cyberattacks.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, router)

The FCC Just Blocked Every New Foreign-Made Router from the U.S. Market

Foreign-Made Router, FCC Ban, FCC

The router sitting in your home — the one connecting every phone, laptop, and smart device on your network to the internet — is almost certainly made overseas. As of March 23, no new model of that device can receive U.S. market authorization unless it clears a security review by the Department of War or the Department of Homeland Security first.

The Federal Communications Commission updated its Covered List to include all routers produced in a foreign country, following a National Security Determination received on March 20 from a White House-convened Executive Branch interagency body.

The determination concluded that foreign-produced routers introduce a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense, and pose a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.

The FCC's Covered List — established under the Secure and Trusted Communications Networks Act — carries real enforcement teeth. Equipment on the list is barred from receiving new FCC equipment authorizations, and because most electronic devices require such authorization prior to importation, marketing, or sale in the U.S., a listing effectively blocks new devices from entering the U.S. market.

The national security determination cited three Chinese state-sponsored cyber campaigns by name. Routers produced abroad were directly implicated in the Volt, Flax, and Salt Typhoon cyberattacks, which targeted critical American communications, energy, transportation, and water infrastructure.

Salt Typhoon penetrated multiple U.S. telecommunications carriers and persisted inside their networks for months; Volt Typhoon pre-positioned itself inside U.S. critical infrastructure for potential future disruption; and Flax Typhoon operated a 260,000-device botnet largely built from compromised consumer routers.

Unlike prior Covered List entries that targeted specific entities such as Huawei and ZTE, this update applies categorically based on place of production, not manufacturer identity. That distinction matters enormously for the industry.

Virtually all routers are made outside the United States, including those produced by U.S.-based companies like TP-Link, which manufactures its products in Vietnam. The entire router industry therefore appears to be affected, at least with respect to new devices not previously authorized by the FCC. Netgear, Amazon Eero, Google Nest WiFi, Asus, Linksys, and D-Link all manufacture in Asia. The one apparent exception is the newer Starlink Wi-Fi router, which the company says is manufactured in Texas.

The action does not strand existing users. Consumers can continue using any router they have already purchased, and retailers can continue selling previously authorized models already in their supply chains. Firmware updates for covered devices remain permitted at least through March 1, 2027.

The disruption falls entirely on new product cycles — which in a fast-moving consumer networking market means the freeze begins almost immediately.

A rule that bans new foreign router models while leaving millions of existing foreign-made devices completely untouched does not make U.S. networks measurably more secure today. Security researchers have noted that the Volt Typhoon attacks cited by the FCC as justification primarily targeted Cisco and Netgear hardware — U.S.-designed products — pointing to software patching failures rather than manufacturing origin as the operational vulnerability.

A Conditional Approval pathway exists for manufacturers willing to pursue it. It requires companies to commit to establishing or expanding U.S. manufacturing for the products they want to bring to market. That is a significant industrial policy commitment on top of any security review, and one that smaller router vendors may find prohibitive.

The December 2025 drone ban used an identical framework — and as of publication, it had cleared exactly four non-Chinese drone systems while leaving major Chinese manufacturers fully blocked.

Also read: FCC Set to Reverse Course on Telecom Cybersecurity Mandate

TikTok Says No to End-to-End Encryption: Here’s Why That’s a Big Deal

In a move that bucks the entire industry trend, TikTok has confirmed it will not implement end-to-end encryption (E2EE) for direct messages on its platform — arguing that E2EE would make users less safe. We break down what’s really going on: the child safety argument, the privacy counterargument, the geopolitical questions surrounding ByteDance, and what […]

The post TikTok Says No to End-to-End Encryption: Here’s Why That’s a Big Deal appeared first on Shared Security Podcast.

Reading White House President Trump’s Cyber Strategy for America (March 2026)

White House released President Trump’s Cyber Strategy for America, framing cyberspace as a strategic domain to project power and counter growing cyber threats

The White House has released “President Trump’s Cyber Strategy for America,” a document that outlines how the United States intends to maintain dominance in cyberspace and confront an increasingly hostile digital landscape.

The strategy reflects a broader shift: cyberspace is no longer viewed merely as a technical domain to defend, but as a strategic arena where national power is exercised, protected, and projected.

Donald Trump presented the document outlining the administration’s vision and priorities for addressing cyber threats targeting citizens, businesses, and critical infrastructure. From financial systems and healthcare to water utilities and telecommunications networks, the strategy highlights how both state-backed adversaries and cybercriminal groups increasingly exploit digital systems to advance geopolitical interests and economic gain.

To address this evolving threat landscape, the strategy introduces six policy pillars that will guide federal actions in the coming years:

  • Build Cyber Workforce
    Expand cyber talent through education, training, and collaboration between government, academia, and industry.
  • Shape Adversary Behavior
    Use offensive and defensive cyber operations and national power tools to deter, disrupt, and impose costs on state and criminal cyber adversaries.
  • Promote Common-Sense Regulation
    Streamline cyber and data regulations to reduce compliance burdens and enable faster, more effective private-sector responses to threats.
  • Modernize Federal Networks
    Secure and upgrade federal systems with zero-trust, cloud migration, AI-driven security, and post-quantum cryptography.
  • Secure Critical Infrastructure
    Protect key sectors—energy, finance, telecom, water, healthcare—and strengthen supply chain resilience with government-industry cooperation.
  • Sustain Tech Superiority
    Protect innovation and leadership in AI, quantum computing, cryptography, and emerging technologies critical to national security.

Modernizing federal networks represents another key priority. The strategy calls for the adoption of zero-trust architectures, post-quantum cryptography, cloud migration, and AI-driven security tools to strengthen the resilience of government systems. At the same time, it emphasizes protecting critical infrastructure and supply chains, including energy grids, financial systems, telecommunications, hospitals, and data centers.

A central element of the strategy is the need to maintain U.S. superiority in emerging technologies and preserve technological sovereignty. Artificial intelligence, quantum computing, and advanced cryptography are treated not simply as technological priorities but as strategic assets tied directly to national security and economic power.

Equally important is the development of a stronger cyber workforce. The document describes cybersecurity talent as a strategic national asset, calling for deeper collaboration between academia, industry, and government to train the next generation of specialists and strengthen operational capabilities.

Perhaps the most significant message of the strategy is its posture. The United States declares that it will act rapidly, deliberately, and proactively to disrupt cyber threats, leveraging coordinated actions between government agencies, private companies, and international allies.

Another key element is the integration of the private sector into national cyber defense. The strategy acknowledges that much of the infrastructure underpinning the digital economy is owned and operated by private companies, making collaboration essential to building resilient systems and responding quickly to emerging threats.

In this vision, cyberspace is no longer only a domain of defense; it is a key theater of geopolitical competition where technological leadership and national power increasingly converge.

For policymakers and security experts worldwide, the message is clear: cybersecurity is no longer just about protecting networks; it is about sustaining national power in the digital age.

Pierluigi Paganini

(SecurityAffairs – hacking, White House President Trump’s Cyber Strategy)

Irish regulator probes X after Grok allegedly generated sexual images of children

Ireland’s Data Protection Commission opened a probe into X over Grok AI tool allegedly generating sexual images, including of children.

Ireland’s Data Protection Commission has launched another investigation into X over Grok’s AI image generator. The probe focuses on reports that the tool created large volumes of non-consensual and sexualized images, including content involving children, potentially violating EU data protection laws.

“The Data Protection Commission (DPC) has today announced that it has opened an inquiry into X Internet Unlimited Company (XIUC) under section 110 of the Data Protection Act 2018.” reads the Irish DPC’s press release. “The inquiry concerns the apparent creation, and publication on the X platform, of potentially harmful, non-consensual intimate and/or sexualised images, containing or otherwise involving the processing of personal data of EU/EEA data subjects, including children, using generative artificial intelligence functionality associated with the Grok large language model within the X platform.”

In January, X’s safety team blocked the @Grok account from editing images of real people to add revealing clothing, such as bikinis, for all users. Image creation and editing features now remain available only to paid subscribers, adding an accountability layer to deter abuse and policy violations.

“We have implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.” reads the X announcement. “Image creation and the ability to edit images via the [@]Grok account on X are now only available to paid subscribers globally. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the [@]Grok account to violate the law or our policies can be held accountable.”

Ireland’s Data Protection Commission’s probe will assess whether X breached key GDPR provisions on lawful data processing, privacy by design, and impact assessments. As X’s lead EU regulator, the DPC said it had already engaged with the company and will now conduct a large-scale investigation into its compliance with fundamental data protection obligations.

“The decision to commence the inquiry was notified to XIUC on Monday 16 February.” Ireland’s DPC continues. “The purpose of the inquiry is to determine whether XIUC has complied with its obligations under the GDPR, including its obligations under Article 5 (principles of processing), Article 6 (lawfulness of processing), Article 25 (Data Protection by Design and by Default) and Article 35 (requirement to carry out a Data Protection Impact Assessment) with regard to the personal data processed of EU/EEA data subjects.”

The Irish DPC joins a growing list of regulators investigating X, including the European Commission, the UK’s ICO and Ofcom, and authorities in Australia, Canada, India, Indonesia, and Malaysia. France has also been conducting a broad investigation since January, expanding its scope as new concerns arise.

“The DPC has been engaging with XIUC since media reports first emerged a number of weeks ago concerning the alleged ability of X users to prompt the @Grok account on X to generate sexualised images of real people, including children. As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry which will examine XIUC’s compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand,” said Deputy Commissioner Graham Doyle.

A report published by the nonprofit watchdog group Center for Countering Digital Hate (CCDH) estimates that Grok generated around 3 million sexualized images in just 11 days after X launched its image-editing feature, an average of about 190 per minute. Among them, roughly 23,000 appeared to depict children, or one every 41 seconds, plus another 9,900 cartoon sexualized images of minors. Researchers found that 29% of identified child images remained publicly accessible, highlighting the scale and speed of the content spread.


Pierluigi Paganini


Preparing for the future of data privacy

The focus on data privacy has shifted quickly beyond compliance in recent years, and that shift is expected to accelerate in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. Even so, most organizations report a recent change in posture: compliance is moving from a “check the box” task to a strategic function.

With this evolution in data privacy, many organizations find that they need to proactively make changes to their approach to set themselves up for the future. Here are five key considerations to get ready for the future of data privacy.

1. Create a process for staying up to date on new and evolving regulations

While data privacy is more than simply compliance, your organization must comply with all applicable regulations first and foremost or risk fines and reputational damage. However, regulations are constantly being passed and amended, making it exceptionally challenging to stay up to date. As of September 2024, 20 states had consumer data privacy laws, with legislation pending in numerous other states. While the U.S. does not currently have a federal data privacy law, the proposed American Privacy Rights Act remains in the early stages of the legislative process.

As the data privacy regulation landscape continues to change, organizations must create a process to manage all pertinent regulations, which can be challenging for global companies. Because organizations must comply with the regulations of their customers’ locations, not the company’s own, global businesses often find themselves bound by many different regulations. Organizations are increasingly turning to artificial intelligence (AI) tools that monitor relevant regulations and flag compliance gaps, saving time and reducing the risk of fines.
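The "comply where your customers are" principle can be made concrete with a simple lookup. This is an illustrative sketch only: the regulation names are real, but the mapping table and helper function are hypothetical, not a compliance tool.

```python
# Hypothetical mapping from customer jurisdictions to the privacy
# regulations that apply there. Real-world mappings are far larger
# and change frequently, which is why monitoring tools exist.
APPLICABLE_REGULATIONS = {
    "DE": ["GDPR"],
    "IE": ["GDPR"],
    "GB": ["UK GDPR"],
    "US-CA": ["CCPA/CPRA"],
    "US-VA": ["VCDPA"],
}

def regulations_for_customers(jurisdictions):
    """Return the deduplicated, sorted list of regulations an
    organization must track, based on customer locations."""
    required = set()
    for j in jurisdictions:
        required.update(APPLICABLE_REGULATIONS.get(j, []))
    return sorted(required)

# A customer base spanning three jurisdictions already triggers
# three distinct regimes.
print(regulations_for_customers(["DE", "US-CA", "GB"]))
```

Even this toy table shows why the obligation set grows with the customer footprint rather than the office footprint.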

2. Focus on balancing data privacy with analytics and AI goals

A study from the University of Pennsylvania’s Wharton School found that the percentage of employees who used AI weekly increased from 37% in 2023 to 73% in 2024. This rapid rise in AI adoption has created serious data privacy issues. Top concerns include a lack of data transparency, new endpoints for vulnerabilities, third-party vendors and potential regulatory gaps. At the same time, businesses not using AI will likely fall behind competitors in productivity and personalization.

Because not using AI is rarely the right business decision, organizations must take a strategic approach to balancing business value and data security. While technology is part of the solution, platforms and systems cannot solve the challenge without a balanced approach. By creating processes and a framework that help them evaluate risks and benefits, businesses can make smart decisions with regard to data privacy. For example, a company may adopt AI-driven automation throughout its organization except in use cases that involve sensitive customer and employee data.
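The "automate everywhere except sensitive data" rule above can be expressed as a simple policy gate. This is a hypothetical sketch: the field names and classification set are illustrative, not a real data-classification scheme.

```python
# Hypothetical set of field names an organization has classified
# as sensitive customer or employee data.
SENSITIVE_FIELDS = {"ssn", "salary", "health_record", "home_address"}

def ai_automation_allowed(use_case_fields):
    """Permit AI automation only when a use case touches no
    sensitive fields (set.isdisjoint returns True when the two
    sets share no elements)."""
    return SENSITIVE_FIELDS.isdisjoint(use_case_fields)

print(ai_automation_allowed({"order_id", "product"}))  # → True
print(ai_automation_allowed({"salary", "order_id"}))   # → False
```

In practice such a gate would sit inside a review workflow rather than a single function, but the decision logic is the same: classify the data first, then decide where automation is permitted.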


3. Consider privacy-preserving machine learning (PPML)

By using specific techniques in AI and analytics, organizations can reduce data privacy risks. Many organizations are turning to PPML, which is an initiative started by Microsoft to protect data privacy when training large-capacity language models. Here are the three components of PPML defined by Microsoft:

  1. Understand: Organizations should conduct threat modeling and attack research while also identifying properties and guarantees. Additionally, leaders need to understand regulatory requirements.
  2. Measure: To determine the current status of data privacy, leaders should capture vulnerabilities quantitatively. Next, teams should develop and apply frameworks to monitor risks and mitigation success.
  3. Mitigate: After gaining a full picture of data privacy, teams must develop and apply techniques to reduce privacy risks. Lastly, leaders must meet all legal and compliance regulations.
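One common technique under the "Mitigate" step is differential privacy: adding calibrated noise to aggregate statistics so individual records cannot be inferred. The sketch below is an assumption-laden illustration of the idea for a single counting query, not part of Microsoft's PPML tooling.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Return a noisy count of items matching `predicate`.
    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) via the inverse CDF applied
    # to a uniform draw on (-0.5, 0.5).
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the trade-off between analytic accuracy and privacy budget is exactly the balance the framework above is meant to manage.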

4. Focus on data minimization

In the past, many businesses defaulted to keeping all — or at least most of — their data for a lengthy period of time. However, every piece of data stored must follow compliance regulations, which has led many organizations to adopt a strategy known as data minimization.

Deloitte defines data minimization as taking steps to determine what information is needed, how it’s protected and used, and how long to keep it. By taking this measured approach and determining which data to keep, organizations can reduce costs, make it easier to find the right data and improve compliance. Additionally, it’s easier and takes fewer resources to secure a smaller volume of data.
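The "how long to keep it" part of that definition is usually implemented as a retention schedule. Below is a minimal sketch assuming a hypothetical record layout and made-up retention periods; real schedules come from legal and records-management policy.

```python
from datetime import datetime, timedelta

# Hypothetical retention periods per data category.
RETENTION = {
    "marketing": timedelta(days=365),         # example: 1 year
    "transactions": timedelta(days=7 * 365),  # example: 7 years
}

def expired(record, now):
    """A record is expired when its age exceeds the retention period
    for its category; unknown categories default to immediate review."""
    limit = RETENTION.get(record["category"], timedelta(0))
    return now - record["created"] > limit

def purge(records, now):
    """Split records into (keep, delete) according to retention."""
    keep = [r for r in records if not expired(r, now)]
    delete = [r for r in records if expired(r, now)]
    return keep, delete
```

Running such a job on a schedule keeps the stored volume — and therefore the compliance and security surface — as small as the business actually needs.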

5. Create a culture of data privacy

Just like cybersecurity, data privacy is not simply the job of specific employees. Instead, organizations need to instill the mindset that every employee is responsible for data privacy. Creating a data privacy culture doesn’t happen overnight or with a single meeting. Instead, leaders must work to instill the values and focus over time. The first step is for leaders to become champions, express the shift in responsibility and “walk the walk” in terms of data privacy.

Because data privacy depends on team members following the processes and requirements specified, organizations must not simply dictate the rules but instead must explain the importance of data privacy. When employees understand the risks of not following the processes as well as the consequences to the organization and its consumers, they are more likely to comply.

Additionally, leaders should measure compliance with the processes to determine the current state and then the goal. By then offering incentives, organizations can help encourage compliance as well as stress its overall importance.

Start crafting your data privacy approach now

As your team plans for 2025 and beyond, now is the time to make sure your approach and goals align with where the industry is moving. Organizations that understand where data privacy is headed, and take the steps needed to align with that future, will be better prepared to gain business value from their data while still ensuring compliance.

The post Preparing for the future of data privacy appeared first on Security Intelligence.
