AI is spreading decision-making, but not accountability

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears, and the question of who’s responsible becomes immediate and unavoidable.

As AI moves from experimentation into production, accountability is no longer a technical concern; it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.

Jessica Eaves Mathews, founder, Leverage Legal Group

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

Chris Drumgoole, president, global infrastructure services, DXC Technology

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

Ojas Rege, SVP and GM, privacy and data governance, OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

Simon Elcham, CAIO, Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

Jonathan Tushman, CTO, Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates, often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.
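
One practical starting point is a machine-readable inventory. As a minimal sketch, assuming a simple homegrown schema (the field names and helper below are illustrative, not any standard), each entry records who owns a system, what data it touches, and whether it has passed governance review:

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    # One entry in an enterprise AI inventory (illustrative schema).
    name: str                          # e.g. "claims-triage-model"
    business_owner: str                # accountable business executive
    vendor: Optional[str]              # None if built in-house
    data_categories: List[str]         # e.g. ["customer PII", "pricing data"]
    reviewed: bool = False             # passed governance review?
    last_review: Optional[date] = None

def unreviewed(inventory: List[AISystemRecord]) -> List[AISystemRecord]:
    # Systems in use without a completed governance review.
    return [s for s in inventory if not s.reviewed]

Even a list this simple makes shadow AI measurable: anything running in production that never appears in the inventory, or that sits in unreviewed(), is exactly the invisible exposure described above.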

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, attention often turns to enterprise leadership, and, in operational terms, to the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

White House weighs pre-release reviews for high-risk AI models

The Trump administration is in early discussions about whether advanced AI models should be vetted before public release, according to reporting from the New York Times, the Wall Street Journal, and Axios.

The conversations center on systems capable of facilitating cyberattacks, particularly models that could help users identify and exploit software vulnerabilities. Officials are considering several options, including formal pre-release review processes and government-led testing for higher-risk systems. No proposal has been finalized, and no timeline has been set.

What has changed

The discussions mark a shift in tone, if not yet in policy. On Jan. 20, 2025, his first day back in office, Donald Trump revoked Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Three days later, he issued his own order, “Removing Barriers to American Leadership in Artificial Intelligence,” signaling a significant shift away from the Biden administration’s emphasis on oversight and risk mitigation toward a framework centered on deregulation and the promotion of AI innovation.

Among the things that order effectively ended: mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols, and monitoring requirements for AI used in critical infrastructure, all introduced under the Biden framework. The new discussions suggest certain security risks — particularly those tied to offensive cyber capabilities — warrant a more interventionist posture, even as the administration remains broadly opposed to sweeping AI regulation.

The Mythos factor

The discussion follows Anthropic’s recent introduction of Mythos, a model the company has described as representing a watershed moment for cybersecurity.

Anthropic has said Mythos Preview has found thousands of high-severity vulnerabilities, including some in every major operating system and web browser, and that AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. In one benchmark, the company reported significantly higher success rates compared to earlier models.

Anthropic has not released the model publicly. Instead, it launched Project Glasswing, committing up to $100 million in usage credits to a select group of technology and cybersecurity companies to use Mythos for defensive purposes — finding and patching vulnerabilities before malicious actors can exploit them.

Anthropic has also been briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and other stakeholders on the potential risks and benefits of Mythos Preview. OpenAI has developed a comparable model and has released it to a small set of companies through an existing trusted-access program.

What a review might mean

Pre-release evaluation of AI models is not a new idea, but it remains poorly defined in the US policy context. The Biden executive order Trump revoked had required developers of the largest AI systems to notify the government and share safety test results before deployment — one of several provisions the Trump administration characterized as burdensome obstacles to innovation.

The institutional picture has also shifted. The US AI Safety Institute, created under the Biden order to conduct pre-deployment evaluation and housed within the National Institute of Standards and Technology, was substantially reorganized after Trump took office. In June 2025, the agency was renamed the Center for AI Standards and Innovation, and its mission was revised.

Commerce Secretary Howard Lutnick framed the change as a repudiation of what he called the use of safety as a pretext for censorship and regulation. The renamed center’s mandate now includes leading unclassified evaluations of AI capabilities that may pose risks to national security, with a stated focus on demonstrable risks such as cybersecurity, biosecurity, and chemical weapons, potentially positioning it to play a role in any future review process.

Other governments have moved further and faster. The UK’s AI Security Institute has conducted pre-deployment evaluations of several frontier models, working directly with labs, including Anthropic and OpenAI, to assess risk thresholds before release. The EU AI Act, which began phasing in last year, establishes mandatory conformity assessments for high-risk AI applications.

The US has not established a comparable framework or legal authority to require such reviews.

Meta accused of violating DSA by failing to safeguard minors

The European Commission accuses Meta of failing to protect children by allowing users under 13 on Instagram and Facebook, in breach of DSA rules.

The European Commission has accused Meta of violating child safety rules. Instagram and Facebook allegedly failed to prevent children under 13 from accessing their platforms. According to the Commission, Meta did not properly assess and mitigate risks to minors, breaching obligations under the Digital Services Act (DSA).

“The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act (DSA) for failing to diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.” reads the press release. “Despite Meta’s own terms and conditions setting the minimum age to access Instagram and Facebook safely at 13, the measures put in place by the company to enforce these restrictions do not seem to be effective. The measures do not adequately prevent minors under the age of 13 from accessing their services nor promptly identify and remove them, if they already gained access.”

Minors under 13 can easily bypass age rules on Instagram and Facebook by entering false birth dates, as Meta lacks effective verification checks. Reporting tools are also weak: they require multiple steps, are not user-friendly, and often fail to trigger proper action, allowing underage users to remain active. The European Commission says Meta’s risk assessment is incomplete and ignores evidence that 10–12% of under-13s use these platforms, as well as research showing younger children are more vulnerable to harm. As a result, Meta is urged to revise its risk evaluation methods and strengthen measures to detect, prevent, and remove underage users, ensuring better privacy, safety, and protection for minors.

“At this stage, the Commission considers that Instagram and Facebook must change their risk assessment methodology, in order to evaluate which risks arise on Instagram and Facebook in the European Union, and how they manifest.” continues the press release. “Moreover, Instagram and Facebook need to strengthen their measures to prevent, detect and remove minors under the age of 13 from their service.”

Instagram and Facebook can now review the Commission’s evidence and respond to the preliminary findings, while also taking steps to address the issues under the 2025 DSA Guidelines. The European Board for Digital Services will be consulted. If breaches are confirmed, Meta could face fines of up to 6% of its global annual turnover, along with periodic penalties to enforce compliance. These findings are not final.

The case stems from formal proceedings launched in May 2024, based on extensive analysis of internal data, risk reports, and input from experts and civil society. The Commission used DSA guidelines as a benchmark, stressing the need for effective age verification tools that are accurate, reliable, and privacy-friendly, and has proposed an EU age verification app as a reference model.

“The Commission continues its investigation into other potential breaches that are part of these ongoing proceedings, including Meta’s compliance with DSA obligations to protect minors and the physical and mental well-being of users of all ages.” concludes the press release. “This investigation covers also the assessment and mitigation of risks arising from the design of Facebook’s and Instagram’s online interfaces, which may exploit the vulnerabilities and inexperience of minors, leading to addictive behaviour and reinforcing the so-called ‘rabbit hole’ effects.”


How the EU’s NIS2 directive is changing how CIOs think about digital infrastructure

In conversations I’ve had with CIOs over the past year, there’s been a noticeable shift in how NIS2 (Network and Information Security Directive 2) is being discussed. It used to be filed away as another regulatory hurdle to clear, but now it’s prompting CIOs and their teams to think a little deeper about how well they understand the systems they depend on.

For a long time, risk has been largely framed within the boundaries of the organization — something that could be managed through internal controls, policies and audits. But that no longer reflects how digital services are built or delivered. Most organizations I encounter rely on a web of providers spanning cloud platforms, data centers, network operators and software vendors, all working together to create a “patchwork” ecosystem. NIS2 is different because it acknowledges that reality and, in doing so, it’s forcing a broader and sometimes more uncomfortable reassessment of where risk really sits.

What stands out to me is that NIS2 doesn’t just focus on individual accountability, but on the very definition of resilience itself. It recognizes that disruption rarely originates within a single process, or even a single organization. More often, it emerges from the connections between them; from unseen dependencies, indirect relationships and assumptions about how systems will behave under pressure. That’s novel, because it moves the conversation away from whether individual systems are secure, and toward whether the overall architecture those systems sit within can continue to function when something inevitably goes wrong. In that sense, NIS2 is less about tightening cybersecurity controls and more about encouraging a different way of thinking, where resilience is shaped as much by how infrastructure is designed and connected as it is by how it is protected.

NIS2 expands the definition of risk beyond the enterprise

One of the most immediate impacts I’m seeing from NIS2 is how it challenges long-held assumptions about control. Speak to any CIO, and they’ll usually talk about securing what sits within their own environments — their applications, services and data. But in practice, very little of today’s digital estate is fully owned because it’s so distributed among third parties with countless links and dependencies. Virtually all business services depend on layers of external providers, each with its own dependencies, architectures and risk profiles. According to the World Economic Forum, the top supply chain risk in 2026 is the inheritance risk — the inability to ensure the integrity of third-party software, hardware or services. NIS2 brings that into sharp focus by extending accountability beyond direct suppliers to include the wider ecosystem that supports them. In essence, it prompts businesses to shift from asking “are we secure?” to “how secure is everything we rely on to operate?”

That’s quite a challenge, because it’s not enough for businesses to simply know their suppliers — they need to understand how deeply interconnected those relationships are. In many cases, the real exposure sits several steps removed, in the providers behind your providers or in shared infrastructure that underpins multiple services at once. The “uncomfortable reassessment” I mentioned earlier is the squaring of this circle — how many organizations have full visibility into that sprawling landscape, let alone the means to control it?

NIS2 is compelling organizations to map dependencies more rigorously, to ask harder questions of their partners and network infrastructure, and to recognize that resilience is only as strong as the most fragile link in the chain. The WEF finds that in 2026, only 33% of organizations map their entire IT supply chain to gain this visibility. And even then, the risk added by unknown service providers is difficult to quantify, as when traffic traverses the public internet, where data pathways are neither visible nor controllable.
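
To make the “providers behind your providers” problem concrete, supply chain mapping is essentially a graph traversal. A minimal sketch, assuming you already hold a table of direct supplier relationships (the organization and supplier names here are invented for illustration):

from collections import deque

# Direct supplier relationships (hypothetical example data).
suppliers = {
    "our-company": ["cloud-provider", "payments-vendor"],
    "cloud-provider": ["datacenter-operator", "network-carrier"],
    "payments-vendor": ["cloud-provider", "kyc-service"],
}

def transitive_suppliers(org: str) -> set:
    # Every provider the org depends on, directly or indirectly (BFS).
    seen, queue = set(), deque([org])
    while queue:
        for dep in suppliers.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# transitive_suppliers("our-company") returns five providers, not two:
# the cloud provider, the payments vendor, and the three parties behind them.

Nodes that surface under multiple suppliers, like the cloud provider above, are precisely the shared dependencies and concentration risk NIS2 pushes organizations to make visible.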

Compliance is the trigger, but architecture is the challenge

What I find interesting about NIS2 is that it goes deeper than compliance — it’s trying to trigger a shift in culture. It’s relatively straightforward to introduce new policies, expand reporting requirements or formalize supplier assessments. But what happens when those requirements collide with the reality of how modern IT environments are built? Many organizations simply don’t have a clear, end-to-end view of how their services are delivered, how data flows between providers or how incidents might spread like wildfire across the ecosystem they depend on. NIS2 asks CIOs to look beyond governance frameworks and examine whether their operating models support the level of oversight and responsiveness the directive expects.

And that is where the architecture question becomes essential. It’s one thing to require suppliers to report incidents or meet certain security standards; it’s another thing entirely to ensure that the underlying infrastructure is designed to absorb disruption without cascading failure. In my experience, this is where many organizations begin to realize that resilience cannot be layered on afterwards. It must be built into how systems are structured, how dependencies are managed and how connectivity is established between environments. NIS2 may define what needs to be done, but it doesn’t prescribe how to do it. That responsibility sits with CIOs, who now have to translate regulatory intent into practical design decisions about where workloads run, how services interconnect and how failure is contained when it occurs.

Infrastructure design is now resilience design

What this ultimately leads to is a big infrastructure rethink. I’m privileged to have had some interesting discussions with CIOs and other executives about this very topic, so I know that resilience is beginning to be understood as more than a set of security controls. Connectivity is now at the heart of resilience, and in that sense, NIS2 has succeeded in getting organizations to think differently about what resilience really means. If a service depends on a single cloud region, a single network path or a tightly coupled set of providers, then no amount of policy or monitoring will prevent disruption when one of those elements fails. I’m pleased to see organizations starting to question these assumptions — not just asking whether systems are secure, but whether they are structured in a way that allows them to continue operating under stress. That shift in thinking does away with the abstract theory of resilience and defines it as something that can be designed and architected.

From a connectivity perspective, this means building in diversity at every level. Distributing workloads across geographically separate locations, establishing multiple, independent network paths and avoiding unnecessary concentration of critical services all contribute to a more resilient architecture. Interconnection plays a starring role here as the mechanism that allows different parts of the digital ecosystem to communicate in controlled, redundant and predictable ways. When designed properly, this kind of architecture limits the blast radius of any single point of failure and makes it easier to maintain service continuity even when parts of the system are down or under strain. The real takeaway here is that resilience is not something any single organization can achieve in isolation. It emerges from the collective design of the entire ecosystem, where each participant contributes to the overall stability of the services they all depend on.
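
One way to turn that principle into a routine check is to scan each service’s declared regions and network paths and flag anything without redundancy. A minimal sketch over an assumed, simplified service catalog (the service names and fields are hypothetical):

# Each service declares the regions and network paths it can use
# (hypothetical example data).
services = {
    "checkout":  {"regions": ["eu-west", "eu-central"],
                  "paths": ["carrier-a", "carrier-b"]},
    "reporting": {"regions": ["eu-west"],
                  "paths": ["carrier-a"]},
}

def single_points_of_failure(catalog: dict) -> dict:
    # Flag services whose regions or network paths lack redundancy.
    findings = {}
    for name, deps in catalog.items():
        issues = [kind for kind, values in deps.items() if len(values) < 2]
        if issues:
            findings[name] = issues  # e.g. {"reporting": ["regions", "paths"]}
    return findings

A check like this proves nothing about resilience on its own, but it surfaces single points of failure early, which is where any blast-radius conversation has to start.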

When regulatory pressure gives way to strategic opportunity

The building blocks are already there. Practices like supplier due diligence, security certifications and business continuity planning are not new. What NIS2 does is raise the bar on how consistently and how deeply they are applied. It also brings a level of structure to conversations that were previously fragmented, particularly when it comes to expectations between partners. And therein lies the strategic upside. Organizations that can clearly demonstrate how they manage risk across their supply chains, how they design for resilience and how they respond to disruption are in a stronger position, not just from a regulatory standpoint, but in how they engage with customers and partners. In some sectors, we’re already seeing this play out through increased requests for transparency, self-assessments and proof of compliance. That trend is only going to accelerate. For CIOs, it’s a golden opportunity to move beyond a defensive posture and position resilience as a key competitive differentiator. It becomes a way to build trust, strengthen relationships and support more sustainable growth, rather than simply a requirement to satisfy regulators.

NIS2 may be the catalyst, but the underlying change runs deeper. It’s pushing CIOs to think beyond compliance and toward a more structural understanding of risk that reflects how digital services operate today.

This article is published as part of the Foundry Expert Contributor Network.

FCC targets foreign router imports amid rising cybersecurity concerns

The FCC will ban new foreign-made routers in the U.S. over security risks, unless approved by DHS or defense authorities.

The U.S. FCC announced a ban on importing new foreign-made consumer routers, citing unacceptable cyber and national security risks. The decision, backed by Executive Branch assessments, means such devices can no longer be sold or marketed in the U.S. unless they receive special approval.

Routers will be added to the Covered List, with exceptions only for devices that the Department of Homeland Security or defense authorities verify pose no threat to communications networks.

“Today, the Federal Communications Commission updated its Covered List to include all consumer-grade routers produced in foreign countries. Routers are the boxes in every home that connect computers, phones, and smart devices to the internet.” reads the announcement published by FCC. “This followed a determination by a White House-convened Executive Branch interagency body with appropriate national security expertise that such routers “pose unacceptable risks to the national security of the United States or the safety and security of United States persons.””

The U.S. “Covered List” is a security list maintained by the Federal Communications Commission under the Secure and Trusted Communications Networks Act.

It identifies communications equipment and services that pose national security risks to U.S. networks. Anything placed on this list is effectively banned from being authorized, marketed, or sold in the United States.

U.S. authorities warn that foreign-made routers create serious supply chain and cybersecurity risks, potentially disrupting the economy, critical infrastructure, and national defense. Policy guidance stresses reducing dependence on foreign components for essential technologies.

These routers have already been exploited by threat actors for hacking, espionage, and intellectual property theft, and were linked to major cyber espionage campaigns like Volt Typhoon, Flax Typhoon, and Salt Typhoon targeting U.S. infrastructure.

Manufacturers can still request Conditional Approval if their devices are proven safe. The rules apply only to new models, meaning existing routers already in use or previously approved can still be sold and used without restrictions.

Currently, only a few products, like drones and software-defined radios from SiFly Aviation, Mobilicom, ScoutDI, and Verge Aero, have received such approval. U.S.-made devices such as Starlink routers are exempt.

The weak security common in home and small office routers can also turn compromised devices into botnets for large-scale cyberattacks, compounding the supply chain risks the FCC cites.


Reading White House President Trump’s Cyber Strategy for America (March 2026)

The White House released President Trump’s Cyber Strategy for America, framing cyberspace as a strategic domain to project power and counter growing cyber threats.

The White House has released “President Trump’s Cyber Strategy for America,” a document that outlines how the United States intends to maintain dominance in cyberspace and confront an increasingly hostile digital landscape.

The strategy reflects a broader shift: cyberspace is no longer viewed merely as a technical domain to defend, but as a strategic arena where national power is exercised, protected, and projected.

Donald Trump presented the document outlining the administration’s vision and priorities for addressing cyber threats targeting citizens, businesses, and critical infrastructure. From financial systems and healthcare to water utilities and telecommunications networks, the strategy highlights how both state-backed adversaries and cybercriminal groups increasingly exploit digital systems to advance geopolitical interests and economic gain.

To address this evolving threat landscape, the strategy introduces six policy pillars that will guide federal actions in the coming years:

  • Build Cyber Workforce
    Expand cyber talent through education, training, and collaboration between government, academia, and industry.
  • Shape Adversary Behavior
    Use offensive and defensive cyber operations and national power tools to deter, disrupt, and impose costs on state and criminal cyber adversaries.
  • Promote Common-Sense Regulation
    Streamline cyber and data regulations to reduce compliance burdens and enable faster, more effective private-sector responses to threats.
  • Modernize Federal Networks
    Secure and upgrade federal systems with zero-trust, cloud migration, AI-driven security, and post-quantum cryptography.
  • Secure Critical Infrastructure
    Protect key sectors—energy, finance, telecom, water, healthcare—and strengthen supply chain resilience with government-industry cooperation.
  • Sustain Tech Superiority
    Protect innovation and leadership in AI, quantum computing, cryptography, and emerging technologies critical to national security.

Modernizing federal networks represents another key priority. The strategy calls for the adoption of zero-trust architectures, post-quantum cryptography, cloud migration, and AI-driven security tools to strengthen the resilience of government systems. At the same time, it emphasizes protecting critical infrastructure and supply chains, including energy grids, financial systems, telecommunications, hospitals, and data centers.

A central element of the strategy is the need to maintain U.S. superiority, and technological sovereignty, in emerging technologies. Artificial intelligence, quantum computing, and advanced cryptography are treated not simply as technological priorities but as strategic assets tied directly to national security and economic power.

Equally important is the development of a stronger cyber workforce. The document describes cybersecurity talent as a strategic national asset, calling for deeper collaboration between academia, industry, and government to train the next generation of specialists and strengthen operational capabilities.

Perhaps the most significant message of the strategy is its posture. The United States declares that it will act rapidly, deliberately, and proactively to disrupt cyber threats, leveraging coordinated actions between government agencies, private companies, and international allies.

Another key element is the integration of the private sector into national cyber defense. The strategy acknowledges that much of the infrastructure underpinning the digital economy is owned and operated by private companies, making collaboration essential to building resilient systems and responding quickly to emerging threats.

In this vision, cyberspace is no longer only a domain of defense; it is a key theater of geopolitical competition where technological leadership and national power increasingly converge.

For policymakers and security experts worldwide, the message is clear: cybersecurity is no longer just about protecting networks; it is about sustaining national power in the digital age.


Irish regulator probes X after Grok allegedly generated sexual images of children

Ireland’s Data Protection Commission opened a probe into X over its Grok AI tool allegedly generating sexual images, including of children.

Ireland’s Data Protection Commission has launched another investigation into X over Grok’s AI image generator. The probe focuses on reports that the tool created large volumes of non-consensual and sexualized images, including content involving children, potentially violating EU data protection laws.

“The Data Protection Commission (DPC) has today announced that it has opened an inquiry into X Internet Unlimited Company (XIUC) under section 110 of the Data Protection Act 2018.” reads the Ireland’s DPC’s press release. “The inquiry concerns the apparent creation, and publication on the X platform, of potentially harmful, non-consensual intimate and/or sexualised images, containing or otherwise involving the processing of personal data of EU/EEA data subjects, including children, using generative artificial intelligence functionality associated with the Grok large language model within the X platform.”

In January, X’s safety team blocked the @Grok account from editing images of real people to add revealing clothing, such as bikinis, for all users. Image creation and editing features now remain available only to paid subscribers, adding an accountability layer to deter abuse and policy violations.

“We have implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.” reads the X announcement. “Image creation and the ability to edit images via the [@]Grok account on X are now only available to paid subscribers globally. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the [@]Grok account to violate the law or our policies can be held accountable.”

Ireland’s Data Protection Commission’s probe will assess whether X breached key GDPR provisions on lawful data processing, privacy by design, and impact assessments. As X’s lead EU regulator, the DPC said it had already engaged with the company and will now conduct a large-scale investigation into its compliance with fundamental data protection obligations.

“The decision to commence the inquiry was notified to XIUC on Monday 16 February.” Ireland’s DPC continues. “The purpose of the inquiry is to determine whether XIUC has complied with its obligations under the GDPR, including its obligations under Article 5 (principles of processing), Article 6 (lawfulness of processing), Article 25 (Data Protection by Design and by Default) and Article 35 (requirement to carry out a Data Protection Impact Assessment) with regard to the personal data processed of EU/EEA data subjects.”

The Irish DPC joins a growing list of regulators investigating X, including the European Commission, the UK’s ICO and Ofcom, and authorities in Australia, Canada, India, Indonesia, and Malaysia. France has also been conducting a broad investigation since January, expanding its scope as new concerns arise.

“The DPC has been engaging with XIUC since media reports first emerged a number of weeks ago concerning the alleged ability of X users to prompt the @Grok account on X to generate sexualised images of real people, including children. As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry which will examine XIUC’s compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand.” said Deputy Commissioner Graham Doyle.

An interesting report published by the nonprofit watchdog group Center for Countering Digital Hate (CCDH) estimates that Grok generated around 3 million sexualized images in just 11 days after X launched its image-editing feature, an average of about 190 per minute. Among them, roughly 23,000 appeared to depict children, or one every 41 seconds, plus another 9,900 cartoon sexualized images of minors. Researchers found that 29% of identified child images remained publicly accessible, highlighting the scale and speed of the content’s spread.
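
The per-minute and per-second rates follow directly from the report’s totals; a quick sanity check of the arithmetic, using the figures as reported above:

# Rates implied by the CCDH figures over the 11-day window.
DAYS = 11
TOTAL_IMAGES = 3_000_000
CHILD_IMAGES = 23_000

per_minute = TOTAL_IMAGES / (DAYS * 24 * 60)               # ~189.4, about 190 per minute
seconds_per_child_image = DAYS * 24 * 3600 / CHILD_IMAGES  # ~41.3, one every ~41 seconds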
