The AI economy needs a new vocabulary

May 6, 2026, 06:00

Technology is evolving faster than the language we use to describe it. As a result, people are often talking past each other about what software, AI and automation are. These are treated as single categories when in reality they contain several fundamentally different disciplines and economic models. And when reality changes faster than our language, confusion follows.

That’s roughly where we are with technology right now.

This challenge is not technical, it is semantic. When different groups use the same words to mean different things, alignment becomes difficult. A software engineer, product manager and executive may all use the word “software,” but they are often referring to entirely different categories of work.

This lack of precision becomes more problematic as systems scale. Decisions about hiring, tooling and strategy depend on understanding what kind of work is being done. Without clear vocabulary, those decisions and the resulting actions are often based on incorrect assumptions.

Why language is falling behind technology

We need terms that clarify understanding and convey a clear concept so that we can properly express the intended meaning. Software, AI, content generation and many other tech terms are being discussed; each can now have multiple meanings. They contain several fundamentally distinct ideas, disciplines and economic models. Because we lack clearly differentiated terms, people often end up talking past each other.

So, I’m going to propose a few terms. They may not be the ones that ultimately stick, but we need to start somewhere.

Bizware

Bizware is already the dominant form of software. I’ve previously used this term to describe the class of software that exists primarily to support business infrastructure rather than advance computing itself. Tools like Docker, Kubernetes, React and Angular exist to help organizations assemble and operate the digital part of a business. They solve operational and integration problems rather than fundamental computing problems. Millions of developers now work primarily in this ecosystem. It has its own tools, expectations and culture that are distinct from traditional computer science. Concepts like sprints, deployment pipelines and infrastructure orchestration dominate bizware and arise from the intersection of software and business rather than from computing itself.

The rise of bizware can be seen in the widespread adoption of platforms like the aforementioned Docker and Kubernetes, which exist to standardize the deployment of software infrastructure at scale. Docker, for example, enables developers to package applications into consistent environments, reducing variability between systems. Kubernetes extends this by orchestrating those environments across distributed systems, allowing organizations to manage complex deployments reliably.

These tools are not advancing computing theory. They are solving operational problems that arise when software becomes infrastructure. That distinction is what defines bizware.

Usage example: Our company builds bizware to integrate AWS datasets with high-speed data queries for front-end rendering.

AI Slop

I obviously didn’t invent the term AI Slop, but it still lacks a precise definition despite heavy use. And not all AI output has the same value. I propose AI Slop should differentiate between content that has some purpose and content that is fundamentally useless. Therefore, AI Slop is content that exists, or seems to exist, for no purpose other than existing or content that is so fundamentally flawed it cannot be used for any intended purpose.

An example of the former is the videos of Will Smith eating spaghetti. They exist because people are entertained by the fact that they can exist. Anthropic’s C compiler would fit into the latter category: it is so flawed that it has no applicable use case, nor does it do anything novel, particularly with respect to existing solutions.

One of the reasons the blanket term “AI” creates confusion is that it produces outputs across multiple categories at once. The same system generates truly useless content, while also generating content that can serve a function and generate value.

Without language to distinguish these outcomes, discussions about AI tend to become circuitous. If two people didn’t agree on what the color red is, it would be very difficult to discuss art. Right now, people don’t agree on the term “AI Slop,” so we have a challenge coming to a consensus about the nature of what AI generates.

Usage example: Anthropic’s C compiler is AI Slop.

GEA (Good Enough AI)

Not everything AI produces is useless. The real divide is economic, not technical. I’ve often said that AI automates mediocrity. But in many circumstances, mediocre output is economically valuable.

I refer to this category as GEA: Good Enough AI.

GEA is AI-generated material that performs its intended function even if the quality is far from exceptional. The output may require small corrections or modifications, but it is good enough to complete the task. In a business context, “working” is often far more valuable than “excellent.” If someone needs a simple Android app to track gym workouts, AI can generate code that isn’t elegant but still does the job. In that situation, perfection has little economic value.

The important distinction here is, as mentioned above, mostly economic, not technical. GEA is generated content that has value, whereas AI Slop does not. The term doesn’t imply anything about the quality of the output, only that the quality is high enough to represent value to the prompter.

This is where many organizations struggle. They attempt to apply a single standard of quality across all outputs, rather than recognizing that different categories of work require different thresholds. In many business contexts, speed and cost efficiency outweigh perfection. In others, precision and originality are critical. Treating all outputs as if they should meet the same standard leads to inefficiency and misaligned expectations.

Usage example: With the right prompts, Claude produced GEA SQL queries roughly 75% of the time.

HRC (Human Required Content)

Some work will remain human by definition, and some categories will require human expertise. I propose we refer to these as HRC: Human Required Content. Even when AI produces higher-quality output, that output is instantly accessible to everyone. As a result, it tends to redefine the baseline for mediocrity rather than the ceiling for excellence. Since the best work will always command an economic premium, there will always be economic value in humans who outperform AI.

This class of work is not going away. If anything, it is probably going to demand a higher premium as companies decide what about their business should be “industry-leading” versus what part of their business can merely function. 

Usage example: Our clients demand high-quality HRC for their customer-facing frontend products.

Why this matters

For companies, adopting this vocabulary has practical implications. It allows leaders to better define roles, set expectations and allocate resources. It also helps clarify where AI can be effectively deployed and where human expertise remains essential.

More importantly, it reduces confusion. When teams can clearly distinguish between different types of work, they can make better decisions about how to approach each one.

Technological change always outpaces language. When a new technology emerges, we initially try to describe it using the vocabulary we already have. Eventually, that stops working. New terms appear to describe new categories of work, new economic realities and new technical disciplines.

We are currently in that transitional moment with AI and modern software.

Bizware represents one new category of software work. AI Slop, GEA and HRC describe different tiers of AI-generated output and the economic roles they play.

These terms may not be the ones that ultimately stick, but the categories they describe already exist. As AI capabilities stabilize and genuine business models emerge, our language will evolve to reflect how these systems are used.

When that happens, the conversation around AI and software will become a lot clearer.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

The triple squeeze: Why the SaaSpocalypse story you’re hearing is missing the most dangerous part

May 5, 2026, 07:00

In early February 2026, nearly $285 billion in market value evaporated from software and related sectors in 48 hours. Atlassian dropped 36% for the month. The iShares Software ETF fell more than 30% from its September 2025 highs. Traders called it the “SaaSpocalypse.”

The popular narrative goes like this. AI coding tools have gotten so good that customers can build their own software, so why pay for a SaaS subscription when an engineer can vibe-code a replacement over a weekend?

That’s the least interesting version of what’s happening. The real story involves three forces converging on SaaS simultaneously, creating a structural trap that puts hundreds of thousands of white-collar jobs at risk. The force that will decide their fate isn’t AI. It’s a spreadsheet in a private equity office.

Force #1: AI isn’t replacing your product. It’s replacing the problem your product solves

Most enterprises won’t rebuild their tech stack with vibe coding, because that’s not how large organizations work. The bigger threat is that AI agents are making entire workflow categories obsolete. Take a SaaS ticketing product. The threat isn’t a competing ticketing system built in-house, it’s that customers are deploying AI agents to handle support directly, rethinking the pipeline from scratch. The old system isn’t replaced by a better one. It’s replaced by a fundamentally different approach to the job.

Satya Nadella telegraphed this on the BG2 podcast in December 2024, saying business applications would “probably collapse” in the agent era because they’re “CRUD databases with a bunch of business logic.” “All the logic will be in the AI tier.”

The data backs him up. Gartner forecasts worldwide AI spending will hit $2.5T in 2026, up 44% YoY, while overall IT budgets are growing only ~10%. That money is coming from other budgets. Average SaaS apps per company dropped 18% between 2022 and 2024 (BetterCloud). Among large enterprises, 82% are actively reducing vendor count (NPI Financial). Even companies not directly losing customers face fewer new purchases, slower expansions and harder renewals, because buyers are looking somewhere else.

Force #2: The $440 billion leverage trap

Between 2015 and 2025, private equity acquired more than 1,900 software companies in deals worth over $440 billion. The thesis was elegant. Sticky recurring revenue, high margins, predictable cash flows and high switching costs, all perfect for leveraged buyouts. It worked brilliantly for a decade. Then it stopped.

  • The setup (2020-2022). Public SaaS traded at a median 18x revenue in 2021 (Asana touched 89x). PE paid premium multiples with enormous debt. Anaplan went to Thoma Bravo for $10.4B. Coupa sold for $8B with $4.5-5B in leverage. Zendesk went private for $10.2B backed by ~$5B in private credit.
  • The collapse. By late 2025, the median public SaaS revenue multiple had fallen to 5.1x, over 70% below peak. Private software M&A multiples dropped below 3x in 2024.

Here’s the math. A PE firm buys a $100M-revenue SaaS company in 2021 at 8x ($800M), financing 40% with floating-rate debt, a $320M loan at SOFR plus 500 bps. The initial rate runs 5-6%. After Fed hikes, about 10%, or $32M annual interest. Then the multiple collapses. Even if revenue grows to $120M, at 2-3x the business is worth $240-360M. The loan is $320M. Equity sits somewhere between negative and barely positive.
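The math above can be sketched in a few lines. This is a back-of-the-envelope illustration using the example’s figures, not real deal data:

```python
def lbo_snapshot(revenue_m, entry_multiple, debt_fraction,
                 all_in_rate, exit_revenue_m, exit_multiple):
    """Back-of-the-envelope LBO math; all dollar figures in $M."""
    entry_value = revenue_m * entry_multiple      # purchase price
    debt = entry_value * debt_fraction            # floating-rate loan
    annual_interest = debt * all_in_rate          # yearly debt service
    exit_value = exit_revenue_m * exit_multiple   # value after multiple compression
    equity = exit_value - debt                    # what's left for the PE firm
    return debt, annual_interest, equity

# A $100M-revenue company bought in 2021 at 8x with 40% debt,
# ~10% all-in rate after Fed hikes (SOFR + 500 bps);
# revenue grows to $120M but the multiple compresses to 2.5x.
debt, interest, equity = lbo_snapshot(100, 8, 0.40, 0.10, 120, 2.5)
# debt ≈ $320M, interest ≈ $32M/yr, equity ≈ -$20M: underwater.
```

Even with 20% revenue growth, multiple compression alone wipes out the equity; the interest bill runs regardless.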

This isn’t hypothetical. Wells Fargo now uses “keys handover” for cases where PE hands underwater portfolio companies to lenders. A record $25B of software leveraged loans trade below 80 cents on the dollar. Total tech distressed debt sits near $46.9B. Apollo cut its software exposure nearly in half during 2025.

When equity is underwater, PE has two choices. Walk away or shift into margin-maximization mode by cutting headcount, consolidating and extracting cash.

Force #3: AI is the cost-cutting weapon PE has been waiting for

Here’s the cruel irony. AI is killing revenue, the debt still needs servicing and AI is also the most powerful cost-cutting tool ever handed to a PE operating partner.

Most SaaS employees are white-collar knowledge workers, including engineers, PMs, marketers, CS, sales, support and analysts, precisely the roles where AI is making its fastest inroads. Anthropic’s research found AI-exposed workers earn 47% more on average and are nearly 4x as likely to hold a graduate degree. Stanford Digital Economy Lab and Dallas Fed research shows employment among 22-25-year-olds in AI-exposed roles fell 13-16% between late 2022 and mid-2025, nearly 20% among young software developers.

Wall Street has picked its side. When Atlassian announced 1,600 layoffs (10% of workforce) to fund AI investment, the stock rose. When Block cut 4,000 jobs and Jack Dorsey said, “a significantly smaller team, using the tools we’re building, can do more and do it better,” the stock surged over 20%.

PE is moving too. Anthropic is reportedly in talks with Blackstone, Hellman & Friedman and Permira on a JV to embed Claude across portfolio companies. OpenAI is in parallel talks with Advent, Bain, Brookfield and TPG. Blackstone alone manages $1.3T+ across manufacturing, healthcare, real estate and financial services. Many licenses those companies cancel will belong to SaaS firms in other PE portfolios. As CNBC put it, “Private equity built the SaaS installed base. It may also be the one that rips it out.”

The loop closes. AI slows revenue, valuation collapses, debt becomes unsustainable and PE uses AI to cut headcount to service it. That’s the Triple Squeeze.

So, what can you actually do?

  • Assess exposure across three dimensions. First, your company. Is it PE-owned, and what vintage? Deals done at peak 2021-2022 valuations with heavy leverage are most precarious, and PitchBook or Crunchbase will tell you. Second, your role. Cost center or revenue engine? When growth stalls, PE defaults to margin maximization, and G&A, parts of marketing, internal tools and legacy product teams are vulnerable. Third, AI itself. How automatable is your day-to-day? If your core workflow is routing information, synthesizing documents or managing processes, the timeline is shorter than you think.
  • Supersize your T-shape. AI’s Achilles’ heel is scarce context. It doesn’t know your customers, your industry or why that one integration keeps breaking. Widen across adjacent roles while deepening your core with AI. Engineers can learn PM, UX and AI-assisted QA. Marketers can automate operational work with agents and build AI creative pipelines. Become an AI multiplier, someone who directs these tools with cross-functional judgment they can’t generate alone. If your employer isn’t giving you enough exposure, don’t wait. Vibe-code a side project. Pressure-test a financial model against your usual approach.
  • Build reputation while you still have a platform. Write publicly, contribute to communities, ship open source. Individual brand is a hedge against rising company-level risk, and far easier to build while employed than while competing with thousands of displaced workers.
  • If exposure is real, move early and deliberately. A wave of PE-backed SaaS layoffs would flood the market with experienced workers chasing a shrinking pool of roles. Those who fare best move while they can still be selective. But “move” doesn’t mean jumping to the first company with AI in its pitch deck. Apply the same structural thinking. Look for durable revenue, a real plan for AI-native competition, and profitability or a credible path.

The bottom line

The SaaSpocalypse narrative everyone’s debating, whether AI coding will kill SaaS, is a sideshow. The real story is financial, structural and already in motion.

Private equity spent a decade and $440 billion buying up software on a thesis that just broke. The debt doesn’t care about AI timelines or market sentiment. It comes due regardless. The only variable PE can control now is cost, and AI just made that variable dramatically easier to cut.

If you work in this industry, especially at a PE-backed company, it’s time for clear-eyed assessment of your exposure before the math makes the decision for you.

The $570K canary: What AI coding agents reveal about enterprise AI’s real gaps

May 4, 2026, 07:00

Boris Cherny, creator of Anthropic’s Claude Code, says he hasn’t written a line of code by hand in months. He shipped 22 pull requests one day, 27 the next, all AI-generated. Company-wide, Anthropic reports that 70 to 90% of its code is now written by AI. CEO Dario Amodei has predicted that AI could handle “most, maybe all” of what software engineers do within months.

And yet Anthropic typically has dozens of software engineering openings, one reportedly carrying $570K in total compensation. As one observer noted, the company is simultaneously predicting the end of the profession and paying top dollar to hire into it.

Meanwhile, during his GTC 2026 keynote, NVIDIA CEO Jensen Huang said that 100% of NVIDIA now uses AI coding tools, including Claude Code, Codex and Cursor, often all three. Then, in a conversation on the All-In Podcast during GTC week, Huang sharpened the point: A $500,000 engineer who doesn’t consume at least $250,000 in AI tokens annually is like “one of our chip designers who says, guess what, I’m just going to use paper and pencil.”

This isn’t cognitive dissonance. It’s a signal. And CIOs who look past the headlines will find a pattern that explains not just where AI coding is going, but where all of enterprise AI is headed.

Tellers, not toll booth workers

The instinct is to see this as an extinction event. AI writes all the code; engineers become toll booth workers, replaced entirely by automation with no complementary role left behind. But the data tells a different story, one I explored in a recent CIO.com article on AGI skepticism.

When ATMs rolled out, bank teller employment didn’t collapse. It doubled, from 268,000 in 1970 to 608,000 in 2006. The machines eliminated the routine transaction. But cheaper branch operations meant banks opened more locations, which created demand for tellers who could handle complex financial conversations. Economists call this Jevons Paradox: When technology makes something more efficient, demand expands rather than contracts.

Software engineers are bank tellers, not toll booth workers. AI agents are eliminating routine implementation: The boilerplate, the CRUD endpoints, the standard test scaffolding. But that efficiency is expanding the total surface area of what “engineering” means. Anthropic isn’t paying $570K for someone to type code. They’re paying for the judgment to orchestrate AI agents that type code: Deciding what to build, evaluating whether the output is correct, governing what gets deployed and maintaining systems that are increasingly written by machines.

Cherny confirmed this shift directly. His team now hires generalists over specialists, because traditional programming specialties are less relevant when AI handles implementation details. The skill premium has moved from writing code to supervising it, from production to orchestration.

The reason AI coding agents work

Here’s the question CIOs should be asking: Why are AI agents succeeding in software development faster than in any other enterprise function?

It’s not because coding models are better than models for customer service, legal review or financial analysis. The underlying LLMs are the same. The difference is that software development already had the infrastructure that every other enterprise function lacks.

Developers didn’t build this infrastructure for AI. They built it for themselves, over decades. But it maps almost perfectly to the six infrastructure gaps that are currently blocking AI agents from moving beyond employee-facing pilots into customer-facing production.

6 gaps the SDLC already solved

1. Governance: Right data, right users, right permissions

In software development, governance is built into the workflow. Branch protection, code review policies and role-based access controls create a clear chain of permission from draft to deploy, whether the author is human or agent.

Most enterprise functions have nothing equivalent. When an AI agent drafts a customer response, accesses a patient record or modifies a financial model, the governance layer (who approved this action, what data was it allowed to see, which policies constrain its output) is either ad hoc or absent. Microsoft’s 2026 Cyber Pulse survey found that while 80% of Fortune 500 companies have deployed AI agents, only 47% have agent-specific security policies in place.

2. Observability: Trace and audit the decision trail

Every line of AI-generated code has a paper trail. Git blame shows who (or what) wrote it. CI/CD pipelines log every build, test and deployment. When something breaks in production, engineers can trace the failure from alert to commit to the specific agent session that produced the change.

Outside of engineering, AI agent decisions are largely opaque. A customer-facing agent that denies a claim or escalates a complaint leaves no audit trail. Without observability, enterprises can’t debug bad outcomes, satisfy regulators or build the trust necessary to expand agent autonomy.

3. Evaluation: Measure correctness at scale

Unit tests, integration tests, type checking, linting and automated QA give software engineering something no other enterprise function has: Continuous, objective measurement of whether AI-generated output is correct. That provides a foundation for proving an agent gets it right.

This is the gap other enterprise functions feel most acutely. DigitalOcean’s 2026 survey of 1,100 technology leaders found that 41% cite reliability as their number one barrier to scaling AI agents. Reliability is an evaluation problem: Without automated, continuous measurement of agent output quality, organizations can’t trust agents enough to put them in front of customers.

4. Memory: Persistent context beyond the context window

Developers take persistent context for granted. Version control, documentation and architectural decision records provide context that survives across sessions, teams and years. An AI coding agent can read the commit history, understand why a design choice was made in 2019, and factor it into today’s implementation.

Most enterprise AI agents operate in a memoryless state. Each customer interaction starts from scratch. Each agent session has no awareness of prior decisions, escalations or context beyond what fits in the context window. This is why employee-facing agents (IT help desks, NOC ticketing) succeed where customer-facing agents stall: Internal users tolerate repeating context. Customers do not.

5. Cost controls: Manage LLM spend across providers

Jensen Huang’s $250K-per-engineer token budget isn’t an abstraction. It’s a real cost management challenge that engineering teams are already navigating. Smart teams route differently depending on the task: Use a lightweight model for boilerplate generation, a reasoning model for architectural decisions and a code-specific model for refactoring. They set token budgets per agent session. They measure cost-per-PR and cost-per-feature, not just cost-per-token.
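The routing discipline described above can be sketched as a small dispatcher. Model tiers, prices and task names here are hypothetical placeholders, not real vendor pricing:

```python
# Hypothetical per-1K-token prices and task-to-tier routing; illustrative only.
PRICE_PER_1K = {"light": 0.001, "reasoning": 0.015, "code": 0.005}

ROUTES = {
    "boilerplate": "light",        # cheap model for rote generation
    "architecture": "reasoning",   # expensive model only for design decisions
    "refactor": "code",            # code-tuned model for restructuring
}

def route(task: str) -> str:
    """Pick a model tier per task instead of sending everything to one model."""
    return ROUTES.get(task, "light")

def session_cost(tasks):
    """Estimated spend for a list of (task, tokens) pairs, so a team can
    track cost-per-PR rather than raw cost-per-token."""
    return sum(tokens / 1000 * PRICE_PER_1K[route(task)] for task, tokens in tasks)

pr = [("boilerplate", 40_000), ("architecture", 8_000), ("refactor", 20_000)]
print(f"${session_cost(pr):.2f}")  # estimated cost for this PR, ≈ $0.26
```

The point is the unit of measurement: once spend is attributed per task and per PR, a token budget becomes something you can enforce rather than discover on the invoice.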

Enterprises deploying AI agents in other functions rarely have this granularity. When Goldman Sachs pointed to AI’s near-zero GDP impact in 2025, the missing variable was cost discipline at the workflow level. Without the ability to route, throttle and measure LLM spend per agent task, scaling agents means scaling costs linearly, which eventually kills ROI.

6. Deployment flexibility: Any cloud, on-prem, no lock-in

In software development, the runtime has always been portable. Code that runs on AWS today can run on Azure tomorrow, or on bare metal in your own data center. Containerization, Kubernetes and infrastructure-as-code tools like Terraform mean that engineering teams can change their minds about where workloads run without rewriting the application. Software has had this mindset for decades.

We’re early enough in this agentic development game that it’s tempting to take shortcuts. Organizations that build on a single hyperscaler’s agent framework find themselves locked into that provider’s model ecosystem, observability tooling and pricing structure. As agentic AI matures, deployment flexibility (the ability to run agents on any cloud, on-prem or across hybrid environments without vendor lock-in) will separate organizations that scale from those that stall.

Sometimes you’ll want agents to run close to your data. Other times, you’ll want agents close to the users. And you’ll want your developers to be able to move back and forth between different agent code bases without having to learn a different framework between them.

What CIOs should watch at Build and I/O

Google I/O and Microsoft Build will dominate May with dueling AI coding announcements. The temptation will be to compare model benchmarks. That’s the wrong lens. The models are converging. The real competition is one layer down, in the infrastructure that makes AI agents viable outside of software development.

CIOs watching these conferences should evaluate each announcement against the six gaps: Is Microsoft closing the governance gap with Azure AI Foundry? Is Google advancing observability through Vertex AI? Which platform is making it easier to evaluate agent output at scale, maintain persistent memory across sessions, control costs at the workflow level and deploy without lock-in?

The company that wins the AI coding war will be the one that builds the infrastructure layer that transfers to every other enterprise function. That’s the real stakes of May’s developer conferences, and it’s the real reason CIOs should be paying attention.

The canary’s message

Software engineers are the first knowledge workers to live inside a fully agentic workflow. They’re the canary in the coal mine for every other enterprise function. And right now, the canary is singing, not dying.

The lesson isn’t that AI coding agents have made engineers obsolete. It’s that AI coding agents work because engineers already built the infrastructure that makes agents trustworthy. Governance, observability, evaluation, memory, cost controls and deployment flexibility: These aren’t nice-to-haves. They’re the reason Anthropic can ship 27 AI-generated pull requests in a day and sleep at night.

Every other enterprise function will need to build its own version of that infrastructure before AI agents can move from employee-facing pilots to customer-facing production. The models aren’t the bottleneck. The scaffolding around them is.

Anthropic paying $570K for a software engineer whose job might not exist in a year isn’t a contradiction. It’s Jevons Paradox. And it’s the most expensive leading indicator in enterprise AI.

How the EU’s NIS2 directive is changing how CIOs think about digital infrastructure

April 23, 2026, 08:00

In conversations I’ve had with CIOs over the past year, there’s been a noticeable shift in how NIS2 (Network and Information Security Directive 2) is being discussed. It used to be filed away as another regulatory hurdle to clear, but now it’s prompting CIOs and their teams to think a little deeper about how well they understand the systems they depend on. For a long time, risk has been largely framed within the boundaries of the organization — something that could be managed through internal controls, policies and audits. But that no longer reflects how digital services are built or delivered. Most organizations I encounter rely on a web of providers spanning cloud platforms, data centers, network operators and software vendors, all working together to create a “patchwork” ecosystem. NIS2 is different because it acknowledges that reality and, in doing so, it’s forcing a broader and sometimes more uncomfortable reassessment of where risk really sits.

What stands out to me is that NIS2 doesn’t just focus on individual accountability, but on the very definition of resilience itself. It recognizes that disruption rarely originates within a single process, or even a single organization. More often, it emerges from the connections between them; from unseen dependencies, indirect relationships and assumptions about how systems will behave under pressure. That’s novel, because it moves the conversation away from whether individual systems are secure, and toward whether the overall architecture those systems sit within can continue to function when something inevitably goes wrong. In that sense, NIS2 is less about tightening cybersecurity controls and more about encouraging a different way of thinking, where resilience is shaped as much by how infrastructure is designed and connected as it is by how it is protected.

NIS2 expands the definition of risk beyond the enterprise

One of the most immediate impacts I’m seeing from NIS2 is how it challenges long-held assumptions about control. Speak to any CIO, and they’ll usually talk about securing what sits within their own environments — their applications, services and data. But in practice, very little of today’s digital estate is fully owned because it’s so distributed among third parties with countless links and dependencies. Virtually all business services depend on layers of external providers, each with its own dependencies, architectures and risk profiles. According to the World Economic Forum, the top supply chain risk in 2026 is the inheritance risk — the inability to ensure the integrity of third-party software, hardware or services. NIS2 brings that into sharp focus by extending accountability beyond direct suppliers to include the wider ecosystem that supports them. In essence, it prompts businesses to shift from asking “are we secure?” to “how secure is everything we rely on to operate?”

That’s quite a challenge, because it’s not enough for businesses to simply know their suppliers — they need to understand how deeply interconnected those relationships are. In many cases, the real exposure sits several steps removed, in the providers behind your providers or in shared infrastructure that underpins multiple services at once. The “uncomfortable reassessment” I mentioned earlier is the squaring of this circle — how many organizations have full visibility into that sprawling landscape, let alone the means to control it?

NIS2 is compelling organizations to map dependencies more rigorously, to ask harder questions of their partners and network infrastructure, and to recognize that resilience is only as strong as the most fragile link in the chain. Yet the WEF finds that in 2026, only 33% of organizations map their entire IT supply chain to gain this visibility. And even then, the added risk of unknown service providers, as when using the public internet, where data pathways are neither visible nor controllable, is difficult to quantify.
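The mapping exercise described above is essentially a transitive dependency walk: exposure often sits in the providers behind your providers. A minimal sketch, using an entirely hypothetical supplier graph:

```python
from collections import deque

def all_dependencies(graph, org):
    """Breadth-first walk that surfaces every direct and indirect
    provider an organization depends on (hypothetical data)."""
    seen, queue = set(), deque(graph.get(org, []))
    while queue:
        provider = queue.popleft()
        if provider not in seen:
            seen.add(provider)
            queue.extend(graph.get(provider, []))
    return seen

# Hypothetical ecosystem: two direct suppliers conceal three more.
suppliers = {
    "acme-corp": ["cloud-a", "saas-b"],
    "cloud-a": ["dns-x", "transit-y"],
    "saas-b": ["cloud-a", "payments-z"],
}

print(sorted(all_dependencies(suppliers, "acme-corp")))
# A supplier list of 2 expands to 5 distinct dependencies.
```

Even this toy graph shows why "know your suppliers" understates the problem: the full dependency set is larger than the contract list, and shared providers (here, "cloud-a") appear on multiple paths at once.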

Compliance is the trigger, but architecture is the challenge

What I find interesting about NIS2 is that it goes deeper than compliance — it’s trying to trigger a shift in culture. It’s relatively straightforward to introduce new policies, expand reporting requirements or formalize supplier assessments. But what happens when those requirements collide with the reality of how modern IT environments are built? Many organizations simply don’t have a clear, end-to-end view of how their services are delivered, how data flows between providers or how incidents might spread like wildfire across the ecosystem they depend on. NIS2 asks CIOs to look beyond governance frameworks and examine whether their operating models support the level of oversight and responsiveness the directive expects.

And that is where the architecture question becomes essential. It’s one thing to require suppliers to report incidents or meet certain security standards; it’s another thing entirely to ensure that the underlying infrastructure is designed to absorb disruption without cascading failure. In my experience, this is where many organizations begin to realize that resilience cannot be layered on afterwards. It must be built into how systems are structured, how dependencies are managed and how connectivity is established between environments. NIS2 may define what needs to be done, but it doesn’t prescribe how to do it. That responsibility sits with CIOs, who now have to translate regulatory intent into practical design decisions about where workloads run, how services interconnect and how failure is contained when it occurs.

Infrastructure design is now resilience design

What this ultimately leads to is a big infrastructure rethink. I’m privileged to have had some interesting discussions with CIOs and other executives about this very topic, so I know that resilience is beginning to be understood as more than a set of security controls. Connectivity is now at the heart of resilience, and in that sense, NIS2 has succeeded in getting organizations to think differently about what resilience really means. If a service depends on a single cloud region, a single network path or a tightly coupled set of providers, then no amount of policy or monitoring will prevent disruption when one of those elements fails. I’m pleased to see organizations starting to question these assumptions — not just asking whether systems are secure, but whether they are structured in a way that allows them to continue operating under stress. That shift in thinking does away with the abstract theory of resilience and defines it as something that can be designed and architected.

From a connectivity perspective, this means building in diversity at every level. Distributing workloads across geographically separate locations, establishing multiple, independent network paths and avoiding unnecessary concentration of critical services all contribute to a more resilient architecture. Interconnection plays a starring role here as the mechanism that allows different parts of the digital ecosystem to communicate in controlled, redundant and predictable ways. When designed properly, this kind of architecture limits the blast radius of any single point of failure and makes it easier to maintain service continuity even when parts of the system are down or under strain. The real takeaway here is that resilience is not something any single organization can achieve in isolation. It emerges from the collective design of the entire ecosystem, where each participant contributes to the overall stability of the services they all depend on.
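The "multiple, independent network paths" principle can be reduced to a simple failover policy: prefer the primary route, but always have a diverse alternative ready. A toy sketch, with hypothetical health flags standing in for real monitoring signals:

```python
def pick_path(paths):
    """Return the first healthy path in priority order.
    The 'healthy' flags are hypothetical monitoring inputs."""
    for path in paths:
        if path["healthy"]:
            return path["name"]
    raise RuntimeError("no healthy path available")

# Primary region down; traffic falls back to a diverse route.
paths = [
    {"name": "primary-region", "healthy": False},
    {"name": "secondary-region", "healthy": True},
    {"name": "internet-exchange", "healthy": True},
]
print(pick_path(paths))  # secondary-region
```

The design point is that the fallback only helps if the alternatives are genuinely independent: two "paths" that share a data center or transit provider fail together, which is exactly the concentration risk the paragraph above warns against.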

When regulatory pressure gives way to strategic opportunity

The building blocks are already there. Practices like supplier due diligence, security certifications and business continuity planning are not new. What NIS2 does is raise the bar on how consistently and how deeply they are applied. It also brings a level of structure to conversations that were previously fragmented, particularly when it comes to expectations between partners. And therein lies the strategic upside. Organizations that can clearly demonstrate how they manage risk across their supply chains, how they design for resilience and how they respond to disruption are in a stronger position, not just from a regulatory standpoint, but in how they engage with customers and partners. In some sectors, we’re already seeing this play out through increased requests for transparency, self-assessments and proof of compliance. That trend is only going to accelerate. For CIOs, it’s a golden opportunity to move beyond a defensive posture and position resilience as a key competitive differentiator. It becomes a way to build trust, strengthen relationships and support more sustainable growth, rather than simply a requirement to satisfy regulators.

NIS2 may be the catalyst, but the underlying change runs deeper. It’s pushing CIOs to think beyond compliance and toward a more structural understanding of risk that reflects how digital services operate today.

This article is published as part of the Foundry Expert Contributor Network.

6 ways agentic AI will reshape the enterprise software market

14 April 2026, 07:01

Microsoft CEO Satya Nadella raised some eyebrows recently when he predicted that traditional business applications will “collapse” in the agentic AI era.

Investor concerns that agentic AI could disrupt the enterprise software market came to a head in early February when Anthropic’s release of Cowork — a clear shot across the bow at Microsoft Copilot — triggered a massive selloff in U.S. software company stocks.

But is the “SaaS-pocalypse” simply a Wall Street phenomenon or does it impact enterprise CIOs who have invested millions into enterprise software systems that run mission-critical business processes? Is the “SaaS is dead” rhetoric real or overblown?

There’s no question that agentic AI is a game changer, but maybe in unexpected ways. Here’s what expert industry observers are saying.

Enterprise incumbents will gain early advantages

Industry outlook: Incumbent market leaders will likely retain much of their dominance for the foreseeable future by embedding agents into their platforms.

In terms of the fate of the software market, Forrester analyst Kate Leggett says, “There’s investor and company valuations and then there’s the reality of what happens in large enterprises, as well as the timeline that these changes will happen.”

“Core applications are not going to go away anytime soon,” she tells CIO, although there will be erosion around the edges. In terms of timeline, Leggett says, “It could take decades for these workloads to be completely taken over by AI agents.”

Industry consultant William Flaiz adds, “Executive-level decisions are not being made to rip out CRM systems.” However, enterprise CIOs are highly incentivized to add agentic AI to existing platforms to squeeze more value from their current multimillion-dollar investments. “They’re looking to see what they can do better using the tools they have available,” he says.

“There’s a lot of black-and-white thinking about the level of disruption that these incumbents are facing,” says Alex Demeule, senior analyst at Technology Business Review Inc. (TBRI). “Obviously AI is going to have a big impact on software vendors. But when we look at what the future holds, framing it in a 5- to 10-year span, they’re positioned a lot better to make the pivot into the AI era compared to what we’re seeing in the stock price.”

Demeule adds, “When talking about large enterprises, I still think the risk of handing off autonomy to an agentic system is not really there yet.” His prediction is that agentic AI will be rolled out slowly and that humans will remain in the loop for many years to come.

Agentic AI will transform pricing models

Industry outlook: Agentic AI will trigger a seismic shift from subscription-based pricing to consumption- or outcome-based pricing.

Dana Gardner, president and principal analyst at Interarbor Solutions, says, “The short to medium-term concern is less about ripping and replacing systems of record, and more about the end of the current level of pricing power from these vendors.”

He adds, “Savvy CIOs will be leveraging AI to reduce the total cost of IT.” AI agents have the capacity to understand consumption and usage patterns of business applications and CIOs will be able to turn those insights into more favorable contracts, says Gardner.

Bain & Co., in a report on the impact of AI on the SaaS market, says, “If an agent replaces a human task, customers will expect to pay based on outcomes, not log-ons. Leaders, such as Intercom and Salesforce, are already shifting in this direction. The fundamental shift is to stop charging for access and start charging for work done.”

And IDC, in its FutureScape: Worldwide Agentic AI 2026 Predictions report, says that by 2028, pure seat-based pricing will be obsolete, with 70% of software vendors refactoring their pricing strategies around new value metrics, such as consumption, outcomes, or organizational capability.

Forrester’s Leggett says this shift away from subscription pricing could play out in a number of ways. For example, a CIO with a subscription for 100 seats could swap 10 or 20 of those seats for consumption-based or outcome-based pricing. Vendors will likely offer different licensing tiers or flex options that include some type of agentic-based pricing. Or consultants could come in and offer to manage agentic AI implementations and charge a percentage of the outcomes.
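Leggett's seat-swap scenario is easy to put numbers on. A minimal sketch with entirely illustrative prices, comparing a pure seat-based bill against a blended seat-plus-consumption contract:

```python
def blended_cost(seats, seat_price, agent_tasks, price_per_task):
    """Hypothetical monthly bill: remaining seats plus
    consumption-priced agent work (all figures illustrative)."""
    return seats * seat_price + agent_tasks * price_per_task

# 100 seats at $150/month, fully seat-based.
pure_seats = blended_cost(100, 150.0, 0, 0.0)

# Swap 20 seats for agents handling 5,000 tasks at $0.30 each.
blended = blended_cost(80, 150.0, 5000, 0.30)

print(pure_seats, blended)  # 15000.0 vs. 13500.0
```

Whether the blended contract actually saves money depends on task volume: at these made-up rates, the break-even point for the 20 swapped seats is 10,000 agent tasks per month, which is why usage visibility matters so much in the negotiation Gardner describes.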

Software platforms will merge, creating new rivalries

Industry outlook: Because AI agents don’t care where data comes from, the lines between traditional enterprise software categories like CRM and ERP will blur.

To be effective, AI agents need access to data no matter where it resides. SaaS vendors recognize this imperative and are breaking down the lines of demarcation between CRM, ERP, IT service management, and other categories.

Leggett points out that vendors such as Oracle and Microsoft are building unified data platforms that integrate with the open-source Model Context Protocol (MCP) to support complex AI-based workflows.

Oracle offers an integrated suite of cloud-based ERP and CRM applications along with a fully managed agentic platform. Microsoft offers both ERP and CRM functionality under its Dynamics 365 umbrella, as well as industry-specific agentic offerings based on small language models (SLMs), which are more lightweight and cost-effective than LLMs.

SAP is integrating its Signavio business process management suite, its LeanIX SaaS tool for enterprise architecture management, and its Joule AI agent into one cohesive system.

Salesforce is merging its Mulesoft integration and automation platform-as-a-service offering with its Data360 customer data platform and its Agentforce AI platform.

IT service management (ITSM) powerhouse ServiceNow recently completed its acquisition of agentic AI platform vendor Moveworks and is challenging Salesforce in CRM.

Winners and losers

Industry outlook: Agentic AI will have a major impact on point product vendors; those with a generic app will struggle; those with a domain-specific tool are better positioned to survive.

Forrester’s Leggett divides the enterprise software market into three segments. She says that simple point products such as workflow, spreadsheet, or lightweight project management apps “are going to go away in fairly short order,” because they are easy to replicate and don’t offer differentiation.

Highly verticalized apps are more insulated from disruption because they deliver deep domain expertise and integrations with adjacent systems such as CAD or medical imaging. Examples include Epic and Cerner for electronic healthcare record (EHR) management, IQVIA for pharmaceuticals and life sciences, or Procore in construction.

The major CRM platform players have built-in advantages, including a moat around their data, industry-specific knowledge and workflows, deep partner networks, industry best practices as well as expertise in areas such as regulatory compliance, according to Leggett.

TBRI’s Demeule also points out that these incumbent vendors have survived precisely because they have been able to successfully pivot each time a potential disruption occurred, whether that’s moving from on-prem to the cloud, or shifting from perpetual licenses to subscriptions.

Vibe coding to disrupt certain segments

Industry outlook: Vibe coding could disrupt SaaS vendor dominance, empowering end users to spin up their own agents.

Vibe coding, the use of AI agents to write software based on simple natural language prompts, takes the low-code, no-code movement to another level.

With vibe coding, end users can ask OpenAI ChatGPT, Google Gemini, Anthropic Claude, Cursor Chat, GitHub Copilot, or others to build a productivity app outside the boundaries of a traditional CRM or ERP platform, for example.

Leggett says vibe coding is a legitimate threat because it can potentially enable workers to be more productive, while sidestepping traditional enterprise software platforms, which for many end users are seen as bloated and overly complicated.

On the other hand, organizations that are not technologically sophisticated might not have the skills or the confidence to build and deploy their own agents that impact mission-critical workflows.

“Vibe coding in terms of a point solution is something we see as being disruptive,” says Demeule. “But it’s the ability to build point solutions that’s more at risk of being disrupted versus something managing an entire customer database or supply chain database.”

The agentic orchestration layer emerges

Industry outlook: Traditional SaaS applications will still exist, but they will likely be hidden behind an agentic orchestration layer.

Analysts agree that the user interface of the future will not be the traditional SaaS platform; it will be agentic. That doesn’t mean the underlying CRM or ERP will go away, but it will be hidden.

IDC analyst Bo Lykkegaard says, “Complexity is the Achilles’ heel of the SaaS model. Each SaaS application demands its own learning curve and user interface, often used sporadically and inefficiently. AI offers a compelling remedy. Instead of navigating multiple dashboards, users could interact with agent-driven, conversational interfaces that perform tasks across systems. The result? AI as the new interface layer, which is one that abstracts away complexity, automates repetitive processes, and redefines how enterprises consume software.”

TBRI’s Demeule envisions an orchestration agent that assigns tasks to an LLM, an SLM, or a robotic process automation (RPA) tool, based on which approach is more appropriate in terms of efficiency, as well as other considerations such as cost and energy usage.
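The orchestration agent Demeule describes is, at its core, a routing policy. A toy dispatcher, with hypothetical task attributes and thresholds, might look like this:

```python
def route_task(task):
    """Toy orchestration policy (hypothetical thresholds): send
    deterministic work to RPA, routine language work to an SLM,
    and open-ended reasoning to an LLM."""
    if task["deterministic"]:
        return "rpa"          # scripted, repeatable process
    if task["complexity"] <= 3:
        return "slm"          # cheaper, lower-energy model
    return "llm"              # full reasoning model

tasks = [
    {"name": "invoice entry", "deterministic": True, "complexity": 1},
    {"name": "ticket triage", "deterministic": False, "complexity": 2},
    {"name": "contract review", "deterministic": False, "complexity": 8},
]
print([(t["name"], route_task(t)) for t in tasks])
```

A production orchestrator would weigh cost, latency, and energy per call rather than a single complexity score, but the shape is the same: the agent picks the cheapest tool that can do the job, escalating to heavier models only when needed.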

The question that will play out over the next several years is whether enterprise CIOs obtain that functionality from their current providers or from disruptors like OpenAI, Anthropic, Palantir, UiPath, or others.
