
Why the future of software is no longer written — it is architected, governed and continuously learned

May 7, 2026, 08:00

We are entering a decade where software is no longer just an enabler of business — it is the primary mechanism through which intelligence is created, scaled and monetized across the enterprise.

For CIOs, this is not another technology cycle. This is a leadership inflection point.

Across boardrooms, investor discussions and strategic planning sessions, the conversation is shifting rapidly:

  • From “How fast can we build software?”
  • To “How intelligently can we design, govern and scale decision systems?”

This is a fundamental reframing of the CIO mandate.

The organizations that recognize this shift early will not just move faster — they will compound intelligence faster, creating asymmetric advantage in markets where speed alone is no longer sufficient.

The following perspective must therefore be read not as a technology trend, but as a strategic operating model shift for CIOs entering 2026 and beyond.

The next inflection point: Software development is no longer about code

Over the past two decades, software development has evolved through predictable phases — manual coding, agile acceleration, cloud-native scaling and DevOps automation. But as we enter 2026, that trajectory is no longer linear.

We are now witnessing a structural break.

Generative AI and agentic systems are not simply accelerating development — they are redefining the very nature of software creation, ownership and accountability.

This shift mirrors the broader transformation outlined in the CIO 3.0 paradigm (“CXO 3.0: How intelligent leadership will redefine enterprise value”), where technology leadership has moved from operating systems to architecting enterprise intelligence itself.

In software development, this translates into a fundamental question for boards, CIOs, CTOs, CISOs and chief AI officers (CAIOs): Are we still building software or are we now orchestrating intelligence systems that build themselves?

What makes this transition particularly consequential is that it is already happening quietly but decisively.

Across high-performing organizations:

  • AI-generated code is already contributing meaningfully to production systems
  • Development cycles are compressing from weeks to days — and in some cases, hours
  • Decision-making is increasingly embedded directly into software systems rather than layered on top

Yet, in many enterprises, governance, accountability and operating models have not kept pace.

This gap between capability acceleration and governance maturity is where both the greatest opportunity and the greatest risk now reside.

2 forces reshaping software development in 2026

1. AI across the full software development lifecycle (SDLC)

Generative AI has moved beyond coding assistance into end-to-end lifecycle orchestration, consistent with broader enterprise AI adoption trends where organizations are embedding AI across multiple functions (McKinsey, “The state of AI in 2025: Agents, innovation and transformation”):

  • Planning & Design → AI-driven requirements synthesis, architecture generation
  • Development → Code generation, refactoring, pattern enforcement
  • Testing → Autonomous test case creation and validation
  • Deployment → Intelligent CI/CD pipelines with adaptive optimization
  • Maintenance → Self-healing systems, anomaly detection, auto-remediation

The developer is no longer just a coder. The developer is becoming a curator of intent, constraints and outcomes.

The compression of the SDLC

What historically required:

  • Weeks of design
  • Months of development
  • Iterative testing cycles

Can now be orchestrated through multi-agent AI systems operating in parallel.

This introduces a new dynamic: Software development is no longer a sequential process — it is becoming a continuously adaptive system.

For CIOs, this means:

  • Traditional governance checkpoints may become bottlenecks
  • Legacy approval workflows may inhibit innovation velocity
  • Organizational design must evolve alongside technical capability

2. Intensifying competition in AI coding ecosystems

The competitive landscape is accelerating rapidly, particularly across ecosystems led by:

  • Microsoft (GitHub Copilot, Azure AI)
  • Google (Gemini, Vertex AI, developer tooling)
  • Apple (on-device AI, developer ecosystem integration)

Events like Google I/O and Microsoft Build are no longer just developer conferences — they are strategic battlegrounds for control over the future of software creation.

The stakes are clear:

  • Whoever controls the AI development stack controls the next generation of digital economies
  • Whoever defines the developer experience defines the innovation velocity of entire ecosystems

Platform gravity is becoming strategic gravity

The implication for CIOs is profound.

Choosing a development ecosystem is no longer a tooling decision — it is a strategic alignment decision that determines:

  • Data gravity
  • Talent alignment
  • Innovation velocity
  • Long-term vendor dependency

In effect: Your AI development platform choice is becoming your enterprise’s innovation ceiling.

From SDLC to IDLC: The rise of the Intelligent Development Lifecycle

Traditional SDLC frameworks are becoming obsolete.

In their place, a new paradigm is emerging: The Intelligent Development Lifecycle (IDLC)

This is not simply an evolution — it is a redefinition of how software is conceived, built and governed.

Key characteristics of IDLC:

  • Intent-driven development: Developers define what and why, not just how
  • Agentic execution: AI agents perform multi-step development tasks autonomously
  • Continuous learning loops: Systems improve based on real-time feedback and usage patterns
  • Embedded governance: Compliance, security and auditability are built into execution (NIST AI Risk Management Framework)
  • Decision-centric architecture: The primary output is not code — it is decision capability

IDLC as a leadership operating model

IDLC is not just a development methodology.

It is an enterprise operating model for intelligence creation.

It changes:

  • How teams are structured
  • How accountability is defined
  • How value is measured

For CIOs, adopting IDLC means shifting from:

  • Managing delivery pipelines
  • To governing decision supply chains

The emerging reality: Developers as intelligence orchestrators

As AI agents take over repetitive and even complex coding tasks, the developer role is undergoing a profound transformation.

From:

  • Writing code line by line
  • Debugging manually
  • Managing environments

To:

  • Designing system intent
  • Governing AI agents
  • Ensuring ethical and secure outcomes
  • Orchestrating multi-agent collaboration

This is not a reduction in developer relevance.

It is an elevation of developer responsibility.

Talent transformation is now a CIO priority

This shift introduces a critical challenge:

Most current developer skill models are not aligned to this future state.

CIOs must now proactively invest in:

  • AI-native engineering skills
  • Prompt and intent engineering
  • Model governance literacy
  • Cross-disciplinary collaboration

Because the future developer is not just technical — they are decision designers.

The CXO convergence: Why this is no longer just a CTO conversation

The transformation of software development is not confined to engineering teams.

It now sits at the intersection of four critical leadership domains, reflecting the broader evolution of CIOs into strategic business leaders shaping enterprise outcomes (State of the CIO research):

CIO: The intelligence architect

  • Aligns AI-driven development with enterprise strategy
  • Ensures scalability and integration across platforms
  • Drives value realization from software investments

CTO: The innovation orchestrator

  • Defines architecture patterns for AI-native development
  • Leads platform engineering and developer experience
  • Drives competitive differentiation

CISO: The trust enforcer

  • Ensures secure AI-generated code
  • Governs data lineage and model integrity
  • Mitigates risks from autonomous systems

CAIO: The intelligence governor

  • Governs responsible AI use across the enterprise
  • Ensures explainability and auditability of AI-driven development
  • Oversees model lifecycle, risk and regulatory alignment

This convergence reflects a broader reality: Software development is no longer a technical function — it is an enterprise risk, value and governance function.

Introducing a new framework: SAFE-AI DevOps

To navigate this transformation, enterprises require a disciplined, Board-ready approach.

SAFE-AI DevOps Framework (Secure, Adaptive, Federated, Explainable AI Development Operations)

This is a next-generation operating model for AI-driven software development.

1. Secure by Design (S)

  • AI-generated code must meet zero-trust security principles
  • Continuous vulnerability scanning integrated into AI pipelines
  • Secure prompt engineering and model access controls

CISO-led mandate: Trust is the new runtime environment

2. Adaptive Intelligence (A)

  • Systems learn and evolve continuously
  • AI models adapt to changing requirements and environments
  • Feedback loops drive improvement across lifecycle

CIO-led mandate: Learning velocity is the new productivity metric

3. Federated Development (F)

  • Multi-agent collaboration across distributed environments
  • Integration across cloud, edge and on-prem ecosystems

CTO-led mandate: Scale innovation without losing control

4. Explainable Execution (E)

  • Every AI-generated decision must be traceable
  • Audit trails for code generation and deployment

CAIO-led mandate: Explainability is the new compliance baseline

5. AI-Native DevOps (AI)

  • Autonomous CI/CD pipelines
  • Predictive deployment optimization
  • Self-healing systems and automated incident response

Cross-CXO mandate: Automation is no longer optional — it is foundational
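To make the framework tangible, the five SAFE-AI gates can be sketched as a policy-as-code check that runs before an AI-generated change is promoted. This is a minimal illustration under stated assumptions, not a reference implementation: the `AIChange` record shape and the specific gate criteria are hypothetical and would be adapted to your own pipeline and evidence sources.

```python
from dataclasses import dataclass

@dataclass
class AIChange:
    """A proposed AI-generated change (hypothetical record shape)."""
    author_agent: str
    scan_passed: bool            # Secure: vulnerability scan result
    feedback_incorporated: bool  # Adaptive: learning-loop evidence attached
    target_environments: list    # Federated: where the change will run
    audit_trail: list            # Explainable: generation/decision log entries
    pipeline_automated: bool     # AI-native: no manual deployment steps

def safe_ai_gate(change: AIChange) -> list:
    """Return the list of SAFE-AI checks the change fails (empty = pass)."""
    failures = []
    if not change.scan_passed:
        failures.append("S: security scan failed")
    if not change.feedback_incorporated:
        failures.append("A: no learning-loop evidence")
    if not set(change.target_environments) <= {"cloud", "edge", "on-prem"}:
        failures.append("F: unrecognized target environment")
    if not change.audit_trail:
        failures.append("E: no audit trail for generated code")
    if not change.pipeline_automated:
        failures.append("AI: manual deployment step detected")
    return failures
```

The design choice worth noting: the gate returns the full list of failures rather than stopping at the first, so the evidence gap itself becomes measurable across the portfolio of AI-generated changes.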

The competitive battlefield: Ecosystems, not tools

The next phase of competition is not about individual tools.

It is about ecosystem dominance, as hyper-scalers invest heavily in AI infrastructure, platforms and developer ecosystems (McKinsey Global Tech Agenda 2026).

Key battlegrounds:

  • Developer platforms
  • Model ecosystems
  • Data gravity
  • AI infrastructure

As highlighted in a previous CIO.com perspective, infrastructure itself is becoming a strategic intelligence decision, not just an operational one.

The risk dimension: AI-generated code is not inherently safe

While productivity gains are undeniable, risks are escalating:

  • Hallucinated code vulnerabilities
  • Licensing and IP violations
  • Model bias and ethical concerns
  • Regulatory exposure (EU AI Act, NIST AI RMF)

This creates a new category of risk: AI Development Risk

This requires structured governance aligned with emerging regulatory and risk frameworks (NIST AI Risk Management Framework).

Blockchain and quantum: The next convergence layer

As we move beyond 2026, two additional forces will reshape AI-driven development:

Blockchain

  • Immutable audit trails for AI-generated code
  • Smart contracts governing software execution

Quantum Computing

  • Breakthroughs in optimization and cryptography

Together with AI, they form a converging intelligence stack that will redefine software engineering, consistent with broader enterprise transformation trends toward intelligent systems.

Boardroom implications: What investors and directors must understand

The shift to AI-driven development is not just technical — it is financial.

Research shows AI delivers the greatest impact when integrated into enterprise strategy rather than siloed initiatives (BankInfoSecurity: C-Suite Leaders Must Rewire Businesses for True AI Value).

Key board-level questions:

  • How much of our software is AI-generated?
  • What governance exists for AI-generated decisions?
  • How do we ensure security and compliance at scale?
  • What is our dependency on external AI ecosystems?
  • How does this impact enterprise valuation?

Because the reality is: Software is no longer a cost center — it is a capital engine.

The new metrics: Measuring success in AI-driven development

Traditional metrics are insufficient.

Old metrics:

  • Lines of code
  • Development velocity
  • Bug counts

New metrics:

  • Decision throughput
  • AI-assisted productivity ratio
  • Model governance maturity
  • Security incident reduction
  • Time-to-intelligence (TTI)

The leadership mandate for 2026 and beyond

The transformation of software development demands a new leadership mindset.

Three defining mandates for 2026:

  1. Architect intelligence, not just applications
  2. Govern AI as an enterprise asset
  3. Align ecosystems with strategy

The future of software is a leadership decision

As we look ahead to 2026 and beyond, one reality becomes undeniable: The future of software development will not be decided by developers alone.

It will be shaped by:

  • CIOs who architect intelligence
  • CTOs who orchestrate innovation
  • CISOs who enforce trust
  • CAIOs who govern AI responsibly
  • Boards that understand the strategic implications

Because in this new era, code is no longer the product. Intelligence is. And the organizations that learn fastest will not just build better software — they will redefine entire industries.

This article is published as part of the Foundry Expert Contributor Network.


When AI writes code, it joins the software supply chain

May 7, 2026, 07:00

AI tools designed to assist developers are no longer staying in the background. They are starting to shape what actually gets built and deployed.

They open pull requests.

They modify dependencies.

They generate infrastructure templates.

They interact directly with repositories and CI/CD pipelines.

At some point, this stops being assistance.

It becomes participation.

And participation changes the problem.

When assistance becomes participation

The shift from generative to agentic behavior is the inflection point.

Earlier tools operated inside a tight loop. A developer prompted. The system suggested. The developer reviewed. Nothing moved without human intent.

That boundary is eroding.

Newer systems propose changes, update libraries, remediate vulnerabilities and interact with development pipelines with limited human intervention. They don’t just accelerate developers. They begin to shape the artifacts that move through the software supply chain — code, dependencies, configurations and infrastructure definitions.

That makes them something different.

Not tools.

Participants.

And once something participates in the supply chain, it inherits the same question every other participant does:

How is it governed?

A simple scenario

Consider a common pattern already emerging in many environments.

An AI system identifies a vulnerable dependency.

It opens a pull request updating the library.

A workflow triggers automated tests.

The change is promoted into a staging environment.

Four steps.

No human review.

No explicit governance checkpoint.

Each step is individually valid. Nothing looks wrong in isolation.

But taken together, they create something fundamentally different: A system that can change enterprise software without human intent being re-established at any point.

Research from Black Duck found that while 95% of organizations now use AI in their development process, only 24% properly evaluate AI-generated code for security and quality risks.

This is autonomous change propagation across the software supply chain.
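The four-step scenario above is also detectable. A minimal sketch, assuming a hypothetical record shape for pull requests exported from your SCM, flags AI-authored changes that reached an environment without any human approval re-establishing intent:

```python
def unreviewed_ai_changes(pull_requests):
    """Flag PRs opened by an AI/bot identity that were promoted to an
    environment without a human approval anywhere in the chain.
    The record shape here is illustrative, not a real SCM API."""
    flagged = []
    for pr in pull_requests:
        ai_authored = pr["author_type"] == "ai_agent"
        human_approved = any(
            r["type"] == "human" and r["approved"]
            for r in pr.get("reviews", [])
        )
        if ai_authored and pr.get("deployed_to") and not human_approved:
            flagged.append(pr["id"])
    return flagged
```

A report like this does not block anything by itself; its value is making the rate of autonomous change propagation visible before deciding where a governance checkpoint belongs.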

The “human-in-the-loop” fallacy

Many organizations rely on a “human-in-the-loop” (HITL) requirement as a safety mechanism for AI-generated code.

At low volumes, this works.

At scale, it breaks.

When an AI system generates dozens of pull requests in a short window, review becomes a throughput problem, not a control. The cognitive load of validating machine-generated logic exceeds what a human can realistically govern.

What remains is not oversight, but a checkpoint.

And checkpoints without effective review are not controls.

The governance gap

Most governance models assume a stable truth: Humans are the primary actors.

Controls tied identity to individuals, approvals to intent and audit trails to accountability.

Even automation systems are treated as extensions of human intent — predictable, bounded and deterministic.

AI systems break that model.

They can generate new logic, act on it and propagate changes across systems. Yet in most environments, they are still governed as if they were static tools.

That mismatch is the gap.

Machine identity is no longer what it was

One way to see this clearly is through identity.

Every interaction an AI system has — repository access, pipeline execution, API calls — requires credentials. In practice, these systems operate as machine identities.

But they are not traditional machine identities.

A service account executes predefined logic. Its behavior is known in advance. Its risk is bounded by what it was configured to do.

An AI-driven system is different. It generates the logic it then executes.

It can propose new code paths, interact with new systems and trigger actions that were not explicitly predefined at the time access was granted.

That is a category change.

Not just a new identity type, but a new attack surface: Identities that can generate the behavior they are authorized to execute.

The World Economic Forum has identified this class of non-human identity as one of the fastest-growing and least-governed security risks in enterprise AI adoption.

Measuring exposure before solving it

Most organizations already track access-related metrics. Those metrics were designed for human-driven systems.

They are no longer sufficient.

If AI systems are participating in the software supply chain, organizations need to measure where and how that participation introduces risk.

A few signals matter immediately:

  • AI-generated artifact footprint: What portion of code, dependencies or infrastructure definitions in production originates from AI-assisted processes?

  • Authority scope of AI systems: What systems can these identities access — and what actions can they take across repositories and pipelines?

  • Autonomous change rate: How often are changes introduced and propagated without explicit human review?

  • Cross-system interaction surface: How many systems does a single AI workflow touch as part of normal operation?

  • Auditability of AI-driven actions: Can changes be traced cleanly to a system, workflow and triggering context?

These are not abstract concerns. They are measurable.

And until they are measured, they are not governed.
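Several of these signals can be computed directly from commit metadata. The sketch below is illustrative and assumes hypothetical record shapes from your SCM and CI systems; authority scope would additionally require IAM and permission data, so it is omitted here:

```python
def exposure_metrics(commits, ai_identities):
    """Compute rough AI supply-chain exposure signals from commit records.
    Record shapes are hypothetical; adapt to your SCM/CI exports."""
    total = len(commits)
    ai_commits = [c for c in commits if c["author"] in ai_identities]
    # Autonomous change rate: AI commits with no human review on record
    autonomous = [c for c in ai_commits if not c.get("human_reviewed")]
    # Cross-system interaction surface: distinct systems touched by AI workflows
    systems = set()
    for c in ai_commits:
        systems.update(c.get("systems_touched", []))
    # Auditability: AI commits traceable to a triggering context
    traceable = [c for c in ai_commits if c.get("trigger_context")]
    n_ai = len(ai_commits)
    return {
        "ai_artifact_share": n_ai / total if total else 0.0,
        "autonomous_change_rate": len(autonomous) / n_ai if n_ai else 0.0,
        "cross_system_surface": len(systems),
        "auditable_share": len(traceable) / n_ai if n_ai else 0.0,
    }
```

Even rough numbers like these turn the governance conversation from anecdote into baseline: once the autonomous change rate is on a dashboard, its trend can be managed.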

The regulatory imperative

This is not just a technical shift. It is a governance and liability shift.

As regulatory expectations evolve — from AI accountability frameworks to cybersecurity disclosure requirements — organizations are increasingly responsible for explaining and controlling automated decisions inside their environments.

If an AI-driven change introduces a vulnerability or leads to a material incident, “the system generated it” will not be an acceptable answer.

Accountability will still sit with the enterprise.

That raises the bar: Governance must extend to how autonomous systems act, not just how they are accessed.

The architecture gap

[Diagram: AI systems operate horizontally across systems, while governance remains vertical. Image credit: Puneet Bhatnagar]

The issue is not that any one control is missing.

It is that AI systems operate across the seams of systems designed to govern within their own boundaries.

Repositories enforce code controls.

Pipelines enforce deployment controls.

Identity systems enforce access controls.

Security tools enforce policy checks.

Each works as designed.

But AI systems move across all of them.

They read from one system, generate changes, trigger another and influence a third. Authority is exercised across systems, while governance remains within them.

That is the architectural gap.

A different governance model

Most organizations will respond to this shift by trying to extend existing access controls. That instinct is understandable — and insufficient.

The problem is no longer just who or what can access a system. It is how control is maintained when authority can generate new actions dynamically.

This requires a different model of governance.

One that treats software systems as actors whose behavior must be bounded, observed and continuously evaluated across workflows — not just permitted or denied at a point of access. Governance becomes less about static permissions and more about controlling the shape and impact of actions across systems.

That is the shift.

Conclusion

The conversation around AI in software development often focuses on productivity.

But as AI systems begin to participate in producing and modifying enterprise software, the more important question becomes governance.

AI is not just accelerating the software development lifecycle. It is becoming part of the software supply chain itself.

And that changes the problem.

The challenge for CIOs is no longer just managing developers, tools or pipelines. It is understanding and governing the authority that software systems exercise across them.

Because in a world where software can act on behalf of the enterprise, governance is no longer just about access.

It is about authority — what systems are allowed to do, and how that authority is controlled and measured over time.



I gave our developers an AI coding assistant. The security team nearly mutinied

May 6, 2026, 09:00

I’ve sat in enough risk meetings to know the sound a bad surprise makes before anyone names it. It usually starts with a pause. Then a throat gets cleared. Then someone says, “We may need to bring the CISO into this.”

That happened over a developer tool.

Not a breach. Not a regulator. Not ransomware at 2:00 a.m. A coding assistant.

At first, I thought the reaction was overcooked. I’d seen the same pattern in other boardrooms and delivery teams. A new tool appears. Engineers like it because it saves time. Leadership likes it because it promises more output without hiring half a city. Security hates it because security has the social burden of being the adult in the room when everyone else is buying fireworks.

I backed the rollout because the case was clean on paper. Developers were drowning in repetitive work. Deadlines were tightening. Technical debt had started breeding in the dark. The assistant could draft tests, explain old code, suggest refactors and help junior engineers stop treating Stack Overflow like an underground pharmacy. And this was no longer fringe behavior. In 2025, Microsoft said that 15 million developers were already using GitHub Copilot, and the tool has spread further since then.

So yes, I approved it.

Then security nearly revolted.

That week taught me something I now say to clients more bluntly than I used to. AI coding tools do not just change software delivery. They change the terms of trust inside the company. They force you to answer ugly questions about control, proof, accountability and review discipline. Most public coverage still stares at productivity. The harder story sits elsewhere. Governance.

The part that looked sensible

The truth is, I didn’t approve the tool because I was dazzled. I approved it because I’ve spent years watching good people waste good hours on bad repetition.

You can only tell a team to “be strategic” so many times before they start laughing at you. Developers were buried under boilerplate, documentation drift, brittle legacy code and the kind of ticket churn that makes bright people look tired. A coding assistant looked like a relief. Not magic. Relief.

That distinction matters.

In advisory work, I’ve learned that many poor decisions do not begin as foolish decisions. They begin as reasonable decisions made inside an outdated control model. That’s what this was. The business case made sense. The mistake was assuming the old review system could keep up with the new speed.

That old assumption dies hard. Leaders often think software risk changes when the code changes. Often, it changes earlier, as production conditions change. If a machine now drafts what humans once wrote line by line, the issue is not only code quality. It is code volume, code origin and the shrinking time between suggestion and production.

That is a different risk shape.

Why security lost its patience

The security team was upset because they could see the math.

Code output was about to rise. Review time was not.

That gap is where trouble rents office space.

Many non-security leaders still imagine the concern is simple. “The AI might write bad code.” That’s the kindergarten version. The real concern is broader and nastier. Who reviewed the output? What hidden package did the model nudge into the build? What sensitive context got pasted into the prompt window? Which junior engineer trusted the suggestion because it sounded calm and looked polished? Which policy assumed human authorship when the draft came from somewhere else?

Those are not philosophical questions. They are operating questions.

Recent security work has made this much harder to dismiss. Snyk described a February 2026 case in which a vulnerability chain turned an AI coding tool’s issue triage bot into a supply chain attack path. That is the sort of sentence that makes security teams sit up straight and ask for names, logs and meeting invites.

And that is before you get to the quieter problem. AI-generated code can look tidy long before it is safe. Security people know that neat syntax can hide weak controls, lazy validation, poor handling of secrets and dependency choices nobody meant to own.

So when the team escalated, they weren’t staging a mutiny over a plugin. They were reacting to a change in production logic that nobody had yet governed.

What the fight was really about

Once the temperature dropped, the shape of the dispute became obvious to me. It was not engineering versus security. It was speed versus proof.

More precisely, it was four things:

  1. Velocity. The assistant increased output far faster than assurance could keep pace.
  2. Visibility. We did not have a clear sight of where the tool was used, what prompts were fed into it, what code it influenced or what external components it smuggled into the discussion.
  3. Validation. Existing checks were built for a world in which humans produced most of the first draft. That world is fading. When code generation speeds up, review cannot stay ceremonial.
  4. Governance. Nobody had written the rules that mattered most. Which use cases were fine? Which were off-limits? Who owned the risk of acceptance? What evidence would prove that the tool was used safely enough?

That last point gets too little airtime. Governance sounds dull until you don’t have it. Then it becomes the difference between controlled use and polite chaos.

NIST’s recent work on monitoring deployed AI systems makes the same point more broadly. Organizations need post-deployment measurement and monitoring because real-world behavior drifts, surprises occur and governance after launch remains immature. Different setting, same lesson. You cannot inspect your way out of weak operating design.

What we did next

We did not ban the tool. That would have been theatre dressed as courage.

We also did not wave it through and tell security to “partner more closely.” I’ve heard that sentence enough times to know it usually means, “Please absorb more risk with better manners.”

We did something less dramatic and more useful. We narrowed the rollout and rewrote the conditions of trust.

Low-risk use cases stayed in play. Drafting tests. Explaining old functions, helping with documentation and suggesting boilerplate. Those were manageable.

High-risk areas got tighter boundaries. Auth flows. Secrets handling. Encryption logic. Infrastructure-as-code for sensitive environments. Anything tied to regulated data or material security controls. Those needed a stricter review or stayed out of scope.

We also drew a hard line on prompt hygiene. No customer data. No credentials. No confidential architecture details dropped into a chat window because someone wanted a faster answer on a Friday afternoon. You would think that goes without saying. It does not.

Then we raised the review standard. Human sign-off meant real sign-off, not a quick skim and a merge. Scanning had to cover dependencies and code changes with more discipline. Provenance mattered more. Logging mattered more. Exception paths had to be explicit, not social.
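Part of that raised standard can be enforced mechanically rather than socially. A minimal pre-merge sketch, with illustrative path prefixes standing in for the actual high-risk zones, flags AI-assisted changes that must receive full human sign-off instead of a quick skim:

```python
# Illustrative high-risk path prefixes, not an exhaustive or real layout
HIGH_RISK_PREFIXES = ("auth/", "secrets/", "crypto/", "infra/prod/")

def requires_strict_review(changed_files, ai_assisted):
    """True when an AI-assisted change touches a high-risk path and
    therefore needs real sign-off, not a skim-and-merge."""
    if not ai_assisted:
        return False
    return any(f.startswith(HIGH_RISK_PREFIXES) for f in changed_files)
```

Wired into a CI check, a rule this small makes the exception path explicit: a merge into a high-risk zone either carries the stricter approval or it does not happen.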

Most importantly, security moved from late-stage critic to co-designer. That changed the tone. The question stopped being, “Can we use this?” and became, “Under what conditions can we trust its use enough to defend it later?”

That small shift matters more than many policy documents.

What both sides got right — and wrong

Developers were right about the waste. They were right that these tools remove drudgery. They were right that refusing every new capability is not a strategy. A team that cannot experiment eventually decays into compliance theatre and backlog sorrow.

They were wrong to assume readable code is trustworthy code. They were wrong to treat assistance as neutral. Tools shape behavior. That is what tools do. Once suggestions arrive fast and fluently, people accept more than they admit.

Security was right about review debt. Right about supply chain exposure, right about data leakage risk. Right that governance should not arrive three incidents late, wearing a blazer and a lessons-learned slide.

They were wrong at first, as many security teams are when they feel cornered. They made the conversation sound like a moral referendum. That never helps. If security cannot offer a usable path, the business routes around it. Then you get the worst of both worlds: Secret adoption and public optimism.

I don’t say that with smugness. I say it because I’ve watched good teams damage each other by defending the right thing in the wrong way.

The bigger lesson for leaders

This is where the story stops being about one rollout and starts becoming board material.

If your developers can now produce more code with less effort, your governance burden rises even if your headcount does not. The old ratio between output and oversight has broken. Many firms have not adjusted.

That matters because software governance is no longer just about secure coding standards or release gates. It is about production conditions. Who can generate? Under what rules? With what evidence? Across which risk zones? With whose approval? And if something goes wrong, who owns the final act of acceptance?

Those questions sound administrative until the first incident report lands, and nobody can explain whether the flawed logic was written, suggested, copied, reviewed or merely assumed.

The market is moving quickly. Microsoft’s own recent security reporting says organizations adopting AI agents need observability, governance and security now, not later. Snyk is making a similar argument from the perspective of the software supply chain. Visibility first. Then prevention. Then governance that holds under pressure.

That is why I now advise something that used to sound severe and now sounds merely accurate. If you deploy AI coding tools without redesigning your control model, you are not buying productivity. You are buying ambiguity at machine speed.

What you should ask before you approve the next tool

You do not need a grand doctrine. You need a few hard questions asked before excitement turns into policy by accident.

Where can this tool be used, and where can’t it be used?

What data may enter it?

How will you know when the generated code reaches production?

What review standard applies when the first draft came from a machine?

Who can approve exceptions?

What logs, scans and decision records will let you defend the setup six months later, when memories blur and staff rotate?
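If you want those questions to bite, they can be captured as a structured approval record that stays blocked while any of them is unanswered. A minimal Python sketch; every field name here is purely illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class ToolApprovalRecord:
    """One record per AI coding tool, answering the questions above.

    All field names are illustrative; adapt them to your own
    governance vocabulary.
    """
    tool: str
    allowed_scopes: list[str] = field(default_factory=list)    # where it may be used
    forbidden_scopes: list[str] = field(default_factory=list)  # where it may not
    permitted_data: list[str] = field(default_factory=list)    # what data may enter it
    production_tracking: str = ""  # how generated code is traced into production
    review_standard: str = ""      # review bar when the first draft is machine-written
    exception_approver: str = ""   # who can approve exceptions
    evidence_retained: list[str] = field(default_factory=list)  # logs, scans, decisions

    def unanswered(self) -> list[str]:
        """Return the questions still open; an empty list means approvable."""
        gaps = []
        if not self.allowed_scopes or not self.forbidden_scopes:
            gaps.append("usage boundaries")
        if not self.permitted_data:
            gaps.append("data rules")
        if not self.production_tracking:
            gaps.append("production traceability")
        if not self.review_standard:
            gaps.append("review standard")
        if not self.exception_approver:
            gaps.append("exception ownership")
        if not self.evidence_retained:
            gaps.append("evidence retention")
        return gaps
```

A blank record reports all six gaps; the tool gets approved only when `unanswered()` comes back empty.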

That is not bureaucracy. That is self-respect.

I still believe these tools have value. I’d be foolish not to. But I trust them the way I trust a very fast junior colleague with a beautiful writing style and uneven judgment. Useful. Impressive. Worth keeping. Not someone you leave unsupervised near the crown jewels.

The near-mutiny turned out to be healthy. It forced the truth into the room before a failure did. Security was not blocking progress. They were objecting to unmanaged speed. Developers were not being reckless. They were asking for relief from the grind. Leadership’s job was not to pick a side. It was to write a better contract between them.

That is the part that too many firms still miss.

The argument was never only about a coding assistant. It was about whether we still knew how to govern work once the work started moving faster than our habits. That is a much bigger story. And if you listen carefully, you can hear it starting in many companies right now.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Vibe coding goes enterprise: What you need to know about AI-driven legacy modernization

May 5, 2026, 09:00

Google’s CEO says vibe coding makes programming “enjoyable” and “exciting again.” Klarna’s CEO prototypes products in 20 minutes instead of waiting two weeks. Collins Dictionary named “vibe coding” its Word of the Year for 2025. The message seems clear: AI has democratized software development. Just describe what you want in plain English and let AI handle the code.

For CIOs managing enterprise software estates, this narrative doesn’t fully capture the complexity of their reality.

I’ve watched clients become captivated by the vibe coding promise. They see demos where AI generates a working prototype in minutes. They imagine their legacy modernization problems solved. Then they try applying these tools to a 25-year-old mainframe application processing millions of transactions daily and discover why speed alone doesn’t solve enterprise problems.

The gap between prototyping a new app and modernizing critical infrastructure isn’t about coding velocity. It’s about preserving decades of undocumented business logic while simultaneously transforming the technical foundation beneath it. That requires a fundamentally different approach than telling AI to “build me a customer portal.”

Diagram: Two approaches to AI-assisted development

Dotun Opasina

What vibe coding solves (and what it doesn’t)

Vibe coding — using natural language to prompt AI into generating code — has legitimate enterprise applications. A product manager can validate an idea without engineering resources. A business analyst can prototype a workflow automation without waiting for sprint capacity. A marketing team can build internal tools without IT tickets.

These are real productivity gains. When Sundar Pichai says vibe coding has “made coding so much more enjoyable,” he’s describing how AI removes friction from exploration and experimentation. The barrier between “I wish we had this” and “here’s a working version” has essentially collapsed.

But enterprise modernization isn’t exploration. It’s surgery on mission-critical systems where the patient can’t be sedated.

Consider the typical enterprise modernization scenario I encounter: A leading health care organization needed to modernize 10,000+ COBOL mainframe screens to improve claims processing and customer service. These systems were built before most current developers were born. The original architects retired years ago. Documentation is incomplete or contradictory. Business rules are embedded in code that nobody fully understands anymore.

Vibe coding tools can generate modern code quickly. What they can’t do is tell you whether that code implements the same business logic as the legacy system — logic that represents decades of regulatory compliance decisions, edge case handling and institutional knowledge that was never written down.

This is where the “vibe coding hangover” hits enterprise IT. Fast code generation creates new problems when applied to complex, tightly coupled systems.

The specification problem nobody talks about

Here’s the uncomfortable truth about AI-assisted development: AI generates perfect code for poorly defined problems.

I’ve seen this pattern repeatedly in client work. Teams use AI to accelerate development. Code gets written faster than ever. Then they discover the code solves the wrong problem because the requirements weren’t clear enough to begin with.

For greenfield projects building something new, you can iterate quickly. Wrong assumption? Rewrite it. Missed a requirement? Add it next sprint. The cost of mistakes is measured in developer time and missed deadlines.

For legacy modernization, mistakes compound differently. You’re not just building new functionality. You’re replacing systems that process payroll, manage inventory, handle financial transactions, route customer service calls — critical operations where “oops, we missed a business rule” isn’t acceptable.

Traditional modernization approaches tried to solve this through massive requirements-gathering efforts. Armies of business analysts documenting every screen, every workflow, every edge case. These projects took years and often failed because by the time you finished documenting, the business had evolved.

The enterprise-grade AI approach inserts a different layer: specification extraction.

Rather than jumping from legacy code to modern code, systems that work at enterprise scale first extract what the legacy system does — the business rules, the dependencies, the logic flow — into a clear specification. That specification becomes the source of truth for generating modern code. It’s verifiable. It’s traceable. It preserves institutional knowledge that exists nowhere else.
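One way to picture the specification layer is as a set of extracted rules, each traceable back to its legacy source and individually signed off before any modern code is generated. A minimal sketch; the class names and fields are hypothetical, not the actual model of any platform mentioned here:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessRule:
    """One extracted rule, traceable back to its legacy source."""
    rule_id: str
    description: str
    legacy_source: str      # e.g. the COBOL program or screen it came from
    verified: bool = False  # signed off by a human reviewer

@dataclass
class Specification:
    system: str
    rules: list

    def unverified(self) -> list:
        return [r for r in self.rules if not r.verified]

    def ready_for_generation(self) -> bool:
        """Only generate modern code from a non-empty, fully verified spec."""
        return bool(self.rules) and not self.unverified()
```

The point of the gate is that traceability and verification are checked per rule, so "the spec is the source of truth" becomes an enforceable condition rather than a slogan.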

At Publicis Sapient, our proprietary AI platform Sapient Slingshot embodies this specification-first approach. When RWE needed to modernize a 24-year-old application with no source code or documentation, the platform analyzed the running system to extract business logic before generating replacement code. What would have taken two weeks of manual reverse-engineering happened in two days, with human oversight ensuring accuracy.

This isn’t about speed. It’s about preserving what works while transforming how it runs.

Diagram: Why the specification layer matters.

Dotun Opasina

Why enterprise context changes everything

The difference between prototyping and production isn’t just scale. It’s context.

Vibe coding tools work well for isolated problems. Build a dashboard. Generate a data transformation script. Create an internal tool. These tasks have clear boundaries and limited dependencies.

Enterprise systems don’t have clear boundaries. A seemingly simple change to how customer addresses are validated might cascade through order processing, shipping logistics, tax calculation, fraud detection and customer service routing. Understanding those dependencies requires context that exists across thousands of files, dozens of databases and years of incremental changes.

This is where general-purpose AI coding assistants hit their limits. They can read individual files. They can suggest code completions. They can even generate multi-file changes. What they can’t do is understand how your 15-year-old inventory management system integrates with your 10-year-old order fulfillment platform which talks to your 5-year-old customer service tool — and why changing one piece breaks another.

Enterprise-grade AI modernization requires building an Enterprise Context Graph — a living map of how code, architecture, data and business rules connect. This context allows AI to make informed decisions about modernization, not just fast guesses.
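The cascade described two paragraphs up is exactly what a context graph makes computable. A toy sketch, using a hypothetical dependency map and a breadth-first walk to list everything downstream of a change:

```python
from collections import deque

# Hypothetical dependency edges: graph[Y] is the set of components that
# depend on Y, so changing Y lets us find every downstream dependent.
graph = {
    "address_validation": {"order_processing", "fraud_detection"},
    "order_processing": {"shipping", "tax_calculation"},
    "shipping": set(),
    "tax_calculation": set(),
    "fraud_detection": {"cs_routing"},
    "cs_routing": set(),
}

def impact_of_change(component: str) -> set:
    """Everything transitively downstream of a changed component (BFS)."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

Even this toy version shows why the "simple" address-validation change touches five other systems; a real context graph adds data, architecture and business-rule nodes on top of code-level edges.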

When a health care organization used this approach to modernize critical legacy systems, the platform identified hidden dependencies that would have caused production failures if missed. The AI didn’t just generate modern code faster. It generated modern code that worked in the complex environment where it needed to run.

Diagram: AI coding context requirements

Dotun Opasina

What this means for CIO technology strategy

The vibe coding phenomenon signals something important: AI is changing how software gets built. But for enterprise leaders, the strategic question isn’t “Can AI write code faster?” It’s “Can AI help us escape decades of technical debt while keeping critical systems running?”

The answer is yes — but only with the right approach.

  • Stop optimizing for coding speed. Your constraint isn’t how fast developers can write code. It’s how accurately you can understand and preserve business logic while modernizing the technical foundation. Tools that prioritize speed over comprehension will create more problems than they solve.
  • Start measuring specification accuracy. The new productivity metric isn’t lines of code generated. It’s code-to-spec accuracy — how reliably the generated code implements verified business requirements. Platforms achieving 99% code-to-spec accuracy enable modernization projects that were previously too risky to attempt.
  • Treat institutional knowledge as a strategic asset. Your legacy systems contain decades of business logic that represents real competitive advantage — edge cases handled, regulatory requirements met, customer workflows optimized. Modernization approaches that discard this knowledge to move faster are destroying value in the name of speed.
  • Invest in context preservation, not just code generation. The winners in enterprise AI adoption won’t be organizations that generate code fastest. They’ll be organizations that can systematically extract, verify and modernize business logic at scale.
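As an illustration of the second point above, code-to-spec accuracy can be defined at its simplest as the fraction of specification rules the generated code demonstrably implements. This toy definition is far cruder than what real platforms measure, and the rule IDs are invented:

```python
def code_to_spec_accuracy(spec_rules: list, implemented_rule_ids: set) -> float:
    """Fraction of specification rules that generated code verifiably
    implements (illustrative definition only)."""
    if not spec_rules:
        raise ValueError("empty specification")
    implemented = sum(1 for rule in spec_rules if rule in implemented_rule_ids)
    return implemented / len(spec_rules)
```

The useful property is directional: the metric rewards verified coverage of business rules, not volume of generated code.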

The modernization opportunity hiding in plain sight

Here’s what makes March 2026 different from March 2024: We now have AI systems capable of reading legacy code, extracting business rules and generating verified modern replacements at enterprise scale. The technology matured.

According to the Stanford AI Index 2025, 78% of organizations used AI in 2024, up from 55% in 2023. But adoption and effectiveness are different metrics. Most organizations are still experimenting with AI tools for individual developer productivity.

The strategic opportunity isn’t faster coding. It’s systematic technical debt elimination.

Consider the typical enterprise IT budget: 60-80% goes to maintaining legacy systems. That maintenance cost compounds annually as skills become scarcer and systems become more brittle. Every dollar spent keeping COBOL running is a dollar not spent on innovation.

Vibe coding tools won’t solve this. They’re built for creation, not preservation. Enterprise modernization requires AI that understands what you have before transforming it into what you need.

Organizations applying this approach are seeing 75% faster delivery timelines, 40% higher productivity and up to 50% savings in modernization costs. More importantly, they’re tackling modernization projects that were previously shelved as too risky or expensive to attempt.

The specification-first future

The vibe coding phenomenon will continue to accelerate. More business users will build tools. More prototypes will become products. More organizations will democratize software creation beyond traditional engineering teams.

For CIOs, this creates both opportunity and risk.

The opportunity: Free your engineering teams from routine development by enabling business users to build their own solutions. The risk: Create a fragmented estate of AI-generated tools that nobody can maintain.

The solution requires treating AI-assisted development as a spectrum. Prototypes and internal tools can embrace the speed and accessibility of vibe coding. Mission-critical systems and legacy modernization need specification-first approaches that prioritize accuracy and traceability over velocity.

Your competitors are experimenting with AI coding tools. The question is whether they’re building sustainable transformation capabilities or accumulating a new generation of technical debt at AI speed.

The CIOs who understand this distinction will spend 2026 systematically eliminating legacy constraints, while others remain focused on incrementally improving existing systems. By 2027, that gap will be difficult to close. Vibe coding democratized software creation. Enterprise-grade AI makes transformation predictable. Choose your tools accordingly.



SAP’s new API policy restricts AI access, draws customer criticism

May 4, 2026, 13:29

With the rise of AI, APIs have once again become increasingly vital tools for fueling transformation. Enterprise software APIs, in particular, provide a critical link for CIOs’ AI strategies, enabling them to extract data from core business systems and feed it into their AI models of choice, for analysis, decision-making, and action.

In response to the rapidly increasing use of APIs by non-SAP systems, enterprise software giant SAP has introduced a new API policy limiting access to the data housed in its systems. According to an official statement, the policy stipulates that only those interfaces listed in the SAP Business Accelerator Hub or in the respective product documentation are considered published APIs.

“Customer and third-party applications must not access, invoke, or interact in any manner with APIs that are not Published APIs,” the policy states.

‘This is unacceptable.’

While SAP justifies its new API policy as “designed to safeguard solution health” and as a necessary guarantee of technical stability, the policy could jeopardize customers’ planning certainty as well as their innovation capabilities, the German-speaking SAP User Group (DSAG) warns.

“For SAP-to-non-SAP scenarios, this means: They will only be reliably supported where SAP has explicitly published and documented the underlying interfaces,” DSAG Chairman Jens Hungershausen explained in a statement.

Furthermore, the DSAG believes that the SAP Business Accelerator Hub and the vaguely defined product documentation have not yet been clearly established as contractual components. From the customer’s perspective, this necessitates the creation of clear and reliable framework conditions to enable early assessment of the impact of changes, Hungershausen stated.

“The DSAG has long been demanding absolutely reliable contract documents. However, SAP has taken a contrary position, for example with the SAP Business Data Cloud and now with its API Policy,” says Michael Bloch, DSAG board member for licenses, contracts, and support. Customers currently have questions regarding the interpretation of the documentation, and from DSAG’s perspective, there is a need for clarification regarding their contractual classification. “This is unacceptable,” Bloch states.

Cutting off AI system access?

The DSAG points out that potential new pricing models or usage regulations surrounding APIs must be communicated transparently — and early — to ensure planning certainty for customers and partners. SAP, for example, has already developed a pricing model with its Digital Access model for creating certain document types in indirect usage.

“According to SAP information, there will be a fair-use model. However, the specific details are currently unclear and should be transparently documented in the API policy,” Bloch says.

Another critical point is that SAP links API usage to technical and organizational requirements. Moreover, use of APIs is restricted for certain scenarios, including:

  • Undocumented purposes
  • Systematic or large-scale data extractions
  • In conjunction with use of (semi-)autonomous or generative AI systems

Here, API usage is permitted only if it explicitly takes place within architectures or services provided by SAP.

“Except through and within the limits of SAP-endorsed architectures, data services, or service-specific pathways expressly identified and intended for such purposes, SAP prohibits API use for: (a) interaction or integration with (semi-)autonomous or generative AI systems that plan, select, or execute sequences of API calls, and (b) scraping, harvesting, or systematic and/or large-scale data extraction or replication,” the policy states.

“According to the information available to us, existing customer integrations and authorized partner solutions are not affected,” says DSAG CTO Stefan Nogly. However, he believes this important protection for existing integrations should be explicitly stated in SAP’s API policy.

Nogly points out that many user companies are already working on proofs of concept (PoC) and pilot projects based on the current interpretation of API usage. “From a customer perspective, we see a significant need for clarification and adaptation — especially to avoid disrupting existing business-critical end-to-end processes or making them legally vulnerable,” he says.

Stefan Nogly, DSAG Executive Board Member for Technology: “In an era of increasingly heterogeneous architectures and intensive AI experiments, APIs are a key driver of innovation.”

DSAG

More transparency and transition periods needed

The SAP user group is particularly critical of SAP’s lack of transparency. Its members point out that the new API policy does not clearly document which specific APIs are affected, nor is the extent of the impact clearly defined. “The question is which interfaces are used in the partner solutions,” says DSAG Chairman Hungershausen.

According to DSAG’s understanding, those using official APIs don’t need to take any action, although the lack of contractual safeguards doesn’t guarantee absolute security. For some partner companies, however, the effort involved could be significant, and business models could collapse.

“Therefore, it is essential that SAP grants customers more time for the transition,” Hungershausen says. Customers and partners also need concrete technical and organizational support for switching to SAP-supported interfaces.

From DSAG’s perspective, it is crucial that customers are not forced to resort to other solution providers due to a lack of viable alternatives when existing scenarios are limited.


Agentic AI is rewiring the SDLC

May 4, 2026, 09:00

The next wave of AI in software development goes beyond better code generation: agents are starting to take accountability throughout planning, design, build, test, release and operations. In the teams I work with, this is already changing team dynamics, leadership priorities and what CIOs must do to maintain quality, security and control.  

The biggest shift I see is genuine delegation: AI can now draft backlog items, inspect codebases, propose implementation paths, create tests, summarize reviews and prep releases before teams fully agree on ‘done.’ This marks a shift from AI as an assistant to AI as an active participant. That is why this topic matters for CIOs right now. With Google I/O on May 19–20 and Microsoft Build on June 2–3, attention will continue to rise around AI coding models, agentic development workflows and the platforms that now span planning through operations. Microsoft and GitHub are embedding agents more deeply into the engineering workflow.

Gemini Code Assist, GitHub Copilot’s coding agent, OpenAI Codex and Claude Code all reflect the same direction: AI is beginning to participate across planning, building, testing, reviewing and operations, not just within the editor. Google is extending coding assistance into broader lifecycle support. Amazon is leaning into operationalization. OpenAI and Anthropic are pushing agentic coding and repository reasoning. Newer prompt-to-app platforms such as Lovable and Replit are compressing the path from idea to working application. The market signal is clear: AI is moving beyond code suggestion and into software delivery itself.

For business and technology executives, the strategic question is no longer whether AI can generate output. It is whether the organization can use AI to improve delivery without creating faster paths to weak requirements, inconsistent standards, poor testing and vague governance. That is why I frame this conversation around software delivery rather than relying too heavily on the older SDLC label. SDLC still makes sense, but it sounds procedural for what is actually happening. Agentic AI is not just accelerating tasks inside a fixed lifecycle. It is rewiring the operating model of delivery. Recent DORA research reinforces what I see in practice: AI tends to amplify an organization’s existing strengths and weaknesses and the biggest returns come not from the tool alone, but from improving the delivery system around it.

Where agentic AI is creating the most value

The first place CIOs should focus on is where agentic AI is creating measurable value across the lifecycle. In planning and requirements, AI can already do meaningful first-pass work. Teams can ask it to inspect an existing codebase, summarize dependencies, suggest implementation paths, draft user stories, refine acceptance criteria and surface tradeoffs before engineers begin building. Used well, that reduces administrative drag and improves consistency. It also changes where the bottleneck appears. What I see most often is that teams adopt agentic tools expecting a boost, but the first real bottleneck appears upstream when acceptance criteria are too loose for the agent to interpret safely. The teams that struggle most are not the ones with weak prompts. They are the ones with vague intent. AI amplifies ambiguity as efficiently as it amplifies insight. OpenAI’s guidance for AI-native engineering teams describes agents contributing to scoping, ticket creation and other lifecycle work well before code is merged.

A practical model of agentic AI across the software delivery lifecycle.

Vipin Jain

In architecture and design, the real gain is not that AI can produce more diagrams. It can help teams compare options faster, trace dependencies, expose inconsistencies and document decisions with less manual effort. But architecture is not just pattern matching. It is a judgment about resilience, security, compliance, integration, cost and long-term business fit. The strongest teams use AI to explore options while architects define the guardrails, review points and non-functional requirements that the system must adhere to. In an agentic environment, architecture becomes more important, not less, because someone still has to define what the system is allowed to do. What I see in the strongest teams also matches Anthropic’s experience: simpler, well-bounded agent patterns usually outperform elaborate multi-agent complexity when the goal is reliable software delivery.

Build, test and review are changing even faster. GitHub Copilot’s coding agent, Claude Code, Amazon Q Developer, OpenAI Codex and Google’s broader agentic tooling all point in the same direction: the market is moving from AI-assisted coding to AI-assisted flow. In practice, that means agents can decompose work, generate code, create tests, run checks, summarize failures and prepare work for human review. The important metric is no longer lines of code per developer. It is the amount of safe, reviewable work the team can move through the pipeline without increasing rework. That is a more executive-relevant measure because it ties AI to throughput and quality rather than just speed. Benchmarks such as SWE-bench matter here because they test models against real repository-level software tasks, rather than isolated code snippets, which is much closer to the work CIOs are actually trying to improve.

Deployment, operations and maintenance are where the enterprise’s stakes become highest. This is the point that many organizations underestimate. Writing code is visible. Governing agent behavior in production is harder, less glamorous and much more important. In the teams I see gaining the most value, leaders are using AI to support release readiness, detect anomalies, summarize incidents, draft remediation steps and improve documentation around recurring issues. I have also seen teams pilot agents successfully in build, then stall at release because no one had clearly defined what the agent could change on its own, what required approval or who owned rollback when something went wrong. The organizations that make progress are the ones that answer those questions early. That is where trust is built. That is also why the market is shifting toward governed runtime and operations support, not just coding help; Amazon Bedrock AgentCore is one example of that broader move toward secure deployment, monitoring and controlled agent operation at scale.

How roles and teams are evolving

Agentic AI changes agile teams by shifting what roles contribute. Developers spend less time on first drafts and more time steering AI, validating diffs, hardening edge cases and managing exceptions. Their leverage shifts from typing speed to judgment — knowing what to trust, challenge or escalate. Leaders should recognize this meaningful change in role identity.

Architects also move up the value chain. In traditional environments, they often spend too much time creating static documentation that teams interpret unevenly. In agentic environments, the more valuable work is defining executable guardrails: approved patterns, tool boundaries, policy controls, integration rules and quality gates that both humans and agents can follow. That makes architecture more operational and more consequential.

QA, platform and SRE teams also gain influence. Testing becomes less about writing every case manually and more about building evaluation strategies, validating behavior, instrumenting pipelines and preserving rollback discipline. The closer AI moves to release and operations, the more essential traceability, observability and control become.

Product owners and business analysts also need to raise their game. When requirements are fuzzy, human teams usually compensate through conversation. Agents often execute fuzziness literally. In practice, that means the teams that benefit most from agentic AI are the ones that improve intent, edge-case thinking and acceptance discipline.

One more shift deserves attention: pro-code and low-code are converging. Microsoft’s Copilot Studio, IBM WatsonX Orchestrate, Lovable and Replit are lowering the barrier between idea and execution for a broader set of contributors. That is good news for experimentation and business alignment, but it also raises the risk of software sprawl outside shared architecture and security controls. CIOs should not dismiss these tools as toys, nor let them float free of governance. The most effective organizations will connect pro-code and low-code through common guardrails rather than force a false choice between them.

How agentic AI is shifting the center of gravity for core delivery roles.

Vipin Jain

What CIOs should do now

As roles and delivery processes evolve, what concrete actions should CIOs consider now? The organizations I see getting the most from agentic AI are not treating it as a coding-assistant bakeoff. They are redesigning the delivery system around it. That starts with intent. Leaders should raise the quality of requirements before work enters agentic pipelines. If the business outcome, constraints and acceptance criteria are unclear, the AI will often produce technically plausible but strategically wrong work.

Next comes guardrails and autonomy. Leaders should define what agents can do on their own, what requires approval, what systems and data they can touch and what evidence the pipeline must capture. This is not bureaucracy for its own sake. It is the difference between acceleration and avoidable damage. Teams need clear security rules, architecture patterns, approval boundaries and rollback paths before they scale autonomy. Google Research offers a useful counterweight to the hype here: more agents do not automatically produce better outcomes, especially when the task design, coordination model and workflow are weak.
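Guardrails of this kind can start as an explicit table mapping agent actions and risk zones to autonomy levels, with every unlisted combination forbidden by default. A sketch under those assumptions; the action categories and zones are invented examples:

```python
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "agent may act alone"
    APPROVAL = "human approval required"
    FORBIDDEN = "out of scope for agents"

# Illustrative guardrail table: (action category, risk zone) -> autonomy level.
GUARDRAILS = {
    ("generate_tests", "low"):  Autonomy.AUTONOMOUS,
    ("generate_tests", "high"): Autonomy.APPROVAL,
    ("modify_code", "low"):     Autonomy.APPROVAL,
    ("modify_code", "high"):    Autonomy.APPROVAL,
    ("deploy", "low"):          Autonomy.APPROVAL,
    ("deploy", "high"):         Autonomy.FORBIDDEN,
}

def autonomy_for(action: str, risk_zone: str) -> Autonomy:
    """Default to FORBIDDEN when a combination is not explicitly allowed."""
    return GUARDRAILS.get((action, risk_zone), Autonomy.FORBIDDEN)
```

The deny-by-default lookup is the design choice that matters: an agent attempting anything the table never anticipated gets stopped, rather than waved through.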

The management system leaders need for agentic software delivery.

Vipin Jain

Then comes observability. If an agent drafts code, generates tests, touches data, triggers a workflow or influences a release decision, leaders should be able to see that activity, evaluate it and audit it later. This is where many pilots remain weak. They prove that AI can do something. They do not prove that the organization can repeatedly trust it. That is why more formal evaluation matters. Microsoft’s guidance on agent evaluators is useful here because it focuses on operational signals leaders actually need: task completion, task adherence, intent resolution and tool-call accuracy.
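As a rough illustration, two of those signals (task completion and tool-call accuracy) can be aggregated from session logs. The log schema here is a made-up example, not Microsoft's format:

```python
# Sketch: aggregate two evaluation signals from agent session logs.
# The schema (task_completed, tool_calls, ok) is an assumption for illustration.
sessions = [
    {"task_completed": True,  "tool_calls": [{"ok": True}, {"ok": True}]},
    {"task_completed": False, "tool_calls": [{"ok": True}, {"ok": False}]},
]

def completion_rate(logs):
    """Fraction of sessions where the agent finished its assigned task."""
    return sum(s["task_completed"] for s in logs) / len(logs)

def tool_call_accuracy(logs):
    """Fraction of all tool calls, across sessions, that succeeded."""
    calls = [c for s in logs for c in s["tool_calls"]]
    return sum(c["ok"] for c in calls) / len(calls)
```

Tracking signals like these over time, rather than eyeballing demos, is what turns a pilot into something the organization can repeatedly trust.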

Finally, leaders should change how they measure success. Code volume and demo velocity are weak proxies. Better measures include defect escape, rework, release confidence, cycle time for work that reaches production safely and the percentage of work that moves through the pipeline with clear evidence and human accountability. Start with bounded use cases such as maintenance tasks, test generation, documentation, technical debt reduction and lower-risk feature work with strong review. Build supervision muscle before you try to scale autonomy.
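Two of those measures might be computed as below; the inputs and 30-day rework window are illustrative assumptions, not a standard definition:

```python
# Sketch of two delivery-outcome measures. Inputs are illustrative.

def defect_escape_rate(defects_in_prod: int, defects_pre_release: int) -> float:
    """Share of all found defects that escaped to production."""
    total = defects_in_prod + defects_pre_release
    return defects_in_prod / total if total else 0.0

def rework_ratio(lines_reverted_within_30d: int, lines_changed_total: int) -> float:
    """Share of changed lines reverted or rewritten shortly after merge."""
    return lines_reverted_within_30d / lines_changed_total if lines_changed_total else 0.0
```

If AI-assisted work drives these ratios up while raw code volume climbs, that is the signal that autonomy is scaling faster than supervision.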

The executive takeaway

The strategic mistake I see most often is treating this moment as a tool refresh or a beauty contest among AI coding platforms. Google, Microsoft, Amazon, OpenAI, Anthropic and the next wave of prompt-to-app players matter because they signal where the market is going. But the winning question for leaders is not which demo looks smartest. It is whether the organization is redesigning software delivery so AI can contribute without weakening quality, security or control.

More generated code is not the prize. Better software delivery is. The enterprises that win will connect business intent to engineering execution more tightly, instrument agent behavior more rigorously and redesign team roles around judgment, supervision and accountability. They will make AI part of the team, not just another tab in the IDE.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

This article is published as part of the Foundry Expert Contributor Network.


The $570K canary: What AI coding agents reveal about enterprise AI’s real gaps

May 4, 2026, 07:00

Boris Cherny, creator of Anthropic’s Claude Code, says he hasn’t written a line of code by hand in months. He shipped 22 pull requests one day, 27 the next, all AI-generated. Company-wide, Anthropic reports that 70 to 90% of its code is now written by AI. CEO Dario Amodei has predicted that AI could handle “most, maybe all” of what software engineers do within months.

And yet Anthropic typically has dozens of software engineering openings, one reportedly carrying $570K in total compensation. As one observer noted, the company is simultaneously predicting the end of the profession and paying top dollar to hire into it.

Meanwhile, during his GTC 2026 keynote, NVIDIA CEO Jensen Huang said that 100% of NVIDIA now uses AI coding tools, including Claude Code, Codex and Cursor, often all three. Then, in a conversation on the All-In Podcast during GTC week, Huang sharpened the point: A $500,000 engineer who doesn’t consume at least $250,000 in AI tokens annually is like “one of our chip designers who says, guess what, I’m just going to use paper and pencil.”

This isn’t cognitive dissonance. It’s a signal. And CIOs who look past the headlines will find a pattern that explains not just where AI coding is going, but where all of enterprise AI is headed.

Tellers, not toll booth workers

The instinct is to see this as an extinction event. AI writes all the code; engineers become toll booth workers, replaced entirely by automation with no complementary role left behind. But the data tells a different story, one I explored in a recent CIO.com article on AGI skepticism.

When ATMs rolled out, bank teller employment didn’t collapse. It doubled, from 268,000 in 1970 to 608,000 in 2006. The machines eliminated the routine transaction. But cheaper branch operations meant banks opened more locations, which created demand for tellers who could handle complex financial conversations. Economists call this Jevons Paradox: When technology makes something more efficient, demand expands rather than contracts.

Software engineers are bank tellers, not toll booth workers. AI agents are eliminating routine implementation: The boilerplate, the CRUD endpoints, the standard test scaffolding. But that efficiency is expanding the total surface area of what “engineering” means. Anthropic isn’t paying $570K for someone to type code. They’re paying for the judgment to orchestrate AI agents that type code: Deciding what to build, evaluating whether the output is correct, governing what gets deployed and maintaining systems that are increasingly written by machines.

Cherny confirmed this shift directly. His team now hires generalists over specialists, because traditional programming specialties are less relevant when AI handles implementation details. The skill premium has moved from writing code to supervising it, from production to orchestration.

The reason AI coding agents work

Here’s the question CIOs should be asking: Why are AI agents succeeding in software development faster than in any other enterprise function?

It’s not because coding models are better than models for customer service, legal review or financial analysis. The underlying LLMs are the same. The difference is that software development already had the infrastructure that every other enterprise function lacks.

Developers didn’t build this infrastructure for AI. They built it for themselves, over decades. But it maps almost perfectly to the six infrastructure gaps that are currently blocking AI agents from moving beyond employee-facing pilots into customer-facing production.

6 gaps the SDLC already solved

1. Governance: Right data, right users, right permissions

In software development, governance is built into the workflow. Branch protection, code review policies and role-based access controls create a clear chain of permission from draft to deploy, whether the author is human or agent.

Most enterprise functions have nothing equivalent. When an AI agent drafts a customer response, accesses a patient record or modifies a financial model, the governance layer (who approved this action, what data was it allowed to see, which policies constrain its output) is either ad hoc or absent. Microsoft’s 2026 Cyber Pulse survey found that while 80% of Fortune 500 companies have deployed AI agents, only 47% have agent-specific security policies in place.

2. Observability: Trace and audit the decision trail

Every line of AI-generated code has a paper trail. Git blame shows who (or what) wrote it. CI/CD pipelines log every build, test and deployment. When something breaks in production, engineers can trace the failure from alert to commit to the specific agent session that produced the change.

Outside of engineering, AI agent decisions are largely opaque. A customer-facing agent that denies a claim or escalates a complaint leaves no audit trail. Without observability, enterprises can’t debug bad outcomes, satisfy regulators or build the trust necessary to expand agent autonomy.
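A minimal sketch of such an audit trail, assuming a simple in-memory log (it would be durable, append-only storage in practice) and hypothetical field names:

```python
import time

# Sketch of an audit trail for agent decisions. In production this would be
# durable, append-only storage; a list stands in here. Field names are assumed.
AUDIT_LOG = []

def record(agent_id: str, action: str, inputs: dict, outcome: str) -> None:
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "inputs": inputs,      # what data the agent was allowed to see
        "outcome": outcome,    # what it decided
    })

def trail_for(agent_id: str) -> list:
    """Everything a given agent did, for debugging or regulators."""
    return [e for e in AUDIT_LOG if e["agent"] == agent_id]
```

This is the non-engineering analogue of git blame plus CI logs: when a claim is denied or a complaint escalated, there is a trail from outcome back to the agent session that produced it.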

3. Evaluation: Measure correctness at scale

Unit tests, integration tests, type checking, linting and automated QA give software engineering something no other enterprise function has: Continuous, objective measurement of whether AI-generated output is correct. That provides a foundation for proving an agent gets it right.

This is the gap other enterprise functions feel most acutely. DigitalOcean’s 2026 survey of 1,100 technology leaders found that 41% cite reliability as their number one barrier to scaling AI agents. Reliability is an evaluation problem: Without automated, continuous measurement of agent output quality, organizations can’t trust agents enough to put them in front of customers.

4. Memory: Persistent context beyond the context window

Developers take persistent context for granted. Version control, documentation and architectural decision records provide context that survives across sessions, teams and years. An AI coding agent can read the commit history, understand why a design choice was made in 2019, and factor it into today’s implementation.

Most enterprise AI agents operate in a memoryless state. Each customer interaction starts from scratch. Each agent session has no awareness of prior decisions, escalations or context beyond what fits in the context window. This is why employee-facing agents (IT help desks, NOC ticketing) succeed where customer-facing agents stall: Internal users tolerate repeating context. Customers do not.
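A toy sketch of the alternative: facts stored outside the context window in a file, so a fresh session can recover prior context. The schema and keys are assumptions for illustration only:

```python
import json
import os

# Sketch of persistent agent memory: context that survives across sessions
# because it lives outside the model's context window. Schema is illustrative.
class MemoryStore:
    def __init__(self, path: str):
        self.path = path

    def _read(self) -> dict:
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def load(self, key: str) -> list:
        """Facts recorded in earlier sessions for this customer or project."""
        return self._read().get(key, [])

    def append(self, key: str, fact: str) -> None:
        data = self._read()
        data.setdefault(key, []).append(fact)
        with open(self.path, "w") as f:
            json.dump(data, f)
```

A second session constructed against the same file sees what the first one learned, which is exactly what a customer interaction that "starts from scratch" lacks.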

5. Cost controls: Manage LLM spend across providers

Jensen Huang’s $250K-per-engineer token budget isn’t an abstraction. It’s a real cost management challenge that engineering teams are already navigating. Smart teams route differently depending on the task: Use a lightweight model for boilerplate generation, a reasoning model for architectural decisions and a code-specific model for refactoring. They set token budgets per agent session. They measure cost-per-PR and cost-per-feature, not just cost-per-token.
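A sketch of that routing-plus-budget discipline; the model names and per-token prices below are placeholders, not real price sheets:

```python
# Sketch of task-based model routing with a per-session spend budget.
# Model names and per-token prices are invented placeholders.
ROUTES = {
    "boilerplate":  ("small-model",     0.000001),  # $/token, assumed
    "architecture": ("reasoning-model", 0.000015),
    "refactor":     ("code-model",      0.000005),
}

class AgentSession:
    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def route(self, task_kind: str, est_tokens: int) -> str:
        """Pick a model for the task; refuse work that would blow the budget."""
        model, price = ROUTES[task_kind]
        cost = est_tokens * price
        if self.spent_usd + cost > self.budget_usd:
            raise RuntimeError("token budget exceeded; needs human review")
        self.spent_usd += cost
        return model
```

Measured this way, spend attaches to the workflow (cost per task, per PR, per session) rather than accumulating invisibly per token.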

Enterprises deploying AI agents in other functions rarely have this granularity. When Goldman Sachs reported near-zero GDP impact from AI in 2025, the missing variable was cost discipline at the workflow level. Without the ability to route, throttle and measure LLM spend per agent task, scaling agents means scaling costs linearly, which eventually kills ROI.

6. Deployment flexibility: Any cloud, on-prem, no lock-in

In software development, the runtime has always been portable. Code that runs on AWS today can run on Azure tomorrow, or on bare metal in your own data center. Containerization, Kubernetes and infrastructure-as-code tools like Terraform mean that engineering teams can change their minds about where workloads run without rewriting the application. Software has had this mindset for decades.

We’re early enough in this agentic development game that it’s tempting to take shortcuts. Organizations that build on a single hyperscaler’s agent framework find themselves locked into that provider’s model ecosystem, observability tooling and pricing structure. As agentic AI matures, deployment flexibility (the ability to run agents on any cloud, on-prem or across hybrid environments without vendor lock-in) will separate organizations that scale from those that stall.

Sometimes you’ll want agents to run close to your data. Other times, you’ll want agents close to the users. And you’ll want your developers to be able to move back and forth between different agent code bases without having to learn a different framework between them.

What CIOs should watch at Build and I/O

Google I/O and Microsoft Build will dominate May with dueling AI coding announcements. The temptation will be to compare model benchmarks. That’s the wrong lens. The models are converging. The real competition is one layer down, in the infrastructure that makes AI agents viable outside of software development.

CIOs watching these conferences should evaluate each announcement against the six gaps: Is Microsoft closing the governance gap with Azure AI Foundry? Is Google advancing observability through Vertex AI? Which platform is making it easier to evaluate agent output at scale, maintain persistent memory across sessions, control costs at the workflow level and deploy without lock-in?

The company that wins the AI coding war will be the one that builds the infrastructure layer that transfers to every other enterprise function. Those are the real stakes of May’s developer conferences, and they’re the real reason CIOs should be paying attention.

The canary’s message

Software engineers are the first knowledge workers to live inside a fully agentic workflow. They’re the canary in the coal mine for every other enterprise function. And right now, the canary is singing, not dying.

The lesson isn’t that AI coding agents have made engineers obsolete. It’s that AI coding agents work because engineers already built the infrastructure that makes agents trustworthy. Governance, observability, evaluation, memory, cost controls and deployment flexibility: These aren’t nice-to-haves. They’re the reason Anthropic can ship 27 AI-generated pull requests in a day and sleep at night.

Every other enterprise function will need to build its own version of that infrastructure before AI agents can move from employee-facing pilots to customer-facing production. The models aren’t the bottleneck. The scaffolding around them is.

Anthropic paying $570K for a software engineer whose job might not exist in a year isn’t a contradiction. It’s Jevons Paradox. And it’s the most expensive leading indicator in enterprise AI.

This article is published as part of the Foundry Expert Contributor Network.



19 vibe coding tools for democratizing app development

May 1, 2026, 07:01

Who doesn’t want an AI to pump out more code in minutes than a human might write in a month? Who doesn’t like magic? That’s what the hype around vibe coding has asked of developers and business users alike since its inception.

But now the tools might have matured enough to deliver.

Yes, cautious leaders are right in wondering, “What’s the catch? Is this a trap?” After all, the AIs learned to code from examining code created by humans, and humans fail. So it should be no surprise that some vibe coders and industry experts are reporting vulnerabilities such as undocumented endpoints and sensitive data leakage.

Still, many are diving right into the vibe-code deep end and are reporting positive results. The new tools, they say, are amazing. Vibe coders can build a prototype in minutes and a minimum viable product in a few iterations. In go a few handwavy sentences, and out comes something that used to take weeks to produce, not to mention all the red tape of securing development time if you’re a business user. Sure, there can be errors and omissions, but are they any worse than what a team of humans might inadvertently include or overlook?

The reality is that vibe coding is legit enough that enterprises need to start experimenting. The platforms offer numerous differences. Some are better suited to helping professional developers who often need to work with large code bases. Others want to help not-so-professional programmers who know what they want but aren’t ready to write it all by themselves. Maybe they don’t know a particular programming language or maybe they don’t know much about programming at all. Still others are aimed at complete novices who can barely turn on a computer.

And it’s not just the level of experience that distinguishes them. Some tackle smaller wishes — the kind of tool that professional developers can use as a “force multiplier.” Others create entire applications, from database to front-end, and they’re best for people who want to create a prototype. The architecture may not be ready to scale for a large user base, but they’re ideal for presenting to the boss or some investors. They can also do a pretty good job with a smaller collection of users.

Are any good enough? Can we work around their flaws? The only way to find out is to fire them up and look at the results.

Here is a list of 19 vibe coding tools worth checking out, in alphabetical order. They all promise to provide some amount of app-dev magic.

Base44/Wix

The process at Base44 (now owned by Wix) begins with a Builder Chat in which the discussion focuses on the data architecture. From there, your words guide a tool that merges React and Tailwind code for the front-end with a Deno backend. To speed things up, templates can jumpstart common use cases — ecommerce, content management, productivity, etc. Many parts of the UI can be adjusted with a drag-and-drop visual editor that can be much faster than trying to describe all your changes with words.

Betty Blocks

The developers of the Betty Blocks no-code system say they’re targeting “citizen developers” — non-programmers who know what their corner of the enterprise needs. The platform takes a description and then produces React code that can be exported to a code repository for future development or compiled to WASM for deployment. They also offer a “low-code” approach that provides a visual interface for further tweaking and refinement.

Blink

The code generation agent from Blink produces TypeScript React applications starting with two modes: agent mode, for building; and chat mode, for discussing and planning. Developers start with chatting and then toggle back and forth. Blink will host any application with its internal CDN or allow you to export it to your own servers or cloud.

Bolt

The no-code chat service from Bolt is designed to provide a single visual interface to various backend coding AIs. It’s possible to work with different coding agents, including some of the best-known ones, such as Anthropic’s Claude and Google’s Gemini. The design layer is broken out, making it possible to create a standard design that is then adopted by any app the AI produces. An open-source version released under the MIT license is also available.

Bubble

The no-code tool from Bubble includes several features that let human users do more than just chat. A full visual editor lets developers adjust the interface directly. A workflow view outlines much of what’s going on underneath, again making it possible for the user to do more than guess. The goal is to make humans more of a partner.

Claude Code

Anthropic’s Claude Code is skilled at handling many programming needs, including creating new applications or fixing old ones. The backend connects to many traditional IDEs, such as VSCode, but many users connect with Claude through a terminal window or even a Slack channel. One common use is searching a large code base for the right place to begin fixing an issue. Many praise how it adapts to your local coding standards.

Continue

The open-source agent from Continue is best for professional developers who need a bit of extra help vibe coding. The tool watches a code base for triggers, such as new pull requests, and then invokes AI agents to handle many of the chores. The goal isn’t to do everything, just the most boring things, so humans can be creative. The tool integrates with many IDEs and AI APIs.

Create

The tool from Create.xyz is called Anything because they want users to be able to create any possible React/Tailwind app from a simple text prompt. The results are crafted from many stylized components for tasks such as database access that run well in either the browser or a mobile platform. Developers can then dive into the code and add any human touches.

Cursor

Many old-school developers love Cursor because it’s designed to be the assistant they’ve never had. It writes new code, audits old code, and tracks issues evolving over channels like Slack. The tool can juggle multiple files and analyze entire codebases before proposing a plan of action and then executing it. It’s not exactly fair to call it “no-code” because it’s designed to work in a traditional development environment, but many users aren’t writing much code anymore because Cursor does so much.

Emergent

The web application from Emergent is a front end for a team of AI agents that will turn your text description into a frontend (React), backend (Node.js), databases (MongoDB), and a collection of APIs with full integrations (Stripe, etc.). The goal is not just to perform hand-holding, but to hide all the complexity of development behind a big facade. Non-developers can build everything they want, while developers can knock off prototypes — all deployed in close to production-ready form.

Kilo Code

The open-source coding agent from Kilo has a number of features that will appeal to coders maintaining and extending larger code bases. Orchestrator Mode, for instance, helps create work plans, while Code Review double-checks for errors. A Memory Bank stores high-level details about the project’s architecture. Connections to more than 500 models avoid lock-in and let you choose the model that delivers just what you want.

Lindy

The tool from Lindy focuses on creating “agents” that tend to be bits of code that sit in the background and respond to triggers like a Slack message or a new commit to a repository. There are hundreds of different web applications that can generate these events, including all major clouds and office organization sites such as Jira or Zoho. Many of the standard applications can be built more quickly by leveraging pre-defined templates. Some of the most common use cases include support agents.

Lovable

The no-code interface from Lovable chats with you for a bit and then builds the app and deployment in Lovable’s cloud. It handles the UI (React plus Tailwind), business logic, and database (mainly Supabase). The result is one of the fastest ways to spin up an enterprise-ready prototype with a fairly polished interface and many of the security and access control features required for bigger environments.

Replit

The no-code solution from Replit delivers code in up to 30 programming languages, supporting all the major languages and many of the minor ones. The main interface is a no-code chatbot, but after that it dumps the code in a repository where it can be further refined using traditional methods. The database layer is broken out and treated separately, which allows for a more traditional approach by enabling options such as a separate database for production and testing. There are enterprise features that allow a team to collaborate as they chat together to improve the app.

Softgen

The tool for creating full Next.js web apps from Softgen works with several of the major models, such as Claude 4.5 or Gemini, to turn a basic text description into a full minimum viable product. A pay-as-you-go option allows you to pay for only the tokens your application requires.

Solid

Creating basic applications isn’t too hard anymore. Solid emphasizes creating apps that offer “enterprise-grade” deployments with top-notch security models and distributed deployment. The documentation emphasizes working iteratively with the AI and playing to its strengths, which is crafting a React/Tailwind front end with a wide variety of backends that generally mean TypeScript code running on Node.js with PostgreSQL.

Tempo Labs

The visual editor from Tempo Labs aims to allow human users to create a React app ten times faster. Simple visual tasks can be performed without relying on the AI. The tool emphasizes design and maintains a library of standard elements for each project. Any React code base can be imported and extended with predeveloped components and templates. It’s an editor that lets you vibe.

Vercel

The v0 system from Vercel offers a large collection of templates as a foundation for any app designer. The main interface is still a chat box that accepts any designs, but the templates act as both inspiration and a shared language for writing the specifications. There are also design templates that make it simpler to harmonize several different applications by allowing you to define a look once and then reuse it easily. The templates are also focused on mobile browsers to make it simpler to create mobile-ready sites.

Windsurf

Teams working with big code bases can use Windsurf, an IDE with embedded AI that’s designed to handle longer, multi-step plans for fixing bugs or adding features to a code base. It’s vibe coding, but focused on assisting traditional techniques. The tab key in the Windsurf IDE is quite powerful: it moves from suggested fix to suggested fix, waiting for you to signal your approval by hitting it again. The IDE aims to produce multi-step plans that it calls “cascades.”

Enterprise Spotlight: Transforming software development with AI

May 1, 2026, 06:30

Artificial intelligence has had an immediate and profound impact on software development. Coding practices, coding tools, developer roles, and the software development process itself are all being reimagined as AI agents advance on every stage of the software development life cycle, from planning and design to testing, deployment, and maintenance.

Download the May 2026 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World and learn how to harness the power of AI-enabled development.

From AI coding assistant to development pipeline: OpenAI’s ‘Symphony’ transformation experiment

April 29, 2026, 05:12

OpenAI has released “Symphony,” an open-source specification for turning the issue tracker into a control platform for coding agents.

Rather than invoking AI once for each individual coding problem, as before, Symphony is designed so that agents pull tasks directly from the issue tracker and carry them out. Each agent runs in an isolated workspace, monitors continuous integration (CI), and prepares changes before handing them off for human review.

In a blog post on the 27th, OpenAI explained that the system grew out of a bottleneck it experienced internally. As engineers began running multiple Codex sessions in parallel, the burden of context switching became significant beyond about three to five sessions, limiting the productivity gains that faster coding agents could deliver.

Results came quickly. OpenAI said some internal teams saw a 500% increase in merged pull requests within three weeks of adoption.

The orchestration layer also tracks issue state, restarts interrupted or stalled agents, and manages per-issue workspaces. In addition, it watches CI, rebases changes, resolves conflicts, and shepherds pull requests smoothly through to review.
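
The workflow the article describes (agents claiming issues from a tracker, working in isolated workspaces, being restarted when they stall, and handing finished changes off to human review) can be sketched as a simple control loop. This is an illustrative reconstruction, not the actual Symphony specification; the `Issue` and `run_agent` interfaces here are invented.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    STALLED = "stalled"
    READY_FOR_REVIEW = "ready_for_review"

@dataclass
class Issue:
    id: int
    status: Status = Status.OPEN
    attempts: int = 0

def run_agent(issue: Issue) -> bool:
    """Stand-in for a coding agent working the issue in an isolated
    workspace. To exercise the restart path, it 'stalls' on the first
    attempt and succeeds on the retry."""
    issue.attempts += 1
    return issue.attempts > 1

def orchestrate(issues: list[Issue], max_attempts: int = 3) -> None:
    """Minimal control loop: claim open issues, restart stalled agents,
    and promote finished work to human review."""
    pending = list(issues)
    while pending:
        issue = pending.pop(0)
        issue.status = Status.IN_PROGRESS
        if run_agent(issue):  # agent produced a change that passed CI
            issue.status = Status.READY_FOR_REVIEW
        elif issue.attempts < max_attempts:
            issue.status = Status.STALLED  # requeue for a restart
            pending.append(issue)

issues = [Issue(1), Issue(2)]
orchestrate(issues)
print([i.status.value for i in issues])  # every issue ends up awaiting human review
```

The point of the sketch is the division of labor: the loop owns scheduling, retries and state, while humans enter only at the review boundary, which matches how the article describes Symphony’s orchestration layer.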

“The more fundamental change is in how teams view work,” OpenAI said. “With engineers no longer spending time directly supervising Codex sessions, the economics of code changes are completely different. Because human effort is no longer invested in implementation, the perceived cost of each change drops.”

The approach brings new problems as well, however. OpenAI acknowledged that agents can fall short of expectations on ticket-level work, and that not every task is suited to orchestration. Ambiguous or high-judgment tasks in particular, it added, still need to be handled directly by engineers through interactive Codex sessions.

Sanchit Vir Gogia, CEO and chief analyst at Greyhound Research, said Symphony “is better understood as a new operating layer for software delivery than as a simple AI coding assistant.” He added, “In that it schedules and tracks work and handles retries, state persistence and flow control, it is evolving into something resembling a lightweight operating system.”

What it means for enterprises

Biswajeet Mahapatra, principal analyst at Forrester, said Symphony is shifting AI from a simple developer productivity tool into an execution model for carrying out software work.

“Forrester research on agent control platforms and adaptive process orchestration shows that AI creates more value when it is embedded in workflows and managed at scale than when individuals invoke it interactively,” Mahapatra said.

“Always-on orchestration turns AI from a personal coding assistant into shared engineering infrastructure,” he added, “helping teams organize their work around issues and tasks and reducing developers’ cognitive load.”

Enterprises, however, should look beyond simple output metrics such as lines of code or pull request counts and focus on quality, delivery speed, developer experience and business impact.

“Key metrics include lead time to a genuinely usable feature, defect escape rate, rework and code-churn frequency, operational stability and, from a DevEx perspective, developer engagement and cognitive load,” Mahapatra said. “Forrester’s application development research consistently emphasizes that productivity gains must show up not as more generated code, but as higher quality, faster feedback loops and clear business outcomes.”

Gogia likewise warned against reading the growth in pull requests as a sign of productivity. The 500% increase OpenAI cited, he argued, calls for careful interpretation rather than being taken as a positive signal.

“Generation scales easily; validation does not,” Gogia said. “As output grows, the burden on review, testing and governance grows with it.”

He stressed that organizations should also track friction in peer review, downstream rework, defect escapes, post-deployment incidents, recovery time and the impact on junior staff.

Challenges to resolve

Neil Shah, research vice president at Counterpoint Research, named the biggest challenges enterprises will face: keeping the orchestration platform secure and deciding how much autonomy to grant coding agents.

Shah said the orchestrator must handle diverse task types, support handoffs between agents and provide “complete transparency through comprehensive audit trails.”

That will matter even more as agents in automated orchestration environments begin creating and managing tasks on their own, reducing direct human involvement.

“Enterprises are struggling to apply consistent security policies, auditability and risk controls across distributed agent environments,” Mahapatra said, “and the problem deepens when orchestration is decoupled from the existing software development lifecycle (SDLC) or authentication systems.”

He added that to adopt open agent-orchestration specifications at scale, enterprises will have to resolve integration with legacy toolchains, accountability for agent decisions, change-history tracking and separation of duties.
dl-ciokorea@foundryco.com

Xiaomi releases MIT-licensed ‘MiMo V2.5,’ targeting the long-running AI agent market

April 28, 2026, 23:59

Xiaomi released MiMo-V2.5 and MiMo-V2.5-Pro under the MIT license, publishing them as open source on the 27th. Developers are expected to use the models to build AI agents that perform long-running tasks such as coding and work automation at lower cost.

Both models support a context window of one million tokens. MiMo-V2.5-Pro is optimized for complex agentic and coding tasks, while MiMo-V2.5 is a natively omnimodal model that can process text, images, video and audio.

The release comes as agentic AI workloads put new pressure on enterprise AI budgets. Because these systems consume large volumes of tokens while planning tasks, calling tools, writing code and recovering from errors, cost management and deployment control are becoming increasingly important for developers.

Xiaomi said the MIT license permits commercial deployment, continued training and fine-tuning without additional approval. Tulika Sheel, senior vice president at global market research firm Kadence International, said the structure “is rare in today’s AI market, in that enterprises can modify, deploy and commercialize the model without restriction.”

Xiaomi also emphasized performance. “On the ClawEval benchmark, MiMo-V2.5-Pro recorded a 64% pass rate using only about 70,000 tokens,” the company said in a blog post, “roughly 40-60% fewer tokens than Claude Opus 4.6, Gemini 3.1 Pro and GPT-5.4 at similar performance levels.”

Both models apply a mixture-of-experts (MoE) architecture to manage compute costs efficiently. The 310-billion-parameter MiMo-V2.5 activates only 15 billion parameters per request, while the 1.02-trillion-parameter Pro version uses just 42 billion. The Pro model’s hybrid attention design, Xiaomi says, can cut KV-cache storage by up to nearly 7x on long-context work.
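
The activation ratios implied by those parameter counts are easy to check. Using the figures quoted in the article (310B total / 15B active, and 1.02T total / 42B active), the share of weights active per request comes out under 5% for both models; the arithmetic below is our own, not Xiaomi’s:

```python
# Parameter counts quoted in the article, in billions.
models = {
    "MiMo-V2.5":     {"total": 310,  "active": 15},
    "MiMo-V2.5-Pro": {"total": 1020, "active": 42},
}

for name, p in models.items():
    ratio = p["active"] / p["total"]
    # e.g. MiMo-V2.5 activates roughly 4.8% of its weights per request
    print(f"{name}: {ratio:.1%} of parameters active per request")
```

This sparsity is what lets a trillion-parameter model price out closer to a ~40B dense model on per-request compute.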

Xiaomi also published case studies to demonstrate long-horizon performance. MiMo-V2.5-Pro completed a Rust-based SysY compiler over 4.3 hours and 672 tool calls, passing all 233 hidden tests, and generated an 8,192-line desktop video editor over 11.5 hours and 1,868 tool calls.

Will MiMo become an enterprise AI option?

Whether Xiaomi’s MiMo-V2.5 models can win adoption among enterprise developers over closed frontier models for agentic coding and automation workloads will come down to how performance, cost and risk are evaluated.

“When evaluating Xiaomi’s MiMo-V2.5 and derivative models, enterprise developers should center on total cost of ownership (TCO),” said Lian Jye Su, chief analyst at market research firm Omdia. “TCO consists of token efficiency, cost per task success and the absence of the licensing fees that accompany proprietary models.” He added, “Closed frontier models may still hold the advantage on general tasks or the most demanding edge cases, but open-weight models outperform on high-volume agentic work.”

Pareekh Jain, CEO of consulting firm Pareekh Consulting, said, “Enterprises should evaluate MiMo-V2.5 not as a substitute for Claude or GPT, but as a cost-efficient agentic model for high-token workloads.” He continued, “The key benchmark metric is not simple accuracy but ‘tokens per task success.’ Frontier models show high success rates on complex coding benchmarks, but at commensurately enormous inference cost.” He emphasized that “MiMo-V2.5 is designed around token efficiency, producing similar results with far fewer input and output tokens.”
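
The “tokens per task success” metric Jain describes is straightforward to operationalize: divide total token spend by the number of tasks that actually succeeded. A minimal sketch follows; the ~70,000-token / 64% figures for MiMo-V2.5-Pro are from the article, while the comparison model’s numbers are hypothetical, purely for illustration:

```python
def tokens_per_success(total_tokens: float, tasks_attempted: int, pass_rate: float) -> float:
    """Expected token spend per successfully completed task."""
    successes = tasks_attempted * pass_rate
    return total_tokens / successes

# Article-sourced: ~70,000 tokens and a 64% pass rate on ClawEval.
mimo = tokens_per_success(70_000, 1, 0.64)
# Hypothetical frontier model: higher pass rate, but twice the token budget.
frontier = tokens_per_success(140_000, 1, 0.70)
print(f"MiMo: {mimo:,.0f} tokens/success; frontier: {frontier:,.0f}")
```

Under these assumed numbers the higher-accuracy model still costs nearly twice as many tokens per successful task, which is exactly the trade Jain argues high-volume workloads should weigh.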

Jain said those characteristics let models like MiMo serve as an “economical core engine” for repetitive coding, quality assurance (QA), migration, documentation, testing and automation work, though he added that closed frontier models will continue to set the upper bound on quality for the most difficult tasks.

Ashish Banerjee, senior director analyst at Gartner, said models like MiMo “can genuinely change enterprise AI economics in the long-running agent space.” He explained, “Once a task scales to millions of tokens, pay-as-you-go proprietary APIs become a recurring cost burden rather than a convenience. MiMo, by contrast, is strategically competitive even in private cloud or self-hosted environments, on the strength of its MIT license, open weights, one-million-token context and comparatively low price.”

That does not mean enterprises will abandon proprietary APIs entirely. “Enterprises will continue to use proprietary APIs where high accuracy is required or operational burden must be minimized,” Banerjee said. “Repeatable agentic workflows at scale will move to open models, where cost predictability, data control and customization matter.” He added, “Ultimately the long-horizon, large-scale agentic AI market will evolve into a hybrid structure, with open models like MiMo serving to reduce API dependence.”

Su, for his part, cautioned that “the fact that the model was developed in China could be a concern for heavily regulated Western enterprises, which may become an obstacle during adoption.”
dl-ciokorea@foundryco.com

Microsoft adopts Anthropic’s ‘Mythos,’ applying generative AI to its Security Development Lifecycle

April 27, 2026, 05:13

Microsoft plans to integrate Anthropic’s AI model “Mythos” into its Security Development Lifecycle. The move signals that advanced generative AI is beginning to play a direct role in how major software companies identify vulnerabilities and harden their code.

Microsoft said it will use a range of advanced models, including Mythos Preview, to strengthen secure coding and vulnerability detection from the earliest stages of software development.

The announcement comes amid growing concern that Anthropic’s Mythos could sharply shorten the time between the discovery of a software vulnerability and its exploitation in real attacks. Industry analysts describe Mythos as capable of finding thousands of serious vulnerabilities across major operating systems and browsers, calling it a meaningful leap in AI-driven vulnerability research.

OpenAI has entered the field as well, introducing “GPT-5.4-Cyber,” a version of its flagship model specialized for defensive cybersecurity work. Keith Prabhu, founder and CEO of cybersecurity and risk management firm Confidis, predicted that OpenAI could emerge as an even stronger competitor if it releases the next-generation model referred to as “Spud.”

The move matters beyond Microsoft’s internal engineering organization. For enterprise security leaders, it is a clear signal that frontier AI models are moving past the experimental stage and into core cybersecurity workflows.

It is also likely to change not only how software companies build products but how security leaders perceive risk and shape their response strategies, given that attackers can wield the same AI tools.

“This shift is an important turning point in the secure software development lifecycle. Older tools were limited to detecting vulnerabilities through static code scans, but with AI, dynamic models that learn in real time can perform dynamic vulnerability analysis and even penetration testing,” Prabhu said.

Prabhu added that over time, pressure to adopt AI-based security tools is likely to spread beyond the large software companies to the broader market.

What Microsoft’s decision means

Neil Shah, research vice president at Counterpoint Research, said more than 95% of Fortune 500 companies use Microsoft Azure in some form, and Azure AI and the Copilot family are deeply embedded in roughly 65% of enterprises. Millions of companies, he added, rely on Microsoft’s various products and cloud services.

“Bringing Mythos into Microsoft’s Security Development Lifecycle can further strengthen the security of key products such as Windows, Azure, Microsoft 365 and the developer tools. Every enterprise using those products gains the security improvements even without direct access to Mythos,” Shah said.

Confidis’s Prabhu highlighted Microsoft’s statement that it validated Mythos against the company’s own open-source benchmark for evaluating real-world detection engineering work, with results showing substantial gains over previous models.

“That Microsoft is making this claim suggests the latest AI models have meaningfully improved over prior generations in their ability to identify genuinely exploitable vulnerabilities,” he said. “Still, as with any AI tool, their strength lies in rapidly analyzing code based on past training, so they may miss novel classes of vulnerabilities that only human experts can identify.” He advised that “a collaborative structure between AI and human experts remains essential.”
dl-ciokorea@foundryco.com

Your AI coding agent isn’t a tool. It’s a junior developer. Treat it like one

April 23, 2026, 07:00

Yet that is precisely how most organizations are deploying AI coding agents today. The prevailing narrative around “AI-powered development” frames these systems as productivity tools. Vibe-coding and agentic coding are considered something closer to a faster autocomplete or a more sophisticated IDE plugin. Flip the switch, the story goes, and suddenly your engineering organization becomes dramatically more efficient. Everyone is “all in” on the first hand of cyber-Texas Hold ’Em. That mental model is wrong.

AI coding agents are not tools. They behave far more like junior developers: capable, energetic, sometimes brilliant, but entirely able to cause catastrophic damage if given autonomy before they understand and respect the environment they’re operating in.

The organizations that treat AI coding agents like tools will create and accumulate technical debt at unprecedented speed. The organizations that treat them like junior engineers by onboarding them as talent, pairing with them and teaching them context will unlock the productivity gains everyone is chasing. The difference between those outcomes is not the technology. It is the management model.

The lesson every engineer learns early

Midway through the DevOps phase of my career, I worked at CME Group, which operates one of the most critical financial infrastructures on the planet. The CME processes roughly a quadrillion dollars’ worth of contracts annually and, at the time, ran across five datacenters with more than 10,000 servers, including racks of Oracle Exadata systems costing hundreds of thousands of dollars each. The biggest SIFI of SIFIs.

You did not get root access to that environment on day one.

Instead, you were paired with a mentor. Your mentor was part of a buddy system for onboarding new hires and was effectively a docent for the infrastructure. My mentor was a deeply technical manager named Matt, one of the most capable engineers I have ever worked with. His job wasn’t simply to show me which commands to run or where to find documentation. His job was to teach me how to ask the platform, a system of systems, meaningful questions.

When you’re managing infrastructure at that scale, every question returns thousands of answers.

  • Are the matching engines pinned correctly to CPU cores?
  • Are cgroups configured properly for workload isolation?
  • Which RAID arrays are starting to show drive failures?
  • Are firmware and BIOS versions aligned across production and QA?

None of this can be learned through a quick tutorial or a training video. You learn by doing. You learn by working through the ticket queue, performing dry runs, preparing rollback plans and executing changes within narrow maintenance windows (a few minutes per week).

The lesson wasn’t simply technical. It was epistemological. Engineering expertise is not about knowing commands. It is about knowing which questions matter and how to understand the response. And that knowledge only develops through mentorship, iteration and experience.

Why the pair-programming model matters

The software industry already solved this problem decades ago through a practice called pair programming. In agile teams, a senior developer pairs with a junior one. They work together on the same problem in real time. The junior developer contributes energy and fresh thinking, while the senior developer contributes experience and judgment. The result is faster capability development without sacrificing quality. At first, it might seem an expensive allocation of resources, but when you think it through, it is really a strong knowledge management technique.

AI coding tools are like a super smart baby: a nascent intelligence as eager as any recent college graduate, but without much real-life experience solving real-world problems, because it cannot draw on a body of lived experience and hard-won lessons in software development, release engineering and debugging. That description should sound familiar. It is essentially the profile of a junior developer.

The implication is obvious once you see it: the most effective deployment model for AI coding agents is the same pairing model that works for human developers. Human plus agent.

Not a human supervising an agent after the fact. Not just a human reviewing pull requests from an automated pipeline. But genuine co-development, with contextual education on why the vulnerability should not be introduced in the first place. When that pairing works, the productivity gains are real. When it doesn’t, you ship vulnerabilities faster than your security team can ever hope to triage them.

What the agent gets wrong first

The first time I worked alongside a coding model on a real security problem, the mistake it made was subtle but revealing. I was experimenting with ways to harden an API without introducing latency or complexity on the client side. The goal was to produce a transparent security uplift that improved the API’s defensive posture without forcing developers to substantially change how they interacted with the service.

The model generated plausible suggestions quickly. Too quickly. Some of the techniques it proposed were technically correct but operationally obsolete. Others referenced security mechanisms that had been deprecated. Still others ignored non-functional requirements around compliance or performance. In other words, the model surfaced relevant information but lacked the judgment to distinguish wheat from chaff. 

There is also a tendency to accept the legitimacy of the ask rather than question the assumptions and baseline parameters of the situation. The agent is not going to think outside the box (unless it is hallucinating a nonexistent function or package/library that solves the problem). It assumes that the question it has been asked to solve is a legitimate and valid problem in the first place.

Humans develop that discernment over time. It’s part of how we move from data to information to knowledge to wisdom, what information scientists call the DIKW pyramid.

Models don’t struggle their way up that pyramid. They jump directly to conclusions. The struggle, however, is a messy process of trial, failure and iteration, but it is where human experience and knowledge form. That knowledge is then further refined and distilled into wisdom. When that process is skipped, real expertise never develops. This is why treating AI coding agents as tools is dangerous. Tools don’t need to exercise judgment. Junior developers do.

How trust actually develops

Think about the best junior engineer you ever worked with. How long did it take before you trusted them to work independently? Rarely less than months. Oftentimes a year or more.

Trust emerges gradually. It grows from observing how someone works through problems: how they document changes, how they write tests, how they think about rollback procedures and anticipating edge cases and race conditions. In my own teams, I’ve always preferred a management philosophy of 100% freedom and 100% responsibility (Netflix Manifesto circa 2001).

Engineers on my teams are expected to behave like owners of the company. They are indoctrinated to commit infrastructure changes as code. They document their reasoning. They attach testing artifacts to their pull requests. We track progress not just by time spent but by contributions: Commits, documentation, testing evidence and operational discipline.

That process shapes junior engineers into reliable junior engineers. The exact same logic applies to AI coding agents. Trust should expand progressively.

  • At first, the agent proposes little code snippets and stanzas.
  • Then it drafts functions, packages and libraries.
  • Eventually, it might implement entire features, but only after proving it understands the environment and the risk appetite of the company.

Skip those steps, and you aren’t accelerating development. You’re accelerating chaos being driven by FOMO and FUD.
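
One way to make that graduated-trust model concrete is an explicit autonomy policy that gates what an agent may do on the trust it has earned. This is a minimal illustrative sketch, not a feature of any real tool; the tier names, actions and thresholds are all invented:

```python
# Illustrative only: tiers and thresholds are invented for this sketch.
# Trust is assumed to accrue from approved PRs, passing tests, clean rollbacks, etc.
TIERS = [
    (0,  {"propose_snippets"}),
    (25, {"propose_snippets", "draft_functions"}),
    (60, {"propose_snippets", "draft_functions", "implement_features"}),
]

def allowed_actions(trust_score: int) -> set[str]:
    """Return the actions permitted at the agent's current trust score:
    the highest tier whose threshold the score meets."""
    granted: set[str] = set()
    for threshold, actions in TIERS:
        if trust_score >= threshold:
            granted = actions
    return granted

print(sorted(allowed_actions(10)))  # a new agent may only propose snippets
print(sorted(allowed_actions(70)))  # a proven agent may implement whole features
```

The useful property is that the expansion of responsibility is recorded as policy rather than vibes: raising an agent’s ceiling becomes a deliberate, reviewable decision, just as it is for a human junior engineer.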

Learning from more than one chef

Over the course of my career, I’ve worked across a wide range of industries: dot-com era web development in San Francisco, trading infrastructure in European financial markets, cloud transformations for legacy enterprises and large-scale infrastructure engineering.

Each environment changed how I thought about software and security. The dot-com era taught speed and experimentation. European financial institutions taught rigorous project governance (PRINCE2 anyone?). Large-scale options and commodity exchanges taught what real operational resilience looks like.

Those experiences fundamentally reshaped how I approach engineering problems. AI agents will benefit from the same diversity. Pairing them with multiple engineers and rotating pairings over time will expose them to different coding styles, architectural philosophies and security techniques. Best practices, but not monolithic best practices aggregated and homogenized by token prediction algorithms trained on millions and billions of lines of code. Just as aspiring chefs learn from multiple masters, agents improve faster when exposed to varied expertise.

A warning for CISOs

Many security leaders today are under pressure to reduce developer headcount because executives believe AI can absorb the workload. This assumption misunderstands both security and AI. If an organization already has strong security discipline, with well-documented architectures, clear coding standards and mature review processes, then AI agents will amplify that core mindset and culture.

But if the organization has weak security habits, AI will amplify those weaknesses even faster. Human knowledge is like sunlight. Large language models are more like moonlight. A mere reflection of that knowledge. You cannot build a thriving ecosystem entirely under moonlight. Sooner or later, you need the sun, despite what the vampires and werewolves howling at the moon might lead you to believe.

The real promise of AI development

None of this is an argument against AI coding tools. Used properly, they are extraordinary collaborators. They can surface patterns across massive codebases, accelerate documentation and help engineers explore alternative designs more quickly than ever before.

But unlocking that potential requires the right mental model. Not as a tool, but as a junior developer. Onboard them. Pair with them. Teach them your systems, regale them with your stories of isolating a bug or race condition that took weeks to pinpoint. Rotate them across your teams. Expand their responsibilities gradually as trust develops.

That investment phase is what transforms AI from a novelty into a genuine multiplier. And like every good mentorship relationship in engineering, the payoff compounds over time. Treat your AI coding agent like a disposable tool and you’ll get disposable code (aka slop).

Treat it like a junior developer and you might just raise up the best engineering partner you’ve ever had.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Why bizware is becoming the dominant form of software

April 20, 2026, 06:00

Since the early 1950s, software has slowly moved from an obscure technical discipline to something that touches almost every person’s life every day. The transition was gradual at first. Most people didn’t have direct access to computers, but the businesses they interacted with did. Computers sat in back rooms quietly changing how companies handled inventory, accounting and customer relationships.

Computing accelerated in the 1980s and 1990s. The computer went from an obscure machine to something sitting on everyone’s desk and, eventually, their homes. At a minimum, people needed basic computer skills to complete everyday tasks.

Over the last 20 years, computing has evolved even further. It is no longer just a utilitarian tool; it is a fundamental part of daily life. Whether that’s good or bad is debatable, but it’s the reality we live in. And that reality requires massive technological infrastructure. Where businesses once needed buildings, now they also need websites.

To explain what this has done to software, it helps to look at another trade.

A skilled carpenter can build a beautiful mahogany table, cabinet or chair. Some spend decades mastering joinery, shaping, finishing and countless other techniques. With enough experience, they can build almost anything.

But homes are also built out of wood, and homes must be built in enormous quantities. There is massive economic pressure to build them quickly, efficiently and at scale. It would not be practical to build houses the same way master furniture makers build cabinets. The objectives are different. Home construction must happen quickly, with minimal waste, while still meeting building codes and safety standards. It is still carpentry, but it is a different discipline with different constraints.

The same thing has happened with software. The massive economic demand for digital infrastructure has created a new category of software work that operates very differently from traditional software engineering. Standing up the technology required to keep modern society running does not require deep knowledge of computer science or the inner workings of computers. Instead, it requires understanding a large ecosystem of specialized tools that assemble the components businesses need. It is not traditional software, but it is still a kind of software, one shaped by business infrastructure.

I call it bizware.

Software has split into two disciplines

This distinction becomes clearer when you look at how teams are organized. Traditional software teams often form around deep technical problems: building a compiler, optimizing a database engine or designing a new algorithm. Progress is measured by correctness, performance and innovation.

Bizware teams focus on something different. Most businesses now are not trying to develop software; instead, they need to deploy software to run their business. They are typically organized around business functions: payments, authentication, internal tools, customer dashboards or analytics pipelines. The goal is not to push the boundaries of computing, but to assemble reliable, secure systems quickly using existing components.

This difference in orientation changes how success is measured. In traditional software, elegance and efficiency matter. In bizware, speed, reliability and integration matter more. The system does not need to be perfect; it needs to work consistently and support the business.

Bizware is driven by business infrastructure, not computer science

Many traditional concepts of computer science are not central to bizware. Concepts like von Neumann architecture, NP-completeness or decidability are rarely relevant. Instead, it is far more important to understand authentication systems, infrastructure tooling, security frameworks and deployment pipelines.

This has created an entire ecosystem of tools that primarily exist to solve business infrastructure problems.

Docker is a good example. Docker solves a deployment problem that businesses face. It does not solve a universal computing problem. Building Docker required deep software expertise, but the people using Docker are leveraging it to solve the business problems that arise from large-scale deployment. The rise of platforms like Docker and Kubernetes reflects this shift toward operational software. These tools exist because companies need consistent environments across development and production.

In the beginning, these tools were hard to use. The computers were slow and the software infrastructure was comparatively primitive. A person had to understand the tools and have a significant traditional software background to effectively and efficiently use the tools. As the tools have matured, the knowledge of traditional software development has become less relevant.

To deploy your website globally, you no longer need to understand what NP-Complete means or the nuances of von Neumann architecture. However, outside of business environments, deployment is rarely a major concern. Students, researchers and hobbyists rarely struggle with deployment the way companies do. In contrast, tools like compilers or interpreters are universal; everyone writing software needs them.

Software has effectively undergone a kind of speciation, and a new, distinct discipline has emerged. Bizware and traditional software engineering require different skill sets. Both are difficult and require significant expertise, but they emphasize different types of knowledge. Being excellent at one does not automatically make one excellent at the other.

That distinction also explains where AI is currently being applied. AI struggles with traditional software development. It is not even close to replacing engineers doing deeply technical traditional software work. For example, if I wanted to design a domain-specific language to describe Kalman filters, AI would be almost useless. That task requires deep understanding across multiple technical fields and the ability to combine them creatively in ways that have never existed before. At the same time, the market for that kind of work is relatively small compared with the need businesses have for bizware. 

Bizware also operates under very different economic pressures than traditional software. Businesses need digital infrastructure at enormous scale. These systems must be built quickly, reliably and repeatedly across thousands of organizations. Because the problems are highly repetitive, automation becomes practical and extremely valuable. AI can often produce a reasonable starting point because the patterns are well-known and widely reused.

This also explains why discussions about AI often become confusing. AI is not impacting all software equally. It is far more effective in domains where problems are repetitive and patterns are well understood.

That aligns closely with bizware.

In contrast, traditional software development often involves creating something fundamentally new. That kind of work still requires deep expertise and cannot be easily automated. I explored a related dynamic in my analysis of why hardware and software development fail, where mismatched assumptions between disciplines create systemic problems. Understanding where AI applies and where it does not becomes much easier once the distinction between bizware and traditional software is clear.

Economic pressure is reshaping how software is built

Further, this scale has created strong incentives to standardize and automate as much of the process as possible. Cloud platforms, infrastructure frameworks, containerization and orchestration systems exist primarily to solve these operational problems.

Traditional software development is different. It focuses on building new computational capabilities: compilers, algorithms, operating systems, simulation tools and domain-specific systems that push the boundaries of what computers can do.

Traditional software development solves software problems. Bizware solves business problems. As a result, we’ve experienced a speciation of expertise and a separation of disciplines.

Why this distinction matters for companies

This divide helps explain many of the tensions inside modern technology companies. Engineers who excel at one discipline are often assumed to be interchangeable with those in the other, even though the skills and objectives are quite different.

The market for bizware is enormous. Capitalism constantly pushes toward optimization. That force becomes stronger as the market grows larger. We are seeing the same thing in construction. Companies like Reframe Systems are now building robots designed to automate large parts of home construction. The economic pressure to optimize never disappears. While skilled carpentry is still critical, homebuilding has become commoditized.

Bizware isn’t a lesser form of software, just as framing a house isn’t a lesser form of carpentry than building fine furniture. They simply exist to serve different economic needs.

Understanding that distinction clarifies what modern software development has become.

Software hasn’t disappeared. But the industry that once revolved around computer science now also revolves around operating digital infrastructure at enormous scale. For companies, this distinction has practical implications. This is not really a technical distinction. It is an operational one.

Hiring and team organization are focused on keeping the infrastructure running while also keeping it up to date. Before the internet, this used to be the purview of the store managers who needed to keep the store clean and accessible. What used to be physical infrastructure is now digital infrastructure.

Traditional software is not extinct, and it is not dying. If anything, it is more important than ever. However, it can feel that way because the scale of traditional development has been completely eclipsed by the scale of bizware.

This speciation has already happened; I’m just trying to give it a name. That way, people, businesses and organizations can all agree on what they are doing and what they want to do, because confusion around concepts like software and bizware costs money.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?
