
How to build an effective business continuity plan

Organizations face an increasingly threatening and volatile operating environment. Executives report rising risks across multiple areas, including cyber fraud, phishing, and supply chain disruptions, according to the World Economic Forum's 'Global Cybersecurity Outlook 2026' report.

At the same time, executives are increasingly concerned about how artificial intelligence, digital interdependencies, geopolitics, and today's complex operating environment raise the stakes of protecting their organization's technology and ensuring business continuity. Two-thirds (66%) of organizations have increased financial or resource support for business continuity and resilience in response, according to the Business Continuity Institute's 'State of Continuity and Resilience 2025' report.

Even so, business leaders are bracing for high-impact incidents that are becoming more frequent, which makes a solid business continuity plan more critical than ever. "Every company should have the mindset that it will face a disaster, and every one needs a plan to address the different potential scenarios," says Goh Ser Yoong, CISO of Ryt Bank and a member of ISACA's Emerging Trends Working Group.

A business continuity plan gives organizations the best chance of weathering a disaster by providing predefined instructions on who should perform which tasks, and in what order, to keep the business viable. Without such a plan, the organization will take longer than necessary to recover from an event or incident, if it recovers at all.

What is a business continuity plan?

A business continuity plan (BCP) is a strategic playbook created to help an organization maintain or quickly resume its business functions in the face of a disruption, whether caused by a natural disaster, civil unrest, a cyberattack, or any other threat to business operations.

"Continuity is about knowing the minimum time or loss an organization can absorb and still remain viable and operational. It's about how quickly you can recover before the situation escalates for your customers or your business, and which systems and processes you must restore, and in what order," says Matt Chevraux, managing director at FTI Consulting.

As such, a business continuity plan describes the procedures the organization must follow to minimize downtime, covering business processes, assets, human resources, business partners, and more.

A business continuity plan is not the same as a disaster recovery plan, which focuses on restoring IT infrastructure and operations after a crisis. However, a disaster recovery plan is part of the overall strategy for ensuring business continuity, and the business continuity plan should inform the measures detailed in an organization's disaster recovery plan. The two are closely related, which is why they are often grouped together under the term BCDR.

Business continuity also differs from resilience, although the two are interrelated. Business continuity focuses on restoring operations in the event of a disruption, while business resilience refers to an organization's strategy for responding to all kinds of internal and external forces to ensure its long-term survival and success.

Elements of business continuity planning today

Disruptive events are inevitable, according to researchers, risk officers, and executive advisors. "Gone are the days when organizations used business continuity or resilience programs as a kind of insurance in case something went wrong. Now, organizations must face reality; it is only a matter of time before a catastrophic incident affects customers," Forrester Research writes in its 'Business Continuity Management Software Landscape, Q1 2026' report.

Executives are not only operating in an environment where the risk of a catastrophic incident is a matter of 'when' rather than 'if,' but also in a world where the complexity of business operations has increased dramatically.

Organizations must now account for a growing volume of AI use cases, vendors, and third-party digital connections as part of their continuity plans, says Ross Tisnovsky, partner at Everest Group and leader of the firm's CIO research and advisory practice.

For example, today's plans must address the availability of AI as well as its accuracy and cyber risks, such as the threat of prompt injection attacks, he explains, noting that continuity plans must now account for newer concerns. "The concern with infrastructure and applications used to be availability, but what if the AI gives you garbage? That degradation in output quality is a continuity concern."

Likewise, organizations must assess and address their growing operational dependence on third parties, whether hyperscalers or LLM providers, a factor that also adds more complexity to business continuity plans, says Tisnovsky.

"We now have all these vendors, and on top of that we depend much more on APIs and the service mesh. We depend on potential connections we aren't even aware of," he explains. "That can create exposure you can't control."

All these considerations come on top of the myriad conventional risks a business continuity plan has always had to address, Tisnovsky adds.

Creating (and updating) a business continuity plan

Whether creating the organization's first business continuity plan or updating an existing one, the process involves several essential steps.

Assess business processes for criticality and vulnerability: Business continuity planning starts with understanding what matters most to the business. Assess business processes to determine which are the most critical; which are the most vulnerable, and to what kinds of incidents; and what the potential losses are if those processes are disrupted for a day, a few days, a week, or longer.

"Start with a business impact analysis: What are the critical elements that make the business run?" recommends Lawrence Bilker, CIO of Lift Solutions Holdings. "Identify the business processes and the systems that keep the company running."

This assessment is more demanding than ever due to the complexity of today's hybrid workplace, the modern IT environment, and the reliance on business partners and third-party providers to run or support critical processes.

As a result, the assessment requires an inventory not only of key processes but also of the components that support them (including IT systems, networks, staff, and third-party providers), as well as the risks those components face, says Goh.
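The assess-and-rank exercise described above can be sketched as a simple scoring pass. A minimal illustration follows; the process names, loss figures, and vulnerability weights are all hypothetical examples, not figures from the article:

```python
# Minimal business-impact-analysis sketch: rank processes by expected loss.
# All process names and figures below are hypothetical.
processes = [
    {"name": "order fulfillment", "loss_per_day": 120_000, "vulnerability": 0.7},
    {"name": "payroll",           "loss_per_day": 40_000,  "vulnerability": 0.3},
    {"name": "marketing site",    "loss_per_day": 5_000,   "vulnerability": 0.5},
]

def expected_daily_loss(p):
    # Weight the raw downtime cost by how exposed the process is to disruption.
    return p["loss_per_day"] * p["vulnerability"]

# Most critical processes first: these get restored first in the plan.
ranked = sorted(processes, key=expected_daily_loss, reverse=True)
for p in ranked:
    print(f'{p["name"]}: expected loss ${expected_daily_loss(p):,.0f}/day')
```

In practice the weights would come from the business impact analysis interviews, but even a rough ranking like this forces the conversation about restore order.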

Determine your organization's RTO and RPO: The next step is to determine the organization's recovery time objective (RTO), which is the target time between the moment of failure and the resumption of operations, and its recovery point objective (RPO), which is the maximum amount of data loss the organization can withstand.

Every organization has its own RTO and RPO based on its business, industry, regulatory requirements, and other operational factors. Moreover, different parts of a company may have different RTOs and RPOs, which executives must establish.

Some companies "need to be up and running at all times without failure, so they need high availability, which means one or two backups," says Bilker.
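Both objectives translate directly into measurable checks after an incident. A minimal sketch, with hypothetical targets and timestamps, might compare what actually happened against the agreed RTO and RPO:

```python
from datetime import datetime, timedelta

# Hypothetical targets agreed with the business; every organization sets its own.
RTO = timedelta(hours=4)     # max tolerable time from failure to resumed operations
RPO = timedelta(minutes=15)  # max tolerable data loss, expressed as backup age

def continuity_check(failure_at, restored_at, last_backup_at):
    """Report whether an incident stayed within the RTO and RPO targets."""
    downtime = restored_at - failure_at
    data_loss_window = failure_at - last_backup_at
    return {
        "rto_met": downtime <= RTO,
        "rpo_met": data_loss_window <= RPO,
    }

result = continuity_check(
    failure_at=datetime(2026, 3, 1, 9, 0),
    restored_at=datetime(2026, 3, 1, 12, 30),    # 3.5 hours of downtime
    last_backup_at=datetime(2026, 3, 1, 8, 20),  # backup was 40 minutes old
)
print(result)
```

In this example the recovery beat the four-hour RTO, but the backup cadence missed the 15-minute RPO, which is exactly the kind of gap testing is meant to surface.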

Detail the steps, roles, and responsibilities for continuity: Business leaders must then use the RTO and RPO, together with their business impact analysis, to determine the specific tasks that must be performed, who must carry them out, and in what order, to ensure business continuity.

A common tool for business continuity planning is a checklist that covers supplies and equipment, the location of data backups and backup sites, where the plan is available and who should have it, and contact information for emergency responders, key personnel, and backup site providers.

It is not necessary to identify every possible risk to the organization when building or updating a business continuity plan, says Kayne McGladrey, a senior member of the IEEE, a nonprofit professional association.

The list of potential impact scenarios is long. Rather than trying to identify them all, McGladrey advises identifying the most likely and most representative types of incidents, then focusing on how those incidents could affect the business. From there, leaders should determine which impacts would be intolerable based on the organization's risk tolerance. "Think about business risks, not technical risks or causes, but business impacts," says McGladrey.

The goal, he stresses, is to create a business continuity plan capable of telling the organization how to recover from an unexpected event of any kind.

The importance of testing the business continuity plan

Testing and drills are other critical components of business continuity planning, as they show whether, and how well, a plan will work. They also help prepare stakeholders for a real incident, building the muscle memory needed to respond quickly and confidently during a crisis.

"Testing and staff training are fundamental so that everyone knows what to do in the event of a failure," Bilker notes.

They also help identify gaps in the plan. For example, Bilker notes that testing and training could reveal missing backups, or missing alternatives for critical systems, vendors, or people.

In addition, testing and training help identify where goals may be misaligned. For example, executives may have deprioritized restoring certain IT systems, only to realize during a drill that those systems are essential to supporting critical processes.

Types and frequency of testing

Many organizations test a business continuity plan two to four times a year. Experts say the frequency of testing, as well as of reviews and updates, depends on the organization: its industry, its pace of innovation and transformation, turnover among key personnel, the number of business processes, and so on.

Common tests include tabletop exercises, structured walk-throughs, and simulations. Test teams typically consist of the recovery coordinator and members from each functional unit.

A tabletop exercise usually takes place in a conference room, where the team combs through the plan, looks for gaps, and makes sure all business units are represented.

In a structured walk-through, team members review their components of the plan in detail to identify weaknesses. Often, the team runs the test with a specific disaster in mind. Some organizations incorporate disaster drills and role-playing into the structured walk-through. Any weaknesses should be corrected, and an updated plan should be distributed to all relevant personnel.

Some experts advise conducting a full emergency evacuation drill at least once a year.

Disaster simulation testing, which can be quite involved, should also be performed annually. For this test, create an environment that simulates an actual disaster, with all the equipment, supplies, and personnel (including business partners and vendors) that would be needed. The simulation helps determine whether the organization can carry out critical business functions during a real event.

During each phase of business continuity plan testing, include some new employees on the test team. Fresh eyes can catch gaps or missing information that experienced team members might overlook.

Reviewing and updating the business continuity plan should be an ongoing process. Otherwise, plans become outdated and are useless when they are needed. "How often it should be updated should be determined by the business," says Tisnovsky.

Bring key personnel together at least once a year to review the plan and discuss areas that need changes. Before the review, solicit feedback from staff to incorporate into the plan. Ask all departments and business units to review the plan, including branch offices and other remote units.

In addition, a strong business continuity function requires reviewing the organization's response when a real incident occurs. This lets executives and their teams identify what the organization did well and where it needs to improve.

Additional best practices

According to management advisors and experienced executives, the following best practices can help organizations with their business continuity planning:

Use AI to help build and maintain the plan: Zach Rossmiller, associate vice president and CIO of the University of Montana, uses a custom generative AI tool to analyze the organization's processes, procedures, infrastructure, and architecture, as well as its business continuity plan, to identify potential gaps, such as the need to test the university's data center generators. Given the tool's performance, Rossmiller advises others to use AI for business continuity planning and testing. Chevraux says AI can also be used for data discovery, mapping, and conducting business impact assessments.

For his part, Bilker stresses the importance of including communication plans as part of the business continuity plan. "During an incident it is hard to remember who receives what information and when, and who distributes the information, so the business continuity plan should spell that out," he says.

Likewise, the plan should identify who holds which roles and responsibilities during and after an incident, to speed the response and reduce confusion.

Bilker also advises organizations to revisit their continuity plans whenever there is a significant change in the business. Entering new markets or switching from one key cloud provider to another should trigger an update of the business continuity plan.

How to ensure support for and awareness of the business continuity plan

Every business continuity plan needs backing from the top. That means senior management must be represented when the plan is created and updated; no one can delegate that responsibility to subordinates. In addition, the plan is more likely to remain current and viable if senior leadership makes it a priority by dedicating time to proper review and testing.

Leadership is also key to promoting user awareness. If employees do not know about the plan, how will they react appropriately when every minute counts?

Although distributing the plan and delivering training may fall to business unit leaders or HR staff, it is advisable for someone in senior management to kick off the training and underscore its importance. This will have a greater impact on all employees, lending the plan greater credibility and urgency.

Your CEO just got AI FOMO. Here are 6 tips on what to do next.

Every CIO I know has had some version of this conversation: their CEO comes back from a golf trip with their buddy, or a conference with peers, and is told AI is about to automate everything at their company, from HR to marketing and finance. No humans in the loop, just AI. The CEO then calls an all-hands Monday morning, and the CIO is suddenly on the hook to make it all happen.

The instinct for CEOs to chase unsubstantiated claims is understandable, since they're responding to competitive pressure. But that leaves CIOs responsible for closing the gap between ambition and reality. Making AI work in an organization with decades of accumulated process, permission frameworks, and cultural inertia is very different from deploying it in a demo.

The best response isn't to push back on the ambition, but to redirect it. Translate the CEO's vision into an honest map of what has to happen for the organization to get there, including the infrastructure, governance, and training. That helps convert the knee-jerk compulsion to move faster into a concrete plan that leadership can get behind.

Here’s what CIOs should actually be focused on to get where their CEOs want them to go, regardless of what’s discussed on the links.

1. Start where AI can build its own credibility

The hype machine wants you to climb Everest on day one. Instead, identify the repetitive tasks where AI can prove itself on familiar ground — the workflows your team already knows well, where results are easy to verify and the bar for trust is attainable.

The goal is the Eureka moment when a skeptic on your team sees a real result and becomes a believer. Those moments compound. When someone has seen AI make their work easier in a context they understand, they’re more likely to help you move things forward. You can’t force that change, but you can engineer the conditions for it.

2. Models will commoditize. Context will not.

Every few months, a new model claims to be smarter, faster, and cheaper than the last one. Don't be distracted by that race. The lasting advantage in enterprise AI comes less from which model you're running than from the quality, governance, and semantic clarity of the data feeding it. Enterprises that invest in consistent business definitions, well-structured data, and clear lineage will outperform those that don't, regardless of which model is in fashion. Context is your competitive moat. Focus on building that.

3. Nail down the permissions

In a world of dashboards, you know exactly what data will appear on a given page, so you can set permissions in advance for who can access it. In an AI world, the system can generate outputs that were never pre-designed. So how do you determine who has the right to see a result that was never anticipated?

Before deploying any agent that acts on someone’s behalf, such as filing a request, surfacing payroll data, or populating a record, first determine whether your existing permissions and access control frameworks can handle outputs that were never planned for. Most can’t. This is a prerequisite of what your CEO is asking for: the unglamorous infrastructure work that determines whether your AI is trustworthy in production. It needs to happen before you scale, not after.
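One way to reason about that prerequisite is a deny-by-default rule: an AI-generated answer is shown only if the requester is permitted to read every source record it drew on. The sketch below is purely illustrative; the user names, record IDs, and ACL structure are hypothetical, not any real product's API:

```python
# Deny-by-default access check for AI-generated outputs (hypothetical sketch).
# An answer is shown only if the requester may read every record it used.

# Hypothetical ACL: which record IDs each user is allowed to read.
ACL = {
    "alice": {"hr-1001", "fin-2002"},
    "bob":   {"hr-1001"},
}

def can_view(user, source_record_ids):
    """Allow an AI output only if ALL of its source records are permitted."""
    allowed = ACL.get(user, set())  # unknown users get an empty set: denied
    return set(source_record_ids) <= allowed

# Suppose an agent synthesized its answer from an HR record and a payroll record.
sources = ["hr-1001", "fin-2002"]
print(can_view("alice", sources))  # True: alice may read both source records
print(can_view("bob", sources))    # False: bob lacks access to fin-2002
```

The point of the sketch is the direction of the check: permissions follow the provenance of the output, not a pre-designed page, which is exactly what dashboard-era access control never had to do.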

4. Build an editing culture, not a writing one

For decades, engineers, analysts, and operations teams have been trained to write code, build reports, and define new processes. AI upends that. The skill now is editing — auditing what the system produces, catching what it got wrong, and knowing where to push back.

The truth is most people aren’t naturally good at editing because they’ve never had to be. That’s a skills gap that needs to be closed early on. Invest in helping engineers, analysts, and managers develop the judgment to evaluate AI outputs, not just generate them. Editing must become a core enterprise competency.

5. Measure behavior change, not tool adoption

Login data is a vanity metric. If your engineers are accessing AI coding tools but aren’t changing how they build, you haven’t adopted anything. The metric that makes more sense is productivity output. In agile terms, a team that completes 20 story points per sprint should hit about 28 with AI, not because the tools are magic, but because the repetitive work gets faster. If you’re not seeing that, you’re measuring the wrong thing. Pay attention to output, not usage metrics.
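The story-point expectation above is simple arithmetic, but making it explicit turns it into a measurable check. In this sketch the 40% uplift target mirrors the author's 20-to-28 example; it is an illustrative figure, not an industry benchmark:

```python
# Behavior-change check: compare delivered velocity against an uplift target.
# The 40% target mirrors the 20 -> 28 story point example in the text.
UPLIFT_TARGET = 0.40

def velocity_gap(baseline_points, actual_points):
    """Return the surplus (positive) or shortfall (negative) versus the
    expected AI-assisted velocity."""
    expected = baseline_points * (1 + UPLIFT_TARGET)
    return actual_points - expected

print(velocity_gap(20, 28))  # 0.0: the team hit the expected 28 points
print(velocity_gap(20, 22))  # -6.0: logins may be up, but output barely moved
```

A persistent negative gap with high tool usage is the signal the section describes: adoption on paper, no behavior change in practice.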

6. Reframe your organization’s relationship with failure

The instinct to de-risk everything made sense when software deployments were expensive and slow to reverse. AI works differently. The outputs are probabilistic, the iteration cycles are fast, and being overly cautious can cost valuable time. CIOs need to give teams permission to experiment in ways that feel uncomfortable by traditional enterprise standards, all while building the feedback loops that make fast failure safe. That culture shift has to be modeled from the top.

FOMO isn’t going away

CEOs will keep getting pulled into cycles of urgency and FOMO, and that pressure will keep landing on CIOs. The organizations that make real progress will be the ones that redirect that energy into infrastructure that makes AI trustworthy, measurement systems that show what’s working, and cultural changes that make adoption stick. That’s the agenda that’ll move your organization forward.

AI sprawl: Why your productivity trap is about to get expensive

I have seen this movie before.

A decade ago, at Tesla, our Finance team faced a data crisis. We had information scattered across accounting, supply chain and delivery systems, all disconnected, all using different structures. The engineering team was rightfully focused on Full Self-Driving (FSD) and manufacturing. So, we did what productivity-hungry teams always do: We built our own solution. We taught ourselves Structured Query Language (SQL), normalized the data with creative IF-THEN logic and created our own reporting database.

It worked beautifully. Until it became a governance nightmare. The VP of Engineering hated our siloed system with embedded business logic. We eventually handed it over to IT, but not before our workaround forced the company to finally resource a proper data team.

The pattern is always the same: Productivity-hungry teams build workarounds faster than the organization can govern them, and by the time leadership notices, the workarounds have become the infrastructure.

That was more than a decade ago. The pattern took years to unfold.

Today, I am watching the exact same dynamic play out in insurance and industries across the board, but compressed into months, not years. AI adoption is sprawling across organizations, led by the same productivity-hungry individuals, but without central platforms or governance. Leadership has not created space for safe experimentation, so adoption spreads like a city without a highway system. The difference? Back then, we were building SQL databases. In 2026, we are building AI agents. And the cost of fragmentation is exponentially higher.

What is AI sprawl?

AI sprawl is what happens when the cost of building AI drops faster than an organization can govern it. Teams spin up models, agents and automations independently. Each one works in isolation. None of them connect. The result is fragmented data, drifting decisions and intelligent systems that quietly get abandoned.

It happens because execution has become cheap. Large Language Model (LLM) APIs, no-code tools and cloud infrastructure have made spinning up AI trivially easy. A claims team builds an automation to speed adjudication. Underwriting builds a model to assess risk. Customer service deploys a chatbot. Each initiative delivers local value. No single project looks like a problem.

But collectively, they create an ungovernable landscape.

Over the past 18 months, the GenAI acceleration intensified what IDC calls the GenAI scramble: scattered, fragmented and sometimes redundant applications launched by business-led initiatives without central oversight. Many organizations have fallen into what researchers describe as a productivity trap: Focusing on short-sighted value generation instead of scalability, which limits their ability to create reusable capabilities across departments.

AI sprawl is everywhere

A major property and casualty carrier recently invited us to speak with their innovation leadership about implementing process automation. We spoke with more than 10 key stakeholders across multiple lines of business and found more than a dozen different POCs and local solutions across claims intake, underwriting and fraud detection.

Six of them were solving overlapping problems. None shared data infrastructure. Two had been abandoned months earlier but were still running and still being billed.

This is not an outlier. It is the norm.

AI sprawl persists because it is insidious, hiding in plain sight unless you look for it. Business units move fast, build independently and solve immediate problems. IT discovers shadow AI only when something breaks, when an audit is triggered or when a vendor renewal surfaces a tool nobody knew existed. And the symptom multiplies with every innovative team in the organization.

The 4 hidden costs of sprawl

AI sprawl creates costs that compound over time, many of which are not visible in any single budget line. It results in a dangerous cascade of failures:

  1. Governance becomes impossible. Companies cannot govern what they cannot see. When AI systems scatter across departments, audit trails fragment. Bias monitoring becomes inconsistent. Explainability standards vary by team.
  2. Scaling stalls. Disconnected systems cannot integrate. Every new initiative starts from scratch instead of building on shared infrastructure.
  3. Maintenance and redundant spending multiply. Teams that built AI to accelerate their work end up spending most of their time maintaining it. One carrier reported that 60% of their AI engineering capacity was devoted to maintaining existing tools rather than building new capabilities. Meanwhile, teams unknowingly pay for overlapping capabilities because nobody has a complete view of AI spending.
  4. Talent drains away. The best AI engineers want to solve hard problems. When they are cornered into spending their time maintaining fragmented infrastructure, they walk out the door.

Why traditional governance fails

Seventy percent of large insurers are investing in AI governance frameworks. Yet only 5% have mature frameworks in place. This gap is not about commitment or resources. It is about a category mistake.

For the last two decades, enterprise software governance worked because the software itself worked a certain way. Systems were point solutions. A claims platform did claims. A policy admin system did policy admin. Each tool had a clear owner, a defined scope and a predictable boundary. Governance could wrap around the edges, through access controls, audit logs, change management, vendor reviews, because the edges were visible. We governed the perimeter because the perimeter was the product.

AI is not a point solution. It is foundational technology, closer to electricity or a database than to a piece of software. It does not sit inside a defined boundary; it flows across every process, every decision and every department that touches data. And because it flows, it cannot be governed at the perimeter.

This is why carriers applying the old playbook keep running in place. Policy documents, oversight committees and compliance checklists were designed to govern systems that stood still. AI does not stand still. It is built, modified, retrained and extended by the same teams it is meant to serve, often in the same week. By the time a governance committee reviews it, three more versions exist somewhere else in the organization.

The failure is not that carriers are governing AI badly. It is that they are governing it as if it were software, when it’s actually infrastructure. Infrastructure requires a different discipline: Shared foundations, common standards and the assumption that everyone will build on top of it. You do not govern electricity by reviewing each appliance. You govern it by standardizing the grid.

Until carriers make that shift, their frameworks will keep maturing on paper while sprawl compounds underneath.

3 questions every insurance CIO should be able to answer

If the failure of traditional governance is a category mistake, the first job of leadership is to check which category they are actually operating in. These three questions are not meant to produce tidy answers. They are meant to reveal whether you are still governing AI as software when you should be governing it as infrastructure.

1. Are you governing AI at the perimeter, or at the foundation?

Look at your current AI governance artifacts, such as the policies, the committees, the review processes. Are they designed to wrap around individual tools after they are built, or to set shared standards that every tool must be built on top of? Perimeter governance asks, "Is this specific model compliant?" Foundational governance asks, "Does every model in this organization inherit the same definitions, the same lineage and the same guardrails by default?" If your governance only kicks in at review time, you're still treating AI like software. You're already behind.

2. If you standardized one thing across your entire organization tomorrow, what would create the most leverage and why haven’t you?

Every carrier has a list of things they know should be standardized but have not been. Shared definitions for core entities. Common ways of handling unstructured inputs. A single source of truth for how decisions get logged. The question is not which item belongs at the top of the list; most CIOs already know. The question is what has been blocking the standardization: Is it political, budgetary, or organizational? Because that blocker, whatever it is, is also what is letting sprawl compound. Governance frameworks cannot fix what foundational decisions have been deferred.

3. When a new AI initiative launches next quarter, what will it automatically inherit from what already exists?

This is the real test. In a point-solution world, every new system is built fresh and governance is applied afterward. In a foundational world, every new system inherits shared standards, shared definitions, shared oversight before a single line of code is written. If the honest answer is “it will inherit nothing, and we will govern it after the fact,” then you do not have an AI governance problem. You have an AI foundation problem, and no amount of policy will close the gap.

The uncomfortable truth is that most carriers will answer these questions honestly and discover they are still operating from the old playbook. It is a signal that the work to be done is not more governance, but different governance, the kind that assumes AI is the ground floor, not the top floor.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

The CIO succession gap nobody admits

I have sat with three CIOs in the last two years who wanted to leave their seat and could not. One was being recruited into a larger enterprise role. One was ready to retire. One had been offered a board seat that required stepping down. In every case, the same thing stopped them. When the CEO asked who could step in, the CIO could not give a credible name. The person they had been calling their number two was technically brilliant and operationally reliable, but every one of them had been groomed into an architect, not a leader. The board would not approve an external hire during an active transformation. So the CIO stayed. One of them is still stuck.

The CIO role has the weakest succession bench in the C-suite, and most CIOs discover it the same way those three did. Not during a quarterly talent review. Not during a board retreat. They discover it the moment they try to leave. By then, the decision is already made for them. This is a leadership design problem CIOs build into their own orgs, and most confront it only when it is too late to fix quickly.

The architect trap

I have watched the same pattern form in almost every IT organization I have worked in. The people who rise to the top of the CIO’s direct reports are the ones who can hold the most architectural complexity in their heads. They are the ones the CIO trusts with the platform decisions, the vendor consolidations, the integration maps. They earn that trust legitimately. They are excellent at what they do.

But architectural trust is a different currency than leadership trust. When a CIO promotes based on architectural depth, what they get is a deputy who can design the org but cannot run it. I have seen deputies who have never owned a P&L conversation with a CFO. Deputies who have never delivered hard news to a business unit president. Deputies who have never had to defend a budget line item in a room full of people trying to take it from them. They were not hiding from those conversations. The CIO was holding the conversations for them because the CIO was good at those conversations and the deputy was good at the architecture.

The result is a bench that looks deep from inside the IT org and looks empty from the boardroom. I have watched a CEO walk out of a succession conversation saying, “I like your people, but I cannot see any of them in your chair.” That is not a compliment to the CIO. That is a verdict on how the CIO built the team.

Three moves I make before I need them

After watching this happen enough times, I stopped treating succession as something I would address later and started treating it as a design choice I had to make inside my first year. I changed how I build the bench in three ways, and I make each move early enough that the person has time to grow into it or fail out of it.

First, I give them a standing decision domain, not a “next in line” title. A deputy who is told they are being groomed for the CIO seat will manage their career instead of their work. A deputy who is given full authority over, say, all vendor escalations above a defined threshold will start making real decisions in real rooms with real consequences. That is where judgment gets built. The domain has to be something I would otherwise own myself. If I am still approving everything inside it, I am building a forwarder, not a successor.

Second, I put them in rooms where they have to lose something. One of the most damaging things a CIO can do is protect a high-potential deputy from conflict. I used to do this without realizing it. I would pull the hard conversations back to my level because I wanted to spare the deputy the political damage. The deputy came out looking clean and came out completely unprepared. Now I deliberately put deputies into conversations where they have to defend a position against a peer executive who will push back hard. Sometimes they hold the line. Sometimes they fold. Either outcome tells me something I needed to know before anyone was counting on them.

Third, I make the bench visible to the board before I have to. If the board does not know my top two or three deputies by name and track record, I do not have a succession plan. I have private notes. The CIOs I described at the beginning of this article all had deputies they believed in. None of those deputies had ever presented to the board on anything substantive. The board had no reference point. So when the succession question came up, the deputies did not exist in the board’s imagination, and the CIO’s personal endorsement was not enough to create them.

The first time I put a deputy in front of the board, they came back different. The board did not go easy on them. They came back knowing what a board conversation actually feels like, which meant the next one would not be a first impression. The board needs reps with my deputies before the seat is vacant. Once it is vacant, the reps are a job interview and a job interview is not where anyone does their best work.

What the gap actually costs

The cost of a shallow bench is not abstract. I have seen CIOs delay their own career moves by eighteen months or longer because they could not produce a credible successor. I have seen organizations pay two and a half times market to hire externally because the internal candidate did not survive a board interview. I have seen transformations stall because the CIO could not delegate enough to step back and think, because there was no one qualified to hold what they put down.

The cost to the deputies is also real. The architect-track deputy who spends six or seven years being the CIO’s most trusted technical lieutenant, and then gets passed over for the CIO role because the board does not see a leader, rarely recovers that momentum. Some of them leave. Some of them stay and quietly disengage. A few of them become the reason the new CIO’s first ninety days are harder than they should be. None of that is the deputy’s fault. It is the consequence of a design choice the previous CIO made years earlier, usually without knowing they were making it.

CIO.com has published strong guidance on this, including work on “grow your own CIO” strategies that treat succession as a deliberate pipeline rather than an accident of tenure.

The test is simple. If you had to leave in ninety days, could you hand the CEO a name and get a nod? If you cannot picture that nod, you do not have a successor. You have a list of people you like and trust, which is not the same thing. The successor you can actually name is the one you built on purpose, not the one who happened to look ready when the chair emptied. I have learned this by watching peers run out of time to build what they meant to build. I am trying not to be one of them.


The AI agent management battle in the multicloud era: Microsoft and Google diverge in strategy

Microsoft and Google are both strengthening controls over AI agents, aiming to help enterprise IT organizations keep pace with tools that access corporate data and carry out work across a range of business applications.

On May 1, Microsoft made Agent 365 generally available to enterprise customers. The service helps organizations discover, manage and secure AI agents. Notably, it covers not only agents running in Microsoft environments but also those operating across third-party SaaS, cloud and on-premises environments.

Google, for its part, announced an AI control center for Workspace on the 4th. The feature focuses on providing a central, unified view of AI usage, security settings, data protection policies and privacy safeguards.

The timing of these announcements reflects a change in how enterprises use AI. Many companies are no longer stuck in the chatbot-testing phase; they are moving in earnest to deploy agents that access corporate systems and perform tasks on behalf of users.

That change also affects how CIOs and CISOs view AI agents inside the enterprise.

“By placing agent controls alongside identity, access, data and workload management, vendors are positioning AI governance as an operational domain jointly owned by IT and security organizations,” said Biswajeet Mahapatra, principal analyst at Forrester. “For CIOs, AI agents must be managed like any other digital workforce, with lifecycle management, cost visibility and integration into service management frameworks.”

The CISO’s role is expanding as well. Beyond traditional model risk and data-leak response, security leaders now need mechanisms to continuously control the behavior of increasingly autonomous agents and to minimize the impact when risks materialize.

“AI governance is emerging as a core component of every AI-powered enterprise application,” said Lian Jye Su, chief analyst at Omdia. “As deployments expand beyond pilots to the whole enterprise, governance must be built in from the earliest stages of AI development.”

How Microsoft and Google differ

Microsoft’s Agent 365 and Google’s AI control center address similar governance problems, but they start from different places.

“Considering that enterprises are increasingly adopting AI across multicloud and hybrid IT environments, the two approaches are complementary,” Omdia’s Su said. “Each is optimized for AI workloads in its own environment, so organizations heavily invested in a particular vendor will find the native AI governance experience far smoother.”

Forrester’s Mahapatra framed the difference as a matter of platform scope rather than governance maturity: Microsoft treats AI agents as enterprise actors to be managed across the organization, while Google tends to focus more on how AI operates within collaboration data and user content.

“The two approaches address different control domains, so they are not strictly competitive,” Mahapatra said, “but unless an enterprise standardizes on both ecosystems at once, they are not fully complementary either.” He added, “Over time, as each model becomes more tightly coupled to its vendor’s productivity and data platform, there is a growing risk that AI governance decisions will be driven by a particular vendor choice rather than by enterprise architecture strategy.”

Pareekh Jain, CEO of Pareekh Consulting, offered a more neutral view. “The two approaches are complementary and competitive at the same time,” Jain said. “Especially for enterprises that use both Microsoft and Google, AI governance is likely to become more tightly bound to each vendor’s underlying platform.”

The risks that remain

Analysts note that while the new controls help enterprises get a better handle on AI agents, they do not resolve the larger risks: shadow AI, third-party integrations and accountability for autonomous actions.

Jain pointed out that shadow AI agents can still emerge through developer tools, browser extensions, local assistants, SaaS copilots and unsanctioned integrations. Third-party integrations, he added, may also spread faster than security reviews can keep up.

“Audit logs show what happened, but they cannot always explain why an autonomous agent chose a particular action,” Jain said.

As a result, when an agent takes an action that creates business or security risk, enterprises face hard questions about control and accountability. Better logging, in other words, does not automatically solve those problems.

Forrester’s Mahapatra said the biggest gaps are likely to arise outside the native platforms. Shadow agents created through low-code tools, external APIs and SaaS applications can bypass central controls and operate with excessive or inherited permissions.

“Third-party integrations extend an agent’s reach, but visibility into the resulting behavior and data propagation often does not keep pace,” Mahapatra said. “When agents chain actions across multiple systems, auditability is uneven, making it hard to distinguish intent from outcome, and accountability remains unclear when an autonomous agent causes real business or security impact.”

The experts broadly agree: the native controls from Microsoft and Google help, but they cannot fully cover the entire AI agent landscape. Enterprises that combine multicloud, a variety of SaaS, development platforms and browser-based AI assistants will need a governance framework that extends beyond any single vendor console.
dl-ciokorea@foundryco.com

Beyond prevention: Protecting patient care through cyber recovery

Cyberattacks in healthcare can be operational crises that disrupt care delivery, delay procedures, and put patient safety at risk. As ransomware and data breaches continue to escalate, healthcare leaders are being forced to rethink what resilience actually means in practice.

For years, resilience was defined largely by prevention. But in healthcare environments shaped by legacy systems, complex clinical applications, and strict regulatory requirements, prevention alone is insufficient. Organizations now have to assume disruption will occur. The real measure of resilience is how quickly and safely they can recover.

That change reflects the reality of healthcare data environments, which are uniquely complex. Many organizations are still running legacy applications that support critical workflows, while also managing the fallout from years of mergers and acquisitions that have left behind fragmented systems and inconsistent data architectures. At the same time, many critical applications do not have well-defined recovery objectives, leaving significant gaps when incidents occur. 

In this context, recovery speed and data integrity carry far greater consequences than in most other industries. Delays are more than an inconvenience and can directly impact clinical decision-making and, in extreme cases, patient outcomes.

Restoring systems quickly is essential, but doing it correctly is just as critical. Inaccurate or incomplete data introduces new risks at the exact moment organizations are trying to stabilize operations.

Where healthcare resilience breaks down

While infrastructure or data may be recoverable in theory, executing recovery in a way that maintains compliance and protects sensitive patient data is far more difficult.

The challenges are layered. Limited budgets and staffing make it difficult to build and maintain robust recovery strategies. Data itself is highly complex, spanning structured and unstructured formats across diverse systems. And acquired datasets from mergers often arrive with limited documentation and immature architectures, creating ongoing operational friction. 

These issues are compounded by broader industry pressures. Healthcare providers are expected to modernize infrastructure, adopt cloud technologies, and improve efficiency while operating under tight financial constraints. Legacy systems slow down progress, but replacing them introduces new risks, particularly when recovery processes are not fully aligned across environments. As a result, the traditional separation between backup, security, and compliance is breaking down.

Forward-thinking organizations are moving toward a more integrated model that brings these functions together into a unified recovery strategy.

According to the most recent FBI annual internet crime report, criminals are posing as legitimate health insurers and fraud investigators to commit healthcare fraud. The FBI also found that the healthcare and public health sector is among those most impacted by ransomware.

This is where the combined strengths of Cognizant and Rubrik become most compelling. Cognizant brings deep healthcare domain expertise and a proven track record in designing infrastructure strategies that address regulatory, operational, and clinical realities. Rubrik complements this with advanced capabilities in cyber recovery, sensitive data discovery, and ransomware resilience. Together, they enable a fundamental shift — from reactive backup management to a proactive, application-led recovery model — spanning multi-cloud environments and helping healthcare organizations restore critical systems rapidly, while preserving data integrity and maintaining compliance.

Over the next year, healthcare IT leaders must prioritize resilience and treat it as both a cyber and data challenge. That means adopting solutions that support faster implementation, tighter operational control, and measurable ROI.

Also, they must build recovery strategies that can withstand real-world disruption without compromising patient care — because in healthcare, resilience is about maintaining trust and ensuring that care can continue under pressure.

Discover how Cognizant and Rubrik are helping healthcare organizations recover faster, stay compliant, and keep patient care moving forward.

Why a modern data foundation takes more than a new platform

Too many data modernization efforts begin with the platform. The conversation turns to replacing the underlying data environment, moving reporting workloads to the cloud or retiring legacy tooling. Those decisions matter, but in my experience, they are rarely what makes the work hard.

What makes the work hard is everything that has built up around the platform over time.

I have seen this most often in organizations that inherited legacy architecture through acquisition, accumulated technical debt through years of deferred investment or saw reporting logic and master data evolve without enough enterprise discipline. On the surface, the environment may still appear functional. Dashboards are still refreshing. Reports still go out. Teams still find ways to get numbers. But once the business begins to scale, the weaknesses become much harder to hide.

The warning signs usually appear before the platform itself becomes the problem. Different teams start using different numbers for the same KPI, critical reporting logic begins to live outside core systems and analysts spend more time reconciling data than interpreting it. New business units take longer to onboard, reporting changes become harder than they should be and, before long, the issue is no longer just the data platform. It becomes a broader problem of trust, scalability and control.

That is why so many modernization efforts are scoped too narrowly. Replacing the platform is only one part of the challenge. The real work is untangling years of logic, definitions and integration patterns that were never designed to scale together.

The platform is only one layer of the problem

One of the clearest lessons I have learned is that legacy data environments rarely fail in an isolated way. They fail by becoming harder to trust and harder to change.

In many environments, the data platform is carrying far more than data. It is carrying years of workarounds for things that source systems were never able to handle cleanly. Reporting logic ends up split across ETL jobs, SQL transformations, scripts, spreadsheets and side databases. Some of it was built quickly to solve immediate business needs. Some of it was necessary at the time. But over time, those decisions create duplicated logic, hidden dependencies and handoffs that become harder to govern every time the business changes.

The issue is not only technical debt in the traditional sense. It is also reporting debt, where inconsistent definitions and duplicated logic across reports make data harder to trust and maintain. KPI definitions evolve differently across functions. Business logic gets embedded in too many places. Teams build local workarounds to compensate for mismatched source data. The business keeps moving, but the data foundation falls further behind.

That is why I think CIOs need to treat modernization less like a platform replacement and more like an effort to restore architectural separation and control.

In practice, that means separating ingestion, transformation and reporting instead of allowing all three to collapse into the same layer. It means reducing the number of places where business logic can live. It means establishing a clear source of truth for key metrics before they show up in executive dashboards. It also means making sure master data is defined consistently enough that teams are not comparing duplicate records or conflicting definitions and assuming the platform is to blame.

Fit matters more than feature depth

Platform decisions are often misunderstood.

On paper, most modern data platforms are capable. They all promise scale, flexibility and performance. But in practice, the decision is rarely about capability alone. It is about fit.

In recent modernization work, I have seen firsthand that the wrong decision is not always choosing an inferior technology. More often, it is choosing a platform that introduces unnecessary complexity into an environment that is already fragmented.

That complexity shows up quickly in the form of another cloud to manage, another billing model to track, another toolchain to support, another integration layer to maintain, another set of skills to build and another governance surface to control.

Those costs do not always show up clearly in vendor comparisons, but they show up immediately in execution.

That is why I have become more disciplined about asking a different question. Not what is the most powerful platform on paper, but what choice best aligns with the operating model, capabilities and simplification goals of the enterprise.

There is no one-size-fits-all answer. For some organizations, a separate cloud-native warehouse may make perfect sense. For others, a more unified platform approach is the better fit because it leverages current skills, preserves momentum and avoids duplicating effort inside an ongoing modernization program.

That distinction matters.

The goal is not to build the most theoretically flexible architecture. It is to build one where the organization can actually govern, extend and operate over time.

Master data is where credibility starts

Modernization does not become credible until master data starts to improve.

That is not a side effort. It is part of the foundation.

In many enterprises, the root problem is not just the reporting layer. It is the fact that core entities such as customers, products, suppliers and locations are still defined differently across systems. When that happens, every downstream discussion about trust, reporting consistency and AI readiness becomes harder than it should be.

One area where this becomes tangible is syndication and deduplication. In most legacy environments, the same customer, product or supplier exists multiple times across systems, often with slight variations in naming, attributes or hierarchy. Over time, teams build local workarounds to compensate, which only reinforces the fragmentation.

Deduplication is not just a technical exercise. It forces alignment to what defines a unique entity. Syndication operationalizes that alignment, ensuring that once data is standardized, it is consistently distributed across systems and downstream processes. Without both, organizations end up maintaining multiple versions of the same truth and the platform becomes harder to trust regardless of how modern it is.
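To make this concrete, here is a minimal sketch of that dedup-then-align step. The field names, the normalization rule and the sample records are all hypothetical; a real match engine would use richer attributes and fuzzy matching, but the discipline is the same: agree on what defines a unique entity, then collapse variants into one golden record while preserving lineage back to each source system.

```python
# Minimal master-data deduplication sketch (hypothetical fields and rule):
# normalize a name attribute, collapse records sharing the normalized key,
# and keep source-system IDs so lineage is preserved for syndication.
from collections import defaultdict

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation and legal suffixes."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    tokens = [t for t in cleaned.split() if t not in {"inc", "llc", "ltd", "co"}]
    return " ".join(tokens)

def dedupe(records: list[dict]) -> list[dict]:
    """Group records by normalized name; one survivor per group, with the
    merged list of source-system IDs attached for downstream syndication."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize(rec["name"])].append(rec)
    survivors = []
    for recs in groups.values():
        survivor = dict(recs[0])
        survivor["source_ids"] = sorted(r["id"] for r in recs)
        survivors.append(survivor)
    return survivors

customers = [
    {"id": "CRM-001", "name": "Acme, Inc."},
    {"id": "ERP-817", "name": "ACME Inc"},
    {"id": "CRM-042", "name": "Globex Ltd."},
]
golden = dedupe(customers)
# The two Acme variants collapse into one golden record with both source IDs.
```

The `source_ids` list is what syndication then relies on: once the golden record is the agreed truth, it can be pushed back out to each contributing system.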

That is why I keep coming back to master data discipline. If important reports are not built on agreed business definitions and trusted logic, leaders end up looking at different versions of the same KPI. If customers, products and suppliers are not defined consistently across the business, the platform may look modern while the reporting remains hard to trust.

That is also why phased execution matters. Master data does not have to be fully resolved upfront, but it does need to be mature enough in the right domains to support the first releases and give the organization a foundation it can extend with confidence.

A modern foundation has to be engineered for change

What has worked best in my experience is a disciplined architecture that separates ingestion, transformation and reporting instead of mixing them together in ways that are hard to maintain.

That is where the medallion model becomes practical, giving the organization a structured way to separate raw data, standardized data and business-facing reporting. Bronze is where data first comes in from different systems. Silver is where it gets standardized, so the business is not working from conflicting definitions or duplicate records. Gold is where reporting and KPIs can sit on a more trusted foundation. That separation makes the environment easier to scale, troubleshoot and govern over time. The value is not in terminology, but in the discipline behind it.
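As a minimal illustration of that separation (with hypothetical schemas and values), the layering can be sketched as three distinct stages: raw rows land in bronze as-is, silver standardizes them to one schema, and gold computes a KPI from exactly one shared definition.

```python
# Illustrative medallion-style layering (all names and values hypothetical):
# bronze holds raw rows as they arrive from two source systems, silver
# standardizes field names and types, and gold exposes one agreed KPI
# definition instead of per-team variants.

bronze = [  # raw ingests, source schemas preserved as-is
    {"src": "erp", "cust": "ACME Inc", "amt": "1200.50"},
    {"src": "crm", "customer_name": "Acme, Inc.", "amount": 300.0},
]

def to_silver(row: dict) -> dict:
    """Standardize names and types so downstream logic sees one schema."""
    name = row.get("cust") or row.get("customer_name")
    amount = float(row.get("amt") or row.get("amount"))
    return {"customer": name.strip(), "amount": amount}

silver = [to_silver(r) for r in bronze]

def gold_revenue(rows: list[dict]) -> float:
    """One shared KPI definition: total revenue, computed in one place."""
    return round(sum(r["amount"] for r in rows), 2)

print(gold_revenue(silver))  # 1500.5
```

The point is not the code itself but the boundary it enforces: reporting never reads bronze directly, and no dashboard carries its own private copy of the revenue formula.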

I have seen organizations modernize into cloud data warehouses, data lakes and lakehouse architectures. The pattern is the same. If the underlying logic, master data and governance are still fragmented, the new platform inherits the same trust problems as the old one.

That same discipline has to carry through to the platform itself. If the environment is going to hold up under growth, the pipelines have to be observable, versioned and resilient enough to support change without constant rework. Environment separation, CI/CD workflows and operational monitoring are not extras. They are part of what makes the platform sustainable.
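To show what "observable and resilient" can mean at the smallest scale, here is a sketch of a pipeline step wrapper (the `run_step` helper and the flaky source are hypothetical): structured logging makes every attempt visible, and bounded retries absorb transient failures without manual rework.

```python
# Minimal sketch of an observable, resilient pipeline step (hypothetical
# names): log every attempt, retry with simple backoff, re-raise if the
# step keeps failing so the orchestrator can surface the incident.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_step(name: str, fn, retries: int = 3, backoff: float = 0.1):
    """Run one pipeline step with retries; failures are logged, not hidden."""
    for attempt in range(1, retries + 1):
        try:
            result = fn()
            log.info("step=%s attempt=%d status=ok", name, attempt)
            return result
        except Exception as exc:
            log.warning("step=%s attempt=%d error=%s", name, attempt, exc)
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff

calls = {"n": 0}
def flaky_load():
    """Simulated source that fails once, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient source outage")
    return 42

value = run_step("load_orders", flaky_load)
```

In a real environment the same idea lives in the orchestrator and monitoring stack rather than a helper function, but the design choice is identical: failures should be logged, bounded and recoverable by default.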

I also would not lead a modernization effort with AI, even when the pressure is high. AI raises the stakes, but it does not change the core problem. If the data foundation is still fragmented, poorly governed or inconsistent, a new AI layer will not solve it. That is increasingly showing up in the market, with Gartner warning that many generative AI efforts will stall because of poor data quality, inadequate risk controls, escalating costs or unclear business value. Foundry’s latest AI research reinforces this, identifying data storage and management as a top foundational investment for internal AI.

Final thought

The technology will continue to evolve.

The organizations that benefit most will not be the ones chasing every new platform. They will be the ones making disciplined decisions about how those platforms fit into their operating model and executing against them consistently.

Modernization does not fail because the technology is not good enough.

It struggles when the decisions behind it are not grounded in how the business actually runs.


Why the future of software is no longer written — it is architected, governed and continuously learned

We are entering a decade where software is no longer just an enabler of business — it is the primary mechanism through which intelligence is created, scaled and monetized across the enterprise.

For CIOs, this is not another technology cycle. This is a leadership inflection point.

Across boardrooms, investor discussions and strategic planning sessions, the conversation is shifting rapidly:

  • From “How fast can we build software?”
  • To “How intelligently can we design, govern and scale decision systems?”

This is a fundamental reframing of the CIO mandate.

The organizations that recognize this shift early will not just move faster — they will compound intelligence faster, creating asymmetric advantage in markets where speed alone is no longer sufficient.

The following perspective must therefore be read not as a technology trend, but as a strategic operating model shift for CIOs entering 2026 and beyond.

The next inflection point: Software development is no longer about code

Over the past two decades, software development has evolved through predictable phases — manual coding, agile acceleration, cloud-native scaling and DevOps automation. But as we enter 2026, that trajectory is no longer linear.

We are now witnessing a structural break.

Generative AI and agentic systems are not simply accelerating development — they are redefining the very nature of software creation, ownership and accountability.

This shift mirrors the broader transformation outlined in the CIO 3.0 paradigm (see “CXO 3.0: How intelligent leadership will redefine enterprise value”), where technology leadership has moved from operating systems to architecting enterprise intelligence itself.

In software development, this translates into a fundamental question for boards, CIOs, CTOs, CISOs and chief AI officers (CAIOs): Are we still building software or are we now orchestrating intelligence systems that build themselves?

What makes this transition particularly consequential is that it is already happening quietly but decisively.

Across high-performing organizations:

  • AI-generated code is already contributing meaningfully to production systems
  • Development cycles are compressing from weeks to days — and in some cases, hours
  • Decision-making is increasingly embedded directly into software systems rather than layered on top

Yet, in many enterprises, governance, accountability and operating models have not kept pace.

This gap between capability acceleration and governance maturity is where both the greatest opportunity and the greatest risk now reside.

2 forces reshaping software development in 2026

1. AI across the full software development lifecycle (SDLC)

Generative AI has moved beyond coding assistance into end-to-end lifecycle orchestration, consistent with broader enterprise AI adoption trends where organizations are embedding AI across multiple functions (McKinsey, “The state of AI in 2025: Agents, innovation and transformation”):

  • Planning & Design → AI-driven requirements synthesis, architecture generation
  • Development → Code generation, refactoring, pattern enforcement
  • Testing → Autonomous test case creation and validation
  • Deployment → Intelligent CI/CD pipelines with adaptive optimization
  • Maintenance → Self-healing systems, anomaly detection, auto-remediation

The developer is no longer just a coder. The developer is becoming a curator of intent, constraints and outcomes.

The compression of the SDLC

What historically required:

  • Weeks of design
  • Months of development
  • Iterative testing cycles

Can now be orchestrated through multi-agent AI systems operating in parallel.

This introduces a new dynamic: Software development is no longer a sequential process — it is becoming a continuously adaptive system.

For CIOs, this means:

  • Traditional governance checkpoints may become bottlenecks
  • Legacy approval workflows may inhibit innovation velocity
  • Organizational design must evolve alongside technical capability

2. Intensifying competition in AI coding ecosystems

The competitive landscape is accelerating rapidly, particularly across ecosystems led by:

  • Microsoft (GitHub Copilot, Azure AI)
  • Google (Gemini, Vertex AI, developer tooling)
  • Apple (on-device AI, developer ecosystem integration)

Events like Google I/O and Microsoft Build are no longer just developer conferences—they are strategic battlegrounds for control over the future of software creation.

The stakes are clear:

  • Whoever controls the AI development stack controls the next generation of digital economies
  • Whoever defines the developer experience defines the innovation velocity of entire ecosystems

Platform gravity is becoming strategic gravity

The implication for CIOs is profound.

Choosing a development ecosystem is no longer a tooling decision — it is a strategic alignment decision that determines:

  • Data gravity
  • Talent alignment
  • Innovation velocity
  • Long-term vendor dependency

In effect: Your AI development platform choice is becoming your enterprise’s innovation ceiling.

From SDLC to IDLC: The rise of the Intelligent Development Lifecycle

Traditional SDLC frameworks are becoming obsolete.

In their place, a new paradigm is emerging: The Intelligent Development Lifecycle (IDLC)

This is not simply an evolution — it is a redefinition of how software is conceived, built and governed.

Key characteristics of IDLC:

  • Intent-driven development: Developers define what and why, not just how
  • Agentic execution: AI agents perform multi-step development tasks autonomously
  • Continuous learning loops: Systems improve based on real-time feedback and usage patterns
  • Embedded governance: Compliance, security and auditability are built into execution (NIST AI Risk Management Framework)
  • Decision-centric architecture: The primary output is not code — it is decision capability

IDLC as a leadership operating model

IDLC is not just a development methodology.

It is an enterprise operating model for intelligence creation.

It changes:

  • How teams are structured
  • How accountability is defined
  • How value is measured

For CIOs, adopting IDLC means shifting:

  • From managing delivery pipelines
  • To governing decision supply chains

The emerging reality: Developers as intelligence orchestrators

As AI agents take over repetitive and even complex coding tasks, the developer role is undergoing a profound transformation.

From:

  • Writing code line by line
  • Debugging manually
  • Managing environments

To:

  • Designing system intent
  • Governing AI agents
  • Ensuring ethical and secure outcomes
  • Orchestrating multi-agent collaboration

This is not a reduction in developer relevance.

It is an elevation of developer responsibility.

Talent transformation is now a CIO priority

This shift introduces a critical challenge:

Most current developer skill models are not aligned to this future state.

CIOs must now proactively invest in:

  • AI-native engineering skills
  • Prompt and intent engineering
  • Model governance literacy
  • Cross-disciplinary collaboration

Because future developers are not just technical — they are decision designers.

The CXO convergence: Why this is no longer just a CTO conversation

The transformation of software development is not confined to engineering teams.

It now sits at the intersection of four critical leadership domains, reflecting the broader evolution of CIOs into strategic business leaders shaping enterprise outcomes (State of the CIO):

CIO: The intelligence architect

  • Aligns AI-driven development with enterprise strategy
  • Ensures scalability and integration across platforms
  • Drives value realization from software investments

CTO: The innovation orchestrator

  • Defines architecture patterns for AI-native development
  • Leads platform engineering and developer experience
  • Drives competitive differentiation

CISO: The trust enforcer

  • Ensures secure AI-generated code
  • Governs data lineage and model integrity
  • Mitigates risks from autonomous systems

CAIO: The intelligence governor

This convergence reflects a broader reality: Software development is no longer a technical function — it is an enterprise risk, value and governance function.

Introducing a new framework: SAFE-AI DevOps

To navigate this transformation, enterprises require a disciplined, Board-ready approach.

SAFE-AI DevOps Framework (Secure, Adaptive, Federated, Explainable AI Development Operations)

This is a next-generation operating model for AI-driven software development.

1. Secure by Design (S)

  • AI-generated code must meet zero-trust security principles
  • Continuous vulnerability scanning integrated into AI pipelines
  • Secure prompt engineering and model access controls

CISO-led mandate: Trust is the new runtime environment
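As one illustration of what building security into execution can look like for AI-generated code, a pipeline gate might screen generated snippets against a policy denylist before they ever reach review. The patterns and the `gate_generated_code` helper below are illustrative assumptions, not a complete scanner; a production pipeline would rely on real SAST tooling rather than regexes.

```python
import re

# Illustrative denylist; a real pipeline would use dedicated scanning tools.
DANGEROUS_PATTERNS = {
    "dynamic code execution": r"\b(eval|exec)\s*\(",
    "shell injection risk": r"shell\s*=\s*True",
    "hardcoded secret": r"(?i)(api_key|password)\s*=\s*[\"']",
}

def gate_generated_code(code: str) -> list[str]:
    """Return the list of policy violations found in an AI-generated snippet."""
    return [name for name, pattern in DANGEROUS_PATTERNS.items()
            if re.search(pattern, code)]

snippet = 'password = "hunter2"\nresult = eval(user_input)'
violations = gate_generated_code(snippet)
print(violations)  # flags both the hardcoded secret and the eval call
```

A gate like this runs before human review, so reviewers spend their time on design questions rather than catching obviously unsafe constructs.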

2. Adaptive Intelligence (A)

  • Systems learn and evolve continuously
  • AI models adapt to changing requirements and environments
  • Feedback loops drive improvement across lifecycle

CIO-led mandate: Learning velocity is the new productivity metric

3. Federated Development (F)

  • Multi-agent collaboration across distributed environments
  • Integration across cloud, edge and on-prem ecosystems

CTO-led mandate: Scale innovation without losing control

4. Explainable Execution (E)

  • Every AI-generated decision must be traceable
  • Audit trails for code generation and deployment

CAIO-led mandate: Explainability is the new compliance baseline
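The audit-trail idea above can be sketched in a few lines. Everything here is an illustrative assumption (the `record_generation_event` helper and its field names are hypothetical, not from any specific framework): hashing both the prompt and the generated code lets auditors later verify that what was deployed matches what was recorded, without storing potentially sensitive prompt text.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_generation_event(prompt: str, model_id: str, generated_code: str) -> dict:
    """Build an audit record for one AI code-generation event (illustrative schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        # Hash rather than store the prompt, in case it contains sensitive data
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        # Hash of the output lets auditors verify deployed code matches the record
        "code_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
    }

record = record_generation_event("add retry logic", "example-model-v1", "def retry(): ...")
print(json.dumps(record, indent=2))
```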

5. AI-Native DevOps (AI)

  • Autonomous CI/CD pipelines
  • Predictive deployment optimization
  • Self-healing systems and automated incident response

Cross-CXO mandate: Automation is no longer optional — it is foundational

The competitive battlefield: Ecosystems, not tools

The next phase of competition is not about individual tools.

It is about ecosystem dominance, as hyper-scalers invest heavily in AI infrastructure, platforms and developer ecosystems (McKinsey Global Tech Agenda 2026).

Key battlegrounds:

  • Developer platforms
  • Model ecosystems
  • Data gravity
  • AI infrastructure

As highlighted in a recent CIO.com perspective, infrastructure itself is becoming a strategic intelligence decision, not just an operational one.

The risk dimension: AI-generated code is not inherently safe

While productivity gains are undeniable, risks are escalating:

  • Hallucinated code vulnerabilities
  • Licensing and IP violations
  • Model bias and ethical concerns
  • Regulatory exposure (EU AI Act, NIST AI RMF)

This creates a new category of risk: AI development risk.

It requires structured governance aligned with emerging regulatory and risk frameworks (NIST AI Risk Management Framework).

Blockchain and quantum: The next convergence layer

As we move beyond 2026, two additional forces will reshape AI-driven development:

Blockchain

  • Immutable audit trails for AI-generated code
  • Smart contracts governing software execution

Quantum Computing

  • Breakthroughs in optimization and cryptography

Together with AI, they form a converging intelligence stack that will redefine software engineering, consistent with broader enterprise transformation trends toward intelligent systems.

Boardroom implications: What investors and directors must understand

The shift to AI-driven development is not just technical — it is financial.

Research shows AI delivers the greatest impact when integrated into enterprise strategy rather than siloed initiatives (BankInfoSecurity: C-Suite Leaders Must Rewire Businesses for True AI Value).

Key board-level questions:

  • How much of our software is AI-generated?
  • What governance exists for AI-generated decisions?
  • How do we ensure security and compliance at scale?
  • What is our dependency on external AI ecosystems?
  • How does this impact enterprise valuation?

Because the reality is: Software is no longer a cost center — it is a capital engine.

The new metrics: Measuring success in AI-driven development

Traditional metrics are insufficient.

Old metrics:

  • Lines of code
  • Development velocity
  • Bug counts

New metrics:

  • Decision throughput
  • AI-assisted productivity ratio
  • Model governance maturity
  • Security incident reduction
  • Time-to-intelligence (TTI)
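None of these new metrics has a standard industry definition yet. As a sketch under that assumption, here is one plausible way to operationalize two of them; the formulas and function names are illustrative choices, not established definitions.

```python
from datetime import datetime

def ai_assisted_ratio(ai_assisted_changes: int, total_changes: int) -> float:
    """Share of merged changes where an AI agent contributed (illustrative definition)."""
    return ai_assisted_changes / total_changes if total_changes else 0.0

def time_to_intelligence(request: datetime, decision_live: datetime) -> float:
    """Hours from a business request to a deployed decision capability (illustrative)."""
    return (decision_live - request).total_seconds() / 3600

print(ai_assisted_ratio(132, 200))  # 0.66
print(time_to_intelligence(datetime(2026, 1, 5, 9), datetime(2026, 1, 7, 15)))  # 54.0
```

Whatever the exact formulas, the point is that these measures track decision capability delivered, not code volume produced.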

The leadership mandate for 2026 and beyond

The transformation of software development demands a new leadership mindset.

Three defining mandates for 2026:

  1. Architect intelligence, not just applications
  2. Govern AI as an enterprise asset
  3. Align ecosystems with strategy

The future of software is a leadership decision

As we look ahead to 2026 and beyond, one reality becomes undeniable: The future of software development will not be decided by developers alone.

It will be shaped by:

  • CIOs who architect intelligence
  • CTOs who orchestrate innovation
  • CISOs who enforce trust
  • CAIOs who govern AI responsibly
  • Boards that understand the strategic implications

Because in this new era, code is no longer the product. Intelligence is. And the organizations that learn fastest will not just build better software — they will redefine entire industries.

This article is published as part of the Foundry Expert Contributor Network.

8 tips for becoming a more agile IT leader

Our world is spinning so fast that getting off course from intended outcomes can happen quickly. And it isn’t just technology that’s catalyzing change. The business climate, economic conditions, rules of engagement, and even people’s belief systems and behaviors are rapidly shifting to the point that trying to keep up is like chasing a cheetah on roller skates.

To lead in this climate, you have to hone your ability to pivot, pull the plug, or pounce on a new opportunity with little lead time. You can’t make a decision, install a system, or set a team to work on a project, then move on, as you might have done a few short years ago. You have to be able to change your mind, admit you no longer stand behind a decision or aren’t confident in a particular project, and set a new course toward a better destination.

Having an agile mind and a flexible worldview is vital to IT leadership today. But how do you achieve that?

I spoke to IT experts and leaders who have struggled with and mastered this skill. Here are the agility tricks they employ to stay flexible.

Keep asking questions

“Historically, CIOs come into an organization, assess, then try to add value,” says Sathish Muthukrishnan, chief information, data, and digital officer at Ally Financial. “That could take a year. Then they spend another six months developing strategy. From year three onwards, they might implement strategy. That was the traditional playbook.”

So, the current pace of change is, in itself, an enormous pivot for a role so complex, Muthukrishnan says.

The first step to becoming an agile leader then is to accept that the old playbook won’t work. The second step, he says, is to keep asking questions.

“I ask questions so I can deepen my understanding, orient myself,” he says. “Has the context changed? Has technology changed? Have people changed? If so, why are we doing what we were doing three, four, or five months ago?”

There are some things that have not changed, he says. Learning is the same, though what you learn and the way you apply it is different. And the need for your leadership has only increased.

“The human qualities that set you apart as a leader are becoming even more relevant in an AI-first world,” he says. “It’s no longer, ‘I’m the expert. I know. I’ve done this, I’ve seen this,’ that sets you apart. The thing that sets you apart is having the courage to say I am not tied to my previous beliefs. I’m changing them because of this reason. I’m making a pivot because of these reasons. Courage and conviction go together.”

Trust the navigation — and your teams

“You have to lead with purpose and clarity. That’s important for the organization. But you need a lot of flexibility when it comes to the execution,” says Manny Rivelo, CEO of ConnectWise.

Like a ship on a wild sea, you have a destination in mind. Getting there, though, requires navigating through a lot of tumult.

“You have to be able to respond quickly to change,” Rivelo says. “It can be anything from a market shift, the technology, or internal organizational challenges. You don’t want to lose sight of that long-term strategy, but you may have to pivot along the way. It’s not only about moving with speed, but with flexibility.”

Just like that ship navigating rough seas, you have to get accurate readings and trust your navigators to know how to steer through the chaos.

“How you collect information is important,” he says. “I look at it as a signal-to-noise ratio. What is the signal that’s driving you to go someplace, and what is just noise? How do you remove the noise so you can focus on what the signal is telling you?”

Rivelo believes in facts and data. But you also need to be able to test your own assumptions and, to do that, you have to trust your team, he says.

“You have to build diverse teams that are willing to challenge your thinking,” he advises. “In my experience, you can’t train for that. You have to hire for it.”

Rivelo digs deep when hiring to find people who have a history of being opinionated and, especially, curious.

“Curiosity is one of the greatest gifts you can have as a leader. You need to be curious enough to disrupt yourself and not assume that, because we are doing things a certain way, we have to continue. The best idea should win — wherever it comes from,” he says.

Empty your cup

According to one Zen parable, you have to empty your cup before you can fill it. To learn, you have to accept that you don’t already know.

“For me, being agile means seeing the truth and not making assumptions,” says Dr. Akvile Ignotaite, founder and data scientist at System Akvile. “I go into new projects thinking, ‘Let’s see what we can learn.’ And I learn from the data.”

It sounds simple. But when you have achieved a leadership role, you likely got there because of your expertise. You have become accustomed to people expecting you to know what to do. Letting go of that expert role is, Ignotaite admits, a process.

“I try to keep a very open mind,” she says. “I make assumptions, then measure those assumptions against real user data and behavior. I can’t know everything. The speed we live in is too fast.”

Use the ‘hot-shot rule’

Every day is full of decisions and responsibilities. It’s easy to get caught up in that and keep navigating toward a goal without stopping to check whether you are headed in the right direction. To stay flexible, Ingrid Curtis, CEO at Sparq, likes to test wind direction frequently with what she calls the “hot-shot rule.”

“This is not a concept I created,” she says. It is a mental exercise that helps people to let go of a decision, path, or progress that is no longer serving their purposes.

“Imagine you’ve been fired,” she says. “Who’s the hot shot that’s coming in to take over? What do you think they will do that you aren’t doing?”

The hot shot can be fictional or a real-life leader from the tech or business world.

“There are plenty of big, wild entrepreneurs to choose from,” she says. “They come with this huge persona. And we’ve seen that it has gotten some of them — the WeWork founder, Elizabeth Holmes, and others — in serious trouble. But there is also admiration for this flagrant, ‘I’m willing to do whatever it takes’ kind of leadership.”

It’s surprising, she says, how much this game allows people to disconnect from minutia and look at their job with fresh perspective. It’s fascinating to watch it unlock ideas.

“We all allow ourselves to be hamstrung,” she says. “Yet you imagine someone else would disregard those self-imposed restrictions and be able to get the thing done. Suddenly, with that perspective, you are able to do that, too.”

Rethink your approach to decision-making

“Everyone frames agility as a personality trait — be flexible, stay curious, embrace change,” says Nik Kale, principal engineer at Cisco Systems. “All of that is fine, but personality does not scale.”

Agility, he says, is less about mindset and more about structure.

“Adaptable leaders aren’t the ones with the most flexible temperament,” he says. “They’re the ones who build decision-making systems to absorb change without breaking.”

One big part of this structure, he says, is sorting decisions by weight. Some decisions are reversible. Others are not. Those two types of decisions should therefore be sorted into different piles. Slow down and ponder irreversible decisions. Decide fast and iterate on those that are reversible.

“Many leaders do the opposite,” he says. “They agonize over things that don’t matter and rush through things that do.”

For reversible decisions, schedule a point where you will stop to reevaluate them.

“I put reassessment dates on the calendar,” he says. That way changing your mind is part of the process. “It won’t hurt anybody’s ego if we planned to reevaluate that decision.”

This structure, he believes, overcomes the risk decision-makers face when they change their mind.

“Admitting you were wrong, in most corporate cultures, is expensive — reputationally, career-wise, politically. People double down on failing strategies because the cost of admitting they were wrong feels higher than the cost of failure,” Kale says. “Courage shouldn’t be a prerequisite for good decisions.”

Factor in the fact that permanence is a thing of the past

According to Ram Palaniappan, CTO at TEKsystems Global Services, when the software you use every day changes almost daily, clinging to the idea that anything you decide today won’t change tomorrow means holding on to a world that no longer exists.

This is especially true when working with AI, he says. When you make a decision about something repeatable, and offload the work to AI, verifying the results is essential because an AI will amplify mistakes. This also helps you learn to trust the AI.

This kind of mental agility, he says — making decisions that you are willing to unmake if the output doesn’t match expectations — requires people to stay alert and keep learning. That goes not just for leaders but entire teams, he adds.

“We ask our teams to spend a percentage of time upskilling,” he says. “We set goals. We provide a learning path. Then we allow them to apply what they learn in a lab facility.”

The idea is, he says, to learn to let go of the way it was.

“Tech companies change their products, sometimes daily,” he says. “We all have to be able to move like that.”

Let go of the idea that anything you decide is permanent. Decide quickly. Then check how that decision is doing.

Exercise your emotional muscles

According to Sarah Noll Wilson, founder of The Noll Wilson Group and author of Don’t Feed the Elephants, many technical leaders believe that emotion has nothing to do with their decisions. But that can make you blind to the power emotion has over them.

“When you build your emotional skillset, it gives you access to a higher level of self-awareness and intellectual humility,” she says.

Curiosity is one emotional skill. “Instead of making you fear discovering a bad decision, curiosity can make it fun to wonder — with interest and even excitement — where you might be wrong,” she explains.

Another emotional skill is to let go of the idea that it is your expertise that’s needed.

“Some problems are technical,” she says. “Those are clear and typically solved with expertise. But some are adaptive challenges. In that case, the problem might not be clear and solving it requires learning, not expertise.”

Fear is another emotion that drives resistance to change. People don’t fear change, they fear loss, she says. “Ask yourself, ‘What am I losing?’ or ‘What am I afraid I’m going to lose?’”

One of the practices her team uses to increase emotional self-awareness, she says, is a courageous audit. This is a process where leaders examine what they want to be — an agile leader, for example — and interrogate behaviors that conflict with that goal.

“A question you can ask is, ‘What do I do or not do that’s in conflict with being an agile leader?’” she says. “Do I protect my ideas or my team’s ideas? Do I dismiss ideas from people who aren’t in my field or ‘in’ group? Who gets to submit ideas? Who doesn’t?”

These exercises are designed to raise your awareness of the emotional reactions that affect your decisions and to help you develop the ability to be comfortable with uncertainty.

Change how you measure and build

According to Shahrzad Rafati, founder and CEO of RHEI, keeping a plastic viewpoint requires you to fundamentally change how you build technology and measure success.

“When you spend two years building an enterprise tool, your ego becomes tied to its deployment. You lose agility because you are financially and emotionally invested in the solution, rather than the problem,” she says.

“Instead of measuring success with metrics like uptime or deployment milestones, measure workforce elevation. When your metric is ‘Did it elevate human output and strategic thinking?’ you won’t hesitate to kill a failing project.”

The second step, she says, is to find a way to experiment quickly and cheaply. “We no longer live in a world where prototyping costs millions of dollars. You can ‘vibe code’ an idea, stand up a specialized agent, and test its capabilities almost instantly.”

“Use this to your advantage,” she says, “by lowering the stakes of your experiments. If testing a hypothesis costs nothing, your willingness to abandon a bad idea and admit you were wrong goes up exponentially.”

Coherence: Where leadership and AI success intersect

In an era where AI is accelerating faster than most organizations can absorb, many IT leaders are grappling with how to move quickly without creating fragmentation. For Leigh-Ann Russell, BNY’s CIO and global head of engineering, the answer comes down to a single word: coherence.

For Russell, coherence isn’t a slogan. It’s a leadership discipline that connects strategy to execution, technology to trust, and ambition to sustainability. In a recent episode of the Tech Whisperers podcast, she discussed why coherence is so integral to the modern CIO playbook and how to leverage it to scale impact instead of chaos.

Russell’s perspective is shaped by a career defined by range and resilience, from her formative years in Scotland to leading complex, high-stakes transformations across industries and geographies. After the podcast, we spent more time exploring how that journey has informed her leadership operating system, how she translates coherence into enterprise-scale AI execution, and why IT leaders must learn to navigate the intersection of innovation, control, and reusability. What follows is that conversation, edited for length and clarity.

Dan Roberts: Are there experiences from your formative years growing up in Scotland that shaped how you lead today?

Leigh-Ann Russell: I credit my father as the biggest inspiration in my life. He worked seven days a week and had two jobs, yet he was always present. When I look back, I don’t know how that was possible, for him to have a 7-day-a-week job and still take me to the park and to ballet lessons, or even afford ballet lessons. He was this embodiment of a combination of work ethic and family, and that’s how I’ve led.

There was also a lot about my life growing up that I hid. I hid the fact that I was a single parent. I hid the fact that I grew up in a council estate, similar to public housing in the US. I hid a number of things about me. It wasn’t until I got to be a bit more mature in life that I realized these were not things to hide. These are the stepping stones that make me the person I am.

My father taught me to do more with less and the power of work ethic. My daughter taught me ninja productivity skills. These were not things you should be uncomfortable about. These are things you should celebrate. The virtuous feedback loop, as a leader, is that if you share what makes you human, it becomes much easier to connect with other leaders and the people on your team. And then they feel comfortable sharing what makes them human, too.

When the environment is complex, fast-moving, and high stakes, like it is today with AI, what are the core operating principles you return to as a leader, when you’re tired, under pressure, or being watched?

My core operating model centers on two things: talent and clarity. My philosophy is, my job is to find great talent and help them be the best version of themselves. If you achieve that, then real, magical things happen, because it’s people who create magic, not technology.

The second part of my role is about creating clarity. Life is complex, leadership is complex, and what teams need is simplicity. It’s about trying to simplify the problem, understand the trade-offs, and align people. Take AI as an example: The technology can create enormous value while also creating friction at scale if we’re not redesigning the work thoughtfully around it. That’s why it feels like the next leadership challenge — it’s not just about deploying AI well but designing the system around it with clarity and consistency in mind.

During the podcast, we talked about how you were very intentional in choosing coherence as your word of the year. Can you give some examples of coherence applied as a leadership discipline?

Coherence is a hard thing to build and a fast thing to lose. It starts with humans. It’s something Robin Vince, our CEO, does really well in bringing our leadership team together. He talks publicly about the fact that we all have a coach, and we have the same coaches from the same company and come together around that. He’s very intentional about creating coherence as a leadership structure.

As you go vertically down to the different meanings of coherence, it also applies equally to technology. It’s very easy to chase the shiny thing, and there’s a delicate tension between empowering people to do great innovation and doing it in a structured way, so that you avoid gaps in controls, duplication, architectural issues, or costs spiraling out of control.

That double meaning of coherence and being very tight on the balance between individual innovation and empowerment and leadership becomes critical.

Can you talk about what your AI strategy looks like across the enterprise?

We set up the AI hub in 2023, and when I came to BNY in 2024, we started thinking about adoption and enablement across the enterprise, guided by our mantra: AI for everyone, for everywhere, for everything. In 2025, we set the goal of having 65% of the bank trained on AI, but we hit 100% as early as June, and we’ve had to rewrite the training program twice since then to enable our employees to continue deepening their proficiency.

That enablement was important and we’ve had an amazing uptake, with over 220 AI solutions now in production accessible in Eliza, our enterprise AI platform. Eliza is built on the premise of foundational, reusable capabilities and is designed to enhance client service and company operations and drive cultural transformation through the power of AI. We talk more about Eliza and how we are advancing responsible and ethical AI in financial services on our BNY website.

Over half the bank have built their own agents, and we also have digital employees at the bank — 140 autonomous agentic employees who work alongside our human employees and have direct human managers who monitor what logic is applied to decision-making. This is truly agentic.

So, 2025 was about widespread adoption and literacy, and now we’re moving from AI adoption to AI at the core. Even in the most complicated use cases that we’ve put into production, I still think they’re somewhat at the edges — anomaly detection, pulling together client briefing documents, or looking at contract reviews. Very advanced use cases compared to most enterprises, but we are pivoting in 2026 to having AI at the core of everything that we do at the bank. That’s truly transformational, and it’s the next step in our journey.

You have 140 digital employees, an idea that wasn’t even on the radar at the beginning of 2025. How did your organization move and adapt to scale that up so quickly?

It goes back to the philosophy that leadership is all about having talent and enabling them. We have an amazing team in engineering — and across the bank, because this is not just an engineering piece. If you look at our first digital employee, it was in the payment space, looking at reconciliations, and that was born out of a collaboration between engineering and operations. Our first human manager of a digital employee was in the operations side of the business.

Reimagining how work gets done can’t just be an engineering issue. It has to be in partnership with the business. Those individuals in the businesses and in engineering who can think back to first principles about how work gets done are leaping ahead on their AI journeys because they’re not just thinking about adding in AI as an afterthought; they’re thinking about redesigning their workflows with AI at the core.

A great recent example of that is in our onboarding process. We now have a multi-agentic model that has taken the research part of the process down from double-digit hours to single-digit minutes. This partnership and ability for our people to reimagine how we work with AI now at the core is foundational.

The strategy you’ve laid out really is a journey, and it seems there are some key foundational steps that many organizations are trying to skip, which is creating all sorts of problems for them.

In the podcast, we talked about the flip side of coherence being chaos. If you have chaos, AI will just amplify that chaos. That’s at a stack level or a leadership level. As the pressure is on companies to go out and adopt, this is where Eliza, our platform, has been truly instrumental to us because everything AI-related at the company is centralized in Eliza. It’s our tech stack; it’s our governance framework. Having one place for AI so you’re not chasing multiple tools and multiple companies, and having that very clear AI strategy embedded in a single platform, has been really differentiating for us.

In truth, there has been no single silver bullet in our AI journey. We have a tech-enthusiastic CEO in Robin Vince, who realized very early on that AI would be transformative for our company and has been determined that BNY remain at the forefront. With his leadership from the top, we invested early in our people to cultivate the AI-literate workforce we have today. So the fact that it’s a CEO-led strategy, plus the platform, plus the enablement has really helped us get to the speed of having 220 AI solutions in production supporting the enterprise.

Considering the strategic priority you’ve placed on AI adoption, how do you balance innovation and control?

Our goal is to enable innovation across the firm, which speaks to this mindset that being “AI first” is more important than just control. Obviously, we need control from a risk, compliance, legal, resilience, and cyber point of view, but from a financial and leadership perspective, we’re not trying to control and damp down and make everyone justify every single use case. As a result, we’ve said no to a lot less than I think most other companies would have done.

When I think about the trade-offs of that, it’s understanding “what is innovation?” and making sure there’s a reusable core. Because people have in their mind, “If I just hire my own engineers, and I have my own architecture, and I have my own software, I’m really innovative.” People tie innovation to having the new shiny thing, and it’s very hard to shift to a mindset that understands sometimes innovation is reusing what other teams have already developed and building on that. So, sometimes the more painful conversations involve trying to help people reground in reusability, common architectures, and common data platforms, understanding that isn’t hampering their innovation, it’s providing a solid foundation they can build on to go faster.

That intersectionality of innovation and control and reusability is the difficult thing we have to get right, because if you have too little control, you have the chaos we spoke about, and we know that AI amplifies chaos. If you have too much control and too much centralization, then you do hamper innovation. It’s something I’m very conscious of, that we don’t want to do either.

Great leadership often comes down to managing those tensions. What is the central tension you’re learning to navigate right now and how is it shaping the leader you’re becoming?

It goes back to a quote I mentioned in the podcast about Atticus Finch: How do you maintain your convictions without being rigid? Because one thing I know for sure, there are no right answers right now. Does one discipline shift left and one discipline shift right? What is the role of engineering in the future? There is absolutely no right answer to that. There’s only a set of choices that’s right for your institution, and what might be right for a marketing company will not be right for a regulated bank.

I use my digital twin to make sure I’m not too over-convicted, because I have that tendency from my past growing up in the oil field, and it’s a slightly Scottish tendency. So that’s the thing I’m really pushing myself on: How do I retain my conviction, but not become rigid, and stay really open to what is coming at us? Because the change is like we’ve never seen — in a very positive way as an engineer — but we have to remain convicted, not rigid.

In a world of “double VUCA,” where the impact of external volatility, uncertainty, complexity, and ambiguity is being compounded by fragmented AI journeys, a lack of clear AI strategy, and ineffective leadership internally, Leigh-Ann Russell shows us why coherence is an essential strategic discipline. In the age of AI, it can spell the difference between leaders who scale impact and those who simply scale chaos. For more insights from Russell’s leadership playbook, tune in to the Tech Whisperers.



Column | A competitive edge beyond technology: What sets great FDEs apart

Despite AI investment on an unprecedented scale, most enterprises are stuck at an “integration wall.” The technology works well in isolation, and proofs of concept (PoCs) deliver impressive enough results.

But the moment they try to apply AI in production environments — where it touches real customers, affects revenue, and creates real risk — companies hesitate. And for good reason: AI systems are inherently non-deterministic.

Unlike traditional software, which behaves predictably, large language models (LLMs) can produce unexpected results. They may deliver wrong information with confidence, hallucinate facts that don’t exist, or generate responses that clash with brand tone. For risk-sensitive enterprises, this uncertainty becomes a barrier that no amount of technical sophistication can overcome.

This pattern appears across industries. Looking back on my experience helping enterprises adopt AI, many organizations built impressive AI demos yet repeatedly failed to get past the integration stage. The technology was ready and the business case was sound, but the organization’s risk tolerance couldn’t keep up. Nor did anyone know how to bridge the gap between what AI can do in an experimental environment and what is permitted in production. At this point, it becomes clear that the real problem is not the technology but the talent to apply it.

A few months ago, I joined Andela, an IT talent platform company. From that vantage point, the capability enterprises need becomes clearer: the forward deployed engineer, or FDE. The term was coined by data analytics company Palantir to describe the customer-facing technical staff essential to deploying its platform inside government agencies and enterprises. More recently, leading AI labs, hyperscalers, and startups have adopted the model. OpenAI, for example, deploys experienced FDEs to accelerate platform adoption among high-value customers.

But there is a point CIOs must understand: until now, this capability has mainly served the growth strategies of AI platform companies. For an enterprise to get past the integration wall and put AI into production, it must build and cultivate FDE capability internally.

What makes an FDE

FDE의 핵심 특징은 기존 엔지니어가 하지 못하는 방식으로 기술적 솔루션과 비즈니스 성과를 연결하는 능력에 있다. FDE는 단순히 시스템을 구축하는 개발자가 아니다. 엔지니어링, 아키텍처, 비즈니스 전략이 교차하는 지점에서 작동하는 ‘번역자’에 가깝다.

이들은 생성형 AI라는 미지의 영역을 조직이 탐색할 수 있도록 이끄는 ‘탐험대장’과 같은 존재다. 특히 AI를 실제 운영 환경에 배포하는 과정은 단순한 기술 문제가 아니라 리스크 관리 문제라는 점을 명확히 이해한다. 따라서 적절한 가드레일 설정, 모니터링 체계, 위험 통제 전략을 통해 조직의 신뢰를 확보하는 것이 필수적이다.

필자는 구글 클라우드와 안델라에서 15년간 일하며 이러한 역량을 모두 갖춘 인재를 극소수만 만나왔다. 이들을 구분 짓는 요소는 단일 기술이 아니라 네 가지 핵심 역량의 결합이다.

첫째는 첫째는 문제 해결 능력과 판단력이다. AI의 출력은 대체로 80~90% 정확하지만, 나머지 10~20%는 오히려 더 위험한 오류를 포함할 수 있다. 때로는 그럴듯하게 보이지만 잘못된 결과이거나, 불필요하게 복잡해 실무 적용을 어렵게 만들기도 한다.

뛰어난 FDE는 이러한 오류를 식별할 수 있는 맥락적 이해를 갖추고 있다. 이들은 AI가 생성한 저품질 결과나, 중요한 비즈니스 제약을 무시한 권고를 빠르게 찾아낸다. 무엇보다 중요한 점은 이러한 리스크를 통제할 수 있는 시스템을 설계할 수 있다는 것이다. 출력 검증, 인간 개입 프로세스, 모델이 불확실할 때 작동하는 결정적 대체 응답 체계 등을 통해 위험을 관리한다. 이러한 역량이야말로 단순히 인상적인 데모와, 경영진이 실제 도입을 승인할 수 있는 운영 시스템을 가르는 결정적 차이다.
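The controls described above can be sketched in a few lines of code. The following is a minimal illustration only, not production guidance; `fake_model`, the confidence threshold, and the validation rules are hypothetical stand-ins for whatever a real deployment would use.

```python
import re

def validate_output(text: str) -> bool:
    """Hypothetical guardrail checks: reject empty, overlong,
    or policy-violating model output before it reaches a user."""
    if not text or len(text) > 2000:
        return False
    # Example rule: block anything that looks like a leaked credential
    if re.search(r"(api[_-]?key|password)\s*[:=]", text, re.IGNORECASE):
        return False
    return True

FALLBACK = "I can't answer that reliably. Routing you to a human agent."

def answer(question: str, model_call, confidence_threshold: float = 0.7) -> str:
    """Wrap a model call with validation and a deterministic fallback.

    `model_call` is assumed to return (text, confidence); a real system
    might derive confidence from log-probs or a separate verifier.
    """
    text, confidence = model_call(question)
    if confidence < confidence_threshold or not validate_output(text):
        return FALLBACK          # deterministic response when uncertain
    return text                  # validated model output

# Stubbed model, for illustration only
def fake_model(q):
    return ("Your order ships in 2 days.", 0.92)

print(answer("Where is my order?", fake_model))
```

The point of the wrapper is that the system's worst-case behavior is a known, boring string rather than an unvetted model output, which is exactly the property a risk-sensitive approver wants to see.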

The second is solution engineering and design. An FDE must translate business requirements into technical architecture while balancing real-world trade-offs in cost, performance, latency and scalability. For certain use cases, a small language model with low inference cost can outperform the latest large model, and the FDE must be able to explain that choice in economic rather than purely technical terms.

Most important is an approach that prioritizes simplicity. The fastest way over the integration wall usually starts with a minimum viable product (MVP) that solves 80 percent of the problem with appropriate guardrails. A solution that is realistically manageable beats a complex system that tries to cover every edge case and creates uncontrollable risk in the process.

The third is customer and stakeholder management. FDEs serve as the primary technical interface with the business, explaining how the technology works to executives with little AI experience. What those executives actually care about, however, is not the technology itself but risk, timelines and business impact.

This is where FDEs earn organizational trust and lay the groundwork for scaling AI into production. They translate the non-deterministic nature of AI into a risk framework executives can understand: how far the blast radius extends if something goes wrong, what monitoring is in place and what the rollback plan is. By making AI's uncertainty visible and manageable, they play the pivotal role of getting risk-sensitive decision-makers to accept it.

The fourth is strategic alignment. FDEs connect AI implementations directly to measurable business outcomes. They judge and advise on which opportunities can deliver real results and which, however technically interesting, carry risk out of proportion to their value.

They also weigh operating costs and long-term maintenance, not just initial adoption. It is this business-centered perspective, combined with the ability to assess risk objectively, that makes an FDE more than an excellent software engineer.

People with all four competencies share common traits. Most began their careers in technical roles such as software development and likely have a computer science background. They then built expertise in a specific industry and developed the flexibility and curiosity to keep learning in a fast-changing environment. Because the combination is so rare, they tend to be concentrated at large technology companies and command high compensation.

The CIO's dilemma

If FDEs are such a scarce resource, what options does a CIO have?

One is to wait for supply in the talent market to grow naturally, but that takes considerable time. Meanwhile, every month an AI project sits stalled at the integration wall, the gap widens between companies creating real value and companies still showing demos to the board. AI's non-deterministic nature is not going away; if anything, as model performance improves, the potential for unpredictable behavior may grow. In the end, the companies that succeed will not be those waiting for the technology to become risk-free, but those with the internal capability to put AI into production responsibly and confidently.

The alternative is to develop FDEs in-house. That is harder than hiring, but it is the only scalable solution. Fortunately, FDE skills can be built systematically, given the right talent pool and focused, structured training. Andela has built a program for converting experienced engineers into FDEs and has accumulated effective methods along the way.

Building an FDE talent pool

Start by selecting the right candidates. Not every excellent engineer can become an FDE. Look for seasoned software engineers whose curiosity extends beyond the technical domain, with strong development fundamentals and experience in data science and cloud architecture. Familiarity with a specific industry is an especially important accelerator: someone with experience in healthcare regulation or financial risk frameworks will grow far faster than someone learning the domain from scratch.

The technical curriculum has three stages. The foundation stage builds a basic understanding of AI and machine learning: LLM concepts, prompt design techniques, Python proficiency, token structure and basic agent architectures. These are table stakes.

The intermediate stage covers practical tooling, with core skills mapped to the "three roles" an FDE performs.

  • First is RAG (retrieval-augmented generation): connecting enterprise data to models accurately and reliably.
  • Second is agentic AI: designing multi-step reasoning and workflows with appropriate controls and verification steps.
  • Third is production readiness: deploying solutions with monitoring, guardrails and incident-response processes in place.

These skills are learned by building and deploying systems that account for real production risks.
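As a rough illustration of the RAG pattern named in the first bullet, the sketch below grounds a prompt in retrieved documents. The keyword-overlap retriever, the sample `DOCS` and the prompt wording are all invented for the example; production systems typically use embedding search over a vector store instead.

```python
# Toy RAG sketch: retrieve the most relevant enterprise document,
# then build a prompt grounded in it.

DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise SSO is configured under Settings > Security.",
    "API rate limits are 100 requests per minute per key.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Instructing the model to answer only from the supplied context
    # is the guardrail that makes RAG answers auditable.
    return (
        "Answer using ONLY the context below. If the answer is not "
        f"in the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("How fast are refunds processed?"))
```

Accuracy here lives or dies on retrieval quality, which is why the bullet stresses connecting enterprise data "accurately and reliably" rather than the model itself.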

The advanced stage covers deeper topics such as model internals and fine-tuning. That depth translates into the ability to solve problems when standard approaches fail: going beyond set procedures to improvise in novel situations. It also means enough expertise to explain to stakeholders such as the CISO why a given approach is safe.

Non-technical skills matter as much as technical ones. FDEs must be able to pull conversations out of pure technology and reframe them around business problems and risk mitigation. They need advanced stakeholder management, including sensitive issues such as scope changes, schedule delays and the uncertainty of non-deterministic systems. Above all, they need judgment: the ability to make sound decisions amid uncertainty and to inspire confidence in executives who must accept a new class of technical risk.

It is also important to set realistic expectations for both the organization and the candidates. Even the most structured program will not turn everyone into an FDE. But even a handful of FDEs can dramatically accelerate progress over the integration wall. In practice, a single FDE embedded in a business unit can outperform many conventional engineers working in isolation without business context, because the FDE understands precisely that the real problem is not the technology.

Where the AI era will be won

Companies that secure FDE capability can get past the integration wall. They turn impressive demos into operational systems that create real value, and they use those successes to expand organizational trust step by step.

Companies that fail to secure it are likely to stall, producing no real results despite their AI investments, while competitors willing to take on more risk capture the market.

When I joined Andela, I believed AI would not fully replace human capability. I still believe that. But humans must evolve too, and the FDE exemplifies the direction of that evolution: deep technical understanding, business acumen, risk management and the flexibility to keep adapting to change.

CIOs who invest in this capability now will do more than keep pace with the technology. They will become the ones who actually realize the AI value that has, until now, proved so elusive.

I gave our developers an AI coding assistant. The security team nearly mutinied

I’ve sat in enough risk meetings to know the sound a bad surprise makes before anyone names it. It usually starts with a pause. Then a throat gets cleared. Then someone says, “We may need to bring the CISO into this.”

That happened over a developer tool.

Not a breach. Not a regulator. Not ransomware at 2:00 a.m. A coding assistant.

At first, I thought the reaction was overcooked. I’d seen the same pattern in other boardrooms and delivery teams. A new tool appears. Engineers like it because it saves time. Leadership likes it because it promises more output without hiring half a city. Security hates it because security has the social burden of being the adult in the room when everyone else is buying fireworks.

I backed the rollout because the case was clean on paper. Developers were drowning in repetitive work. Deadlines were tightening. Technical debt had started breeding in the dark. The assistant could draft tests, explain old code, suggest refactors and help junior engineers stop treating Stack Overflow like an underground pharmacy. And this was no longer fringe behavior. In 2025, Microsoft said that 15 million developers were already using GitHub Copilot, and the tool has spread further since then.

So yes, I approved it.

Then security nearly revolted.

That week taught me something I now say to clients more bluntly than I used to. AI coding tools do not just change software delivery. They change the terms of trust inside the company. They force you to answer ugly questions about control, proof, accountability and review discipline. Most public coverage still stares at productivity. The harder story sits elsewhere. Governance.

The part that looked sensible

The truth is, I didn’t approve the tool because I was dazzled. I approved it because I’ve spent years watching good people waste good hours on bad repetition.

You can only tell a team to “be strategic” so many times before they start laughing at you. Developers were buried under boilerplate, documentation drift, brittle legacy code and the kind of ticket churn that makes bright people look tired. A coding assistant looked like a relief. Not magic. Relief.

That distinction matters.

In advisory work, I’ve learned that many poor decisions do not begin as foolish decisions. They begin as reasonable decisions made inside an outdated control model. That’s what this was. The business case made sense. The mistake was assuming the old review system could keep up with the new speed.

That old assumption dies hard. Leaders often think software risk changes when the code changes. Often, it changes earlier, as production conditions change. If a machine now drafts what humans once wrote line by line, the issue is not only code quality. It is code volume, code origin and the shrinking time between suggestion and production.

That is a different risk shape.

Why security lost its patience

The security team was upset because they could see the math.

Code output was about to rise. Review time was not.

That gap is where trouble rents office space.

Many non-security leaders still imagine the concern is simple. “The AI might write bad code.” That’s the kindergarten version. The real concern is broader and nastier. Who reviewed the output? What hidden package did the model nudge into the build? What sensitive context got pasted into the prompt window? Which junior engineer trusted the suggestion because it sounded calm and looked polished? Which policy assumed human authorship when the draft came from somewhere else?

Those are not philosophical questions. They are operating questions.

Recent security work has made this much harder to dismiss. Snyk described a February 2026 case in which a vulnerability chain turned an AI coding tool’s issue triage bot into a supply chain attack path. That is the sort of sentence that makes security teams sit up straight and ask for names, logs and meeting invites.

And that is before you get to the quieter problem. AI-generated code can look tidy long before it is safe. Security people know that neat syntax can hide weak controls, lazy validation, poor handling of secrets and dependency choices nobody meant to own.

So when the team escalated, they weren’t staging a mutiny over a plugin. They were reacting to a change in production logic that nobody had yet governed.

What the fight was really about

Once the temperature dropped, the shape of the dispute became obvious to me. It was not engineering versus security. It was speed versus proof.

More precisely, it was four things:

  1. Velocity. The assistant increased output far faster than assurance could keep pace.
  2. Visibility. We did not have a clear sight of where the tool was used, what prompts were fed into it, what code it influenced or what external components it smuggled into the discussion.
  3. Validation. Existing checks were built for a world in which humans produced most of the first draft. That world is fading. When code generation speeds up, review cannot stay ceremonial.
  4. Governance. Nobody had written the rules that mattered most. Which use cases were fine? Which were off-limits? Who owned the risk of acceptance? What evidence would prove that the tool was used safely enough?

That last point gets too little airtime. Governance sounds dull until you don’t have it. Then it becomes the difference between controlled use and polite chaos.

NIST’s recent work on monitoring deployed AI systems makes the same point more broadly. Organizations need post-deployment measurement and monitoring because real-world behavior drifts, surprises occur and governance after launch remains immature. Different setting, same lesson. You cannot inspect your way out of weak operating design.

What we did next

We did not ban the tool. That would have been theatre dressed as courage.

We also did not wave it through and tell security to “partner more closely.” I’ve heard that sentence enough times to know it usually means, “Please absorb more risk with better manners.”

We did something less dramatic and more useful. We narrowed the rollout and rewrote the conditions of trust.

Low-risk use cases stayed in play. Drafting tests. Explaining old functions. Helping with documentation. Suggesting boilerplate. Those were manageable.

High-risk areas got tighter boundaries. Auth flows. Secrets handling. Encryption logic. Infrastructure-as-code for sensitive environments. Anything tied to regulated data or material security controls. Those needed a stricter review or stayed out of scope.

We also drew a hard line on prompt hygiene. No customer data. No credentials. No confidential architecture details dropped into a chat window because someone wanted a faster answer on a Friday afternoon. You would think that goes without saying. It does not.
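Prompt hygiene rules like these can be at least partially enforced in code. The sketch below is a hypothetical client-side check, with illustrative patterns only; a real deployment would layer dedicated DLP tooling on top rather than rely on a few regexes.

```python
import re

# Illustrative blocklist: obvious secrets and customer identifiers
# that should never leave the company inside a prompt.
BLOCKLIST = [
    (re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*\S+"), "credential"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "email address"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "possible card number"),
]

def check_prompt(prompt: str) -> list[str]:
    """Return the hygiene violations found in a prompt, if any."""
    return [label for pattern, label in BLOCKLIST if pattern.search(prompt)]

violations = check_prompt("Fix this: api_key=sk-abc123, user is jane@corp.com")
print(violations)  # → ['credential', 'email address']
```

A gateway that refuses to forward any prompt with a non-empty violation list turns the Friday-afternoon shortcut into a hard stop instead of a policy reminder.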

Then we raised the review standard. Human sign-off meant real sign-off, not a quick skim and a merge. Scanning had to cover dependencies and code changes with more discipline. Provenance mattered more. Logging mattered more. Exception paths had to be explicit, not social.

Most importantly, security moved from late-stage critic to co-designer. That changed the tone. The question stopped being, “Can we use this?” and became, “Under what conditions can we trust its use enough to defend it later?”

That small shift matters more than many policy documents.

What both sides got right — and wrong

Developers were right about the waste. They were right that these tools remove drudgery. They were right that refusing every new capability is not a strategy. A team that cannot experiment eventually decays into compliance theatre and backlog sorrow.

They were wrong to assume readable code is trustworthy code. They were wrong to treat assistance as neutral. Tools shape behavior. That is what tools do. Once suggestions arrive fast and fluently, people accept more than they admit.

Security was right about review debt. Right about supply chain exposure. Right about data leakage risk. Right that governance should not arrive three incidents late, wearing a blazer and a lessons-learned slide.

They were wrong at first, as many security teams are when they feel cornered. They made the conversation sound like a moral referendum. That never helps. If security cannot offer a usable path, the business routes around it. Then you get the worst of both worlds: Secret adoption and public optimism.

I don’t say that with smugness. I say it because I’ve watched good teams damage each other by defending the right thing in the wrong way.

The bigger lesson for leaders

This is where the story stops being about one rollout and starts becoming board material.

If your developers can now produce more code with less effort, your governance burden rises even if your headcount does not. The old ratio between output and oversight has broken. Many firms have not adjusted.

That matters because software governance is no longer just about secure coding standards or release gates. It is about production conditions. Who can generate? Under what rules? With what evidence? Across which risk zones? With whose approval? And if something goes wrong, who owns the final act of acceptance?

Those questions sound administrative until the first incident report lands, and nobody can explain whether the flawed logic was written, suggested, copied, reviewed or merely assumed.

The market is moving quickly. Microsoft’s own recent security reporting says organizations adopting AI agents need observability, governance and security now, not later. Snyk is making a similar argument from the perspective of the software supply chain. Visibility first. Then prevention. Then governance that holds under pressure.

That is why I now advise something that used to sound severe and now sounds merely accurate. If you deploy AI coding tools without redesigning your control model, you are not buying productivity. You are buying ambiguity at machine speed.

What you should ask before you approve the next tool

You do not need a grand doctrine. You need a few hard questions asked before excitement turns into policy by accident.

Where can this tool be used, and where can’t it be used?

What data may enter it?

How will you know when the generated code reaches production?

What review standard applies when the first draft came from a machine?

Who can approve exceptions?

What logs, scans and decision records will let you defend the setup six months later, when memories blur and staff rotate?

That is not bureaucracy. That is self-respect.

I still believe these tools have value. I’d be foolish not to. But I trust them the way I trust a very fast junior colleague with a beautiful writing style and uneven judgment. Useful. Impressive. Worth keeping. Not someone you leave unsupervised near the crown jewels.

The near-mutiny turned out to be healthy. It forced the truth into the room before a failure did. Security was not blocking progress. They were objecting to unmanaged speed. Developers were not being reckless. They were asking for relief from the grind. Leadership’s job was not to pick a side. It was to write a better contract between them.

That is the part that too many firms still miss.

The argument was never only about a coding assistant. It was about whether we still knew how to govern work once the work started moving faster than our habits. That is a much bigger story. And if you listen carefully, you can hear it starting in many companies right now.

This article is published as part of the Foundry Expert Contributor Network.

AI FOMO: When AI is the wrong answer to the right problem

Most AI project failures I have seen do not announce themselves cleanly. There is rarely a moment where someone stands up and admits to making the wrong call. Instead, the project quietly underdelivers. The team makes constant adjustments; leadership loses confidence and eventually the whole thing is filed away under “we tried AI and it did not work out.” This happens without anyone doing a real accounting of what the decision actually cost.

I was close to one of those situations not long ago. An organization had a system built around county-level values that drove a core business process. Over time, those values had drifted and the outputs were degrading in ways that affected the bottom line. The path forward was not complicated: A targeted update to the underlying values and some lightweight tooling to detect drift going forward. It would have been a few weeks of focused work at a modest cost with high confidence in the outcome.

What happened instead was that the organization decided to rebuild the system entirely using a non-deterministic AI model. This is worth pausing on because the original problem was deterministic by nature. It had known inputs, predictable logic and a correct answer that did not change based on inference or probability. Reaching for a non-deterministic solution in that context was not a technology decision; it was a category error. I understand why it was made. AI was consuming every boardroom conversation at the time and there was real pressure to be seen doing something proportionate to the moment.

The new system appeared to correct the original problem for a while, and it looked like the right call. Then the drift returned, worse than before, and the expense they had been trying to eliminate returned at a scale that dwarfed the original issue. The organization had applied the wrong class of solution to a well-defined problem, and nobody in the room had stopped to ask whether that mattered.

The capital allocation problem

This is not an isolated story. Harrison Allen Lewis, a three-time CIO, recently published a piece that puts a number on the broader pattern. He argues that in most enterprises, somewhere between 15 and 25 percent of technology spend is tied up in redundant systems that deliver no material business value. This trend is mirrored in recent Deloitte research on the “AI ROI paradox”: While 85 percent of organizations increased their AI spend in 2025, the average payback period for these investments has stretched to nearly four years. This is a significant departure from the traditional seven- to 12-month window for enterprise technology. These are not technology failures; they are capital allocation problems.

What sits underneath that number is AI FOMO. The fear of being the organization that did not move fast enough is real and sometimes legitimate. But FOMO is a particularly dangerous input to a capital allocation decision because it optimizes for the appearance of action rather than the quality of the outcome. It pushes organizations toward the sophisticated answer when the precise one would have been faster, cheaper and more durable.

The result is spend that accumulates without a clear line back to value. Boston Consulting Group recently found that while 88 percent of organizations have begun AI pilots, only 5 percent have managed to reap substantial financial gains, and roughly 60 percent are failing to achieve any material value at all despite substantial investment. The antidote is discipline around how AI investments are evaluated, governed and killed when the evidence stops supporting them. That discipline has to start before the build decision, not after the drift sets in.

The pre-build diagnostic

Before an organization reaches for a governance framework, there is a more fundamental question that rarely gets the attention it deserves: Is this actually a problem AI is suited to solve, and does this organization have what it takes to support the solution over time? I have watched that question get skipped more times than I can count. The investment thesis gets built around what the model can do in a demo environment, and by the time the fit between the model and the actual problem becomes clear, the budget is already committed and the team is already building.

There are three things worth examining honestly before that happens. The first is whether the model can genuinely do the job at the scale and accuracy the business actually requires. Accuracy thresholds sound like a technical detail but they carry real financial weight. If the business needs 98 percent accuracy and the model reliably delivers 85, the human review layer required to catch and correct the gap will often cost more than the manual process the AI was supposed to replace.

Inference cost compounds that further. The true cost of an AI output includes not just tokens and compute but the ongoing engineering attention the system requires to stay functional. That number has to be meaningfully lower than human labor at production volume, not just at pilot scale. The scalability question is the one most sandboxes never answer honestly. A model that performs well on clean, bounded data in a controlled environment will frequently encounter the edge cases of real-world production and behave very differently.
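To make the accuracy-gap arithmetic concrete, here is a back-of-the-envelope comparison. Every number is an invented assumption for illustration; the point is the structure of the calculation, not the figures.

```python
# Toy numbers, purely illustrative.
volume = 100_000                # items processed per month
human_cost_per_item = 2.00      # fully manual process, $ per item
review_cost_per_item = 1.90     # human review of each model output, $ per item
inference_cost_per_item = 0.20  # tokens, compute and engineering attention, amortized

# If the business needs 98% accuracy and the model delivers 85%,
# every output needs a human review pass to close the gap, so the
# "automated" cost is inference plus review.
manual_total = volume * human_cost_per_item
ai_total = volume * (inference_cost_per_item + review_cost_per_item)

print(f"Fully manual:        ${manual_total:,.0f}/month")
print(f"Model + review pass: ${ai_total:,.0f}/month")
# Under these assumptions the AI path costs MORE than the process
# it was meant to replace, which is exactly the trap described above.
```

Change any one assumption, say, a review pass that only samples 10 percent of outputs, and the conclusion flips, which is why the threshold question has to be answered for the specific business, not in the abstract.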

Whether the organization can actually support what it is proposing to build is the second and often the most uncomfortable set of questions. Data ownership sits at the center of it. A project that depends on a third-party data stream the organization does not control, or on data that lacks the cleanliness the model requires to perform, is carrying a foundational risk that no amount of engineering will resolve.

Integration complexity belongs in the same conversation. A high-performing model that cannot connect to existing systems without a custom middleware project that costs more than the value being generated is not a solution; it is a different problem. And the internal talent required to keep the system from drifting over time is the dimension that gets the least scrutiny during approval and the most attention eighteen months later when something starts to go wrong and nobody knows how to respond.

The third area is whether the business will actually accept and sustain the outcome, which is a different question from whether the technology works. In regulated industries, any model that cannot produce a clear audit trail for its decisions should not survive an early review, regardless of its performance metrics. Time to measurable signal matters because a project that cannot demonstrate proof of value within ninety days is asking for extended runway without evidence. That is how pilots quietly become permanent operational commitments.

Whether the capability is genuinely defensible is worth asking early. Spending significant capital to build something a competitor can replicate with the same off-the-shelf API and a week of engineering time is not innovation; it is an expensive way to achieve parity. And the people who are supposed to use the output have to actually trust it. A model that performs well technically but that underwriters, analysts or customers refuse to rely on has failed regardless of what the benchmark numbers say.

Working through these questions before the build decision gets made does not eliminate risk. But it shifts the conversation from what we could build to whether we are actually set up to build it well and sustain it honestly.

Governance proportional to risk

Assuming the diagnostic holds up and the case for building is genuine, the next question is what kind of governance the investment actually needs. Most organizations default to a single approach regardless of what they are building. That default is its own category of mistake. A speculative revenue experiment and a core operational system are not the same kind of bet. Treating them with the same oversight model will either strangle the experiment with bureaucracy or expose the core system to risk it was never designed to absorb.

The situation should determine the framework, not the other way around.

When an organization is exploring genuinely new territory, such as testing an AI-driven revenue stream or a product capability that has no internal precedent, the governance model needs to be tight at the front and earn its way to freedom. Room without gates is how speculative projects consume eighteen months of runway without producing anything the business can point to. What works better is a short initial window to prove the basic math, a defined accuracy threshold that has to be cleared before real-world data enters the picture, and a clear escalation path from shadow environment to full integration. Each stage gets more autonomy because each stage has earned it.

When the goal is modernizing internal operations, the governance question shifts. The risk profile is different because the organization is not exploring unknown territory; it is trying to do something it already does, but more efficiently. In these situations, the burden of proof moves away from accuracy and toward data. A model being trained on proprietary internal data to automate a known workflow is only as good as the data it runs on. Tight monitoring on error rates early, a clear standard for data sovereignty before any custom model work begins, and meaningful gates around the removal of manual steps are essential. The leeway expands as the evidence of process improvement accumulates, not before.

When the primary concern is margin protection on high-volume transactions, the economics have to be the governing logic from the start. The question is not whether AI can perform the task but whether the cost of AI performing the task stays below the cost of human labor at the volume the business actually runs. That calculation needs to be established as a baseline before build begins and monitored continuously afterward. Inference costs do not always scale linearly. A model that is economically viable at pilot volume can become a hidden tax on every transaction at production volume. The governance here is financial rather than technical. If the margin math stops working, the project stops regardless of how technically impressive the solution is.
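The margin check described above can be expressed as a standing calculation. The cost model and numbers below are made up for illustration; the toy scaling term stands in for whatever actually drives nonlinearity in a given deployment (longer contexts, retries, peak-rate pricing).

```python
# Per-transaction margin check, with invented numbers.
def ai_unit_cost(volume: int) -> float:
    base = 0.02              # $ per transaction at pilot scale
    scaling = 3e-8 * volume  # toy term: costs that grow with volume
    return base + scaling

HUMAN_UNIT_COST = 0.15       # flat $ per transaction for human labor

for volume in (10_000, 1_000_000, 5_000_000):
    ai = ai_unit_cost(volume)
    verdict = "OK" if ai < HUMAN_UNIT_COST else "STOP: margin math broken"
    print(f"{volume:>9,} tx/month  ai=${ai:.3f}  human=${HUMAN_UNIT_COST}  {verdict}")
```

In this toy model the AI path looks comfortably cheaper at pilot volume and breaks even somewhere before production volume, which is the hidden-tax scenario the paragraph warns about: the baseline has to be monitored continuously, not established once.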

The most complex governance situation is the one where an organization needs to manage immediate operational pressure and longer-term strategic bets at the same time. The temptation is to treat everything with the same urgency, which often means that immediate fixes consume the bandwidth that strategic work requires. Separating these explicitly, with different oversight cadences, different capital thresholds and different definitions of success for each horizon, is what allows an organization to fix what is broken today without sacrificing the position it is trying to build for the future.

Final perspective

There is a version of this conversation that treats AI governance as a compliance exercise: A set of controls designed to slow things down and protect the organization from its own enthusiasm. These frameworks are not brakes. They are the difference between capital that compounds and capital that quietly drains away while everyone is focused on the technology.

The organizations that navigate this well share a few things in common that have nothing to do with the sophistication of their models or the size of their AI budgets. They have technology leaders who are willing to kill a project when the evidence stops supporting it. This sounds obvious but is genuinely rare when a team has been building for six months and the sunk cost is visible. They have CFOs and boards who understand that a well-governed AI portfolio will have failures in it, and that those failures are not evidence of a broken process but evidence that the process is working.

The organization I described at the beginning of this piece did not fail because they chose the wrong AI approach. They failed because they chose AI for a problem that did not require it. That was a governance error that happened before a single line of code was written. Getting the category right matters more than getting the model right.

Knowing which kind of problem you have before you decide which kind of solution to reach for and then governing the investment in proportion to what you actually know, is what separates organizations building an advantage that holds from the ones already filing an AI post-mortem under things that did not work out.

This article is published as part of the Foundry Expert Contributor Network.

AI is spreading decision-making, but not accountability

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears, and the question of who is responsible becomes immediate.

As AI moves from experimentation into production, accountability is no longer just a technical concern; it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.


Jessica Eaves Mathews, founder, Leverage Legal Group


“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

Chris Drumgoole, president, global infrastructure services, DXC Technology

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

Ojas Rege, SVP and GM, privacy and data governance, OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

Simon Elcham, CAIO, Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The chief AI officer (CAIO) is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

Jonathan Tushman, CTO, Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision often escalates to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.
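What such an inventory might contain is easy to sketch. The record below is a hypothetical minimal register, not any governance product’s schema; even something this thin makes shadow AI queryable rather than invisible.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One row in a hypothetical enterprise AI inventory."""
    name: str
    business_owner: str    # function accountable for outcomes
    vendor: Optional[str]  # None for internally built systems
    decision_impact: str   # e.g. "advisory" or "consequential"
    approved: bool = False # passed formal governance review
    data_categories: List[str] = field(default_factory=list)

def shadow_ai(inventory: List[AISystemRecord]) -> List[AISystemRecord]:
    """Systems in use that never passed governance review."""
    return [r for r in inventory if not r.approved]

inventory = [
    AISystemRecord("resume-screener", "HR", "vendor-x", "consequential",
                   approved=True, data_categories=["PII"]),
    AISystemRecord("deal-summarizer", "Sales", None, "advisory"),
]
print([r.name for r in shadow_ai(inventory)])  # ['deal-summarizer']
```

A register like this lets a governance team answer the first audit question: which systems are running without review, and who owns their outcomes.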

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, attention often turns to enterprise leadership and, in operational terms, to the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

How UKG puts AI to work for frontline employees

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota is CIO of UKG, one of the largest HR tech platforms on the market, whose workforce operating platform is used by 80,000 organizations in 150 countries. He explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets, and our long history with the frontline workforce, positions us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to swap in a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user owned the business context, and the developer, who owned the technology, brought that business idea to life. Going forward, the builder has the business context to create the right prompts that let AI do the building, and the human in the loop is no longer the technology builder, but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

Water wants to stop being a “digital novice”

Water is so embedded in our daily lives that we no longer think of it as anything special. We open the tap, flush the toilet, or turn on the shower and there it is, waiting. For that to happen, however, a lot has to go right: a complex water cycle that guarantees not only that water circulates but that it is fit for human consumption. It is a process in which technology is very much present.

“You could say all water is technological. The question is whether it’s analog or digital,” explains Luis Babiano, managing director of the Asociación Española de Operadores Públicos de Abastecimiento y Saneamiento (AEOPAS). “It’s a highly technified sector. The thing is, we’re at the start of digitalization. We still have a long way to go before we’re true digital champions,” he tells CIO ESPAÑA.

“Although water remains a physical resource, its management today is increasingly digital,” María Gil, head of Idrica in Spain, explains by email. Globally, utilities have incorporated “IoT sensors, advanced SCADA systems, smart metering, analytics platforms and, more recently, data-lake architectures that integrate information from across the operation,” she notes, enabling more data-driven management.

Even so, digitalizing the water cycle remains one of the sector’s great challenges, and it becomes far more pressing given the context in which water operates. “The importance is enormous because water is an increasingly scarce resource under great pressure,” Gil explains.

Environmental organizations have warned for years about the growing pressure on aquifers and the toll the climate crisis takes in droughts. According to a UN report published in January, the world has already entered a phase of “water bankruptcy.” “Many regions have lived far beyond their hydrological means. It’s like having a bank account from which money is withdrawn every day without a single deposit coming in. The balance is already negative,” said Kaveh Madani, the report’s lead author, at the time.

Spain is, in fact, one of the most complex territories where water stress is concerned. WWF warns that the country “is running out of water,” and water stress is an increasingly common topic. The situation is complicated because, as the water industry itself acknowledges, significant volumes are also lost to failures in the very infrastructure that supports the water cycle. Some estimates put the share of water wasted through leaks and breakdowns at 19-20%.

Digitalization could help operators be more effective and, above all, improve the efficiency and resilience of the water cycle. As expert sources point out, it would make it possible to anticipate complex situations, identify problems, and optimize networks.

The state of play in Spain

In this leap to digitalization there are bright spots and positive signs, but also nuances that put the optimism in perspective. Talking to the sector makes it clear that things are happening and interest is high, but that far more investment is needed, along with far more awareness of the scale of the problem and of the need to act on water infrastructure.

“Spain is one of the most advanced countries in water management, and that is carrying over into the digital sphere,” Gil argues. “We’re seeing utilities already operating with integrated platforms, digital-twin models, advanced analytics, and broad smart-metering rollouts,” she says.

The water PERTE (which earmarked part of the EU funds from the Recovery, Transformation and Resilience Plan for digitalizing the water cycle) has given the transformation a push. “The water PERTE has been a real seed for sowing digitalization in the sector, and that’s very positive,” says Babiano. Gil likewise confirms it “is accelerating” the change. Projects incorporating key tools already exist and “can serve as locomotives,” as the AEOPAS chief puts it. But that is only part of the picture. “The challenge isn’t so much technological — the technology already exists — as one of adoption, integration, and cultural change within organizations,” Gil notes.

Babiano is blunt about the sector’s outlook: water digitalization needs funding, and funding that arrives on a sustained basis. That may mean changes to water tariffs, but Babiano notes that “public sources are also needed for its development.” “Among other reasons, because digitalization must go hand in hand with a country-level project,” he argues. One key reason to embed it in a state-level vision, rather than leave it to isolated cases, is that digitalization must reach everywhere. As he puts it, “we shouldn’t focus only on cities, but also on small municipalities.” The aim is to avoid “two speeds”: one for municipalities able to go digital, and another for those left with “significant gaps in every kind of infrastructure, digitalization included.”

Another important factor Babiano stresses comes in here. Digitalizing the water cycle needs a solid base: first, the physical infrastructure that carries water to citizens has to be put in order. Talking about pipes and treatment plants may not be as cool as talking about AI, but they are the foundation of the water cycle, and that is where the first problems appear. There are still areas of Spain without wastewater treatment plants (even though EU rules penalize this). More broadly, water infrastructure is decades old, which creates points of strain. “More than 30% of our networks are over 40 years old,” Babiano recalls. To grasp it, just think of renovating a bathroom at home: at some point, replacing the pipes is unavoidable. Here it happens at a much larger scale.

“Digitalization lets us reach a reasonable level of solvency and maintain it over time,” Babiano says. But digital transformation should not travel alone: he warns that “first, it’s about reducing our losses, investing in our networks, and so on, and then moving (or moving in parallel) into digitalization.”

Water’s challenges

All of this is happening, moreover, at a moment full of challenges for the sector that should not be lost from view. “We face a pressing need for a transition,” Babiano says. River basins are contending with droughts, with danas (the storm systems that, as he notes, push infrastructure such as treatment plants to the limit in record time as they absorb a deluge of water) and with mounting pressure. “And yet we don’t have a very clear plan for how to invest in this water transition,” he says. Babiano contrasts the situation with the energy and mobility transitions, which have plans, fiscal measures, and investment incentives that the water sector lacks, although the industry insists it should enjoy the same treatment.

In this transition, digitalization could become an ally in tackling water’s challenges. “Technology isn’t the only solution, but it is a key enabler,” Gil notes. “Water’s big challenges (drought, water stress, overexploitation) have a structural, climatic, and governance dimension,” she explains, but “without technology it’s practically impossible to manage them efficiently.” It shows what is happening and what could fail, supports better decisions, and “brings transparency and traceability.” As Babiano sums it up, “digitalization exponentially raises our excellence.” “For example, if you monitor your whole network, you know immediately where it’s losing more water than normal,” he illustrates. The end user can be alerted to what is happening, and the leak can be located (and fixed).
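Babiano’s example, knowing immediately where the network is losing more water than normal, can be sketched in a few lines. Comparing night-time flow per district metered area (DMA) against its historical baseline is a common leak indicator, since legitimate consumption is lowest at night; the names, figures, and threshold below are hypothetical illustrations, not any utility’s real configuration.

```python
# Hypothetical sketch: flag DMAs whose current night flow exceeds the
# historical baseline by more than a tolerance fraction (a leak signal).
def flag_leaky_dmas(night_flows, baselines, tolerance=0.15):
    """Return DMA ids whose night flow is more than `tolerance`
    (as a fraction) above the historical baseline for that DMA."""
    flagged = []
    for dma, flow in night_flows.items():
        base = baselines.get(dma)
        if base and flow > base * (1 + tolerance):
            flagged.append(dma)
    return flagged

night_flows = {"dma-01": 12.4, "dma-02": 30.9, "dma-03": 8.1}  # m3/h
baselines   = {"dma-01": 12.0, "dma-02": 21.0, "dma-03": 8.0}
print(flag_leaky_dmas(night_flows, baselines))  # ['dma-02']
```

Real deployments add sensor validation, seasonal baselines, and pressure data, but the principle is the same: continuous monitoring turns “somewhere we are losing water” into “this district, tonight.”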

Such solutions already exist in Spain, Babiano says. “A large share of the reduction in many of our consumption figures is coming from smart meters and from the monitoring and digitalization of our networks,” he notes. “What we’re not yet achieving is more automation,” he adds, recalling that reaching the highest levels of improvement will take time. “We’re still in a phase of, let’s say, moving from ‘digital novice’ to ‘vertical integration,’” he sums up.

Emerging technologies for change

But which IT tools are waiting around the corner once digitalization reaches an advanced stage?

A handful of technologies have become emergent in global water management, according to a report from software platform Xylem Vue. Its analysis lists collaboration between public administration and private enterprise, agent-based architectures, cybersecurity, early-warning systems and, of course, the now-ubiquitous generative AI.

“Artificial intelligence is starting to play a very significant role, especially where a solid data foundation already exists,” explains Gil (Idrica, together with Xylem, is behind Xylem Vue). “Its main contribution is the ability to find complex patterns and optimize decisions in environments with many variables,” she notes. “It’s important to understand that AI doesn’t replace expert operational knowledge,” she cautions, but she points out that combining the two yields strong results. Early-warning systems are another highlight: as she explains, they “are one of the biggest paradigm shifts in water management.” Instead of waiting for a failure to occur and hit the service itself, they get ahead of what is going to happen. “The value is in buying time: moving from reacting to preventing. And in a system as complex and sensitive as water, that anticipation has a direct impact on service continuity, operating costs, and citizens’ trust,” she says.
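The shift from reacting to preventing that Gil describes can be illustrated with a minimal anomaly alarm: flag a sensor reading that deviates sharply from its own recent history, well before any hard failure threshold is crossed. This is an illustrative toy under stated assumptions (a rolling window and a z-score cutoff), not how any particular platform implements early warning.

```python
from collections import deque
from statistics import mean, stdev

class EarlyWarning:
    """Toy early-warning monitor: alarm when a reading deviates
    sharply from its recent rolling window."""
    def __init__(self, window=10, z_limit=3.0):
        self.readings = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value):
        alarm = False
        if len(self.readings) >= 5:  # need some history first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                alarm = True
        self.readings.append(value)
        return alarm

ew = EarlyWarning()
normal = [5.0, 5.1, 4.9, 5.0, 5.2, 5.0]          # steady pressure, bar
alarms = [ew.observe(v) for v in normal]
alarms.append(ew.observe(9.0))                    # sudden spike
print(alarms[-1])  # True: flagged before any hard limit is reached
```

The anticipation Gil values lives in that gap: the spike is flagged while it is still an anomaly, not yet an outage.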

That said, the leap to digitalization has another side: potential cybersecurity threats. Water is, after all, critical and highly sensitive infrastructure. “Digitalization undoubtedly widens the exposure surface, and the water sector is no stranger to that,” Gil acknowledges, adding that this has already become “a growing priority.” “What we’re seeing is an evolution toward more mature security models,” she says, along with “greater awareness in the sector.” “The key is for digitalization and cybersecurity to advance hand in hand. They’re not independent elements.”

Column | In a period of rapid AI change, the AI strategy is only complete when the CEO steps up

Having directly experienced the major technology-driven business and cultural shifts of the past 50 years, from the explosive growth of the internet to the rise of open source, I see parallels in the AI era worth drawing on. Chief among them is the role the CEO plays in a company’s AI push.

I am convinced AI is one of the most transformative changes in decades, but this transition has a distinctive feature. Beyond the nature of the technology itself, the degree to which CEOs are personally stepping in to drive adoption inside their companies has risen sharply.

History bears this out. As general manager of IBM’s internet division in the 1990s, I was responsible for the company’s overall internet strategy, but it could not have succeeded without the strong, unambiguous backing of then-CEO Lou Gerstner. In rallying the organization, there was a world of difference between saying “this is a particular executive’s strategy” and “this is the CEO’s strategy.”

The same was true a few years later, when IBM rolled out its Linux and open-source strategy company-wide. Without the active, public support of then-CEO Sam Palmisano, executing that strategy would have been impossible.

Taken together, these experiences suggest that unless the CEO actively champions and drives AI across the organization, a successful enterprise-level AI strategy is hard to build. The CEO’s role, however, is not to execute the strategy directly, but to endorse and propagate it across the company.

What CEO-led AI adoption requires

In CEO-led AI adoption, the core role is to actively champion the AI strategy both inside the organization and in the external market. Internally, the CEO should review progress with executives and department heads and benchmark against key competitors. Externally, the CEO should share progress with the board and discuss the AI strategy with customers, the press, financial analysts, and other key stakeholders.

The key caveat is that execution must be left to executives across the organization. Just as a CEO works closely with the CFO rather than running financial strategy directly, the AI strategy should be driven not by the CEO alone but in collaboration with the CIO and business-unit leaders: a structure of joint execution across AI applications, products, and services.

The approach varies with company size. At a small or midsize company, the CEO can be deeply involved in execution as well as strategy. At a large, complex enterprise, that level of direct involvement is hard to sustain.

The CIO role: a pivotal axis for the organization’s future

In a CEO-led AI strategy, the CIO matters more than ever. It is a role that shapes the organization’s future, and rarely has the CIO’s influence been this great.

The CIO must deeply understand AI, a complex and fast-evolving technology, and its business implications. At the same time, the CIO must work across the organization to build a realistic, executable, competitive AI strategy. Above all, the ability to clearly distinguish inflated expectations from what can actually be implemented is essential.

A CEO’s keen interest and enthusiasm for AI is a positive, but there is also a risk of being swept up in excessive market expectations, as in the dot-com frenzy of the 1990s. The CIO’s job is to convert the CEO’s commitment into organizational momentum. If a competitor has successfully built and deployed an AI solution, for example, that can be cited to accelerate the organization’s own execution.

Studying success stories alone is not enough, of course. At a large enterprise, a research group should explore new ideas, validate them through market prototypes, and then decide on further investment. A smaller company, by contrast, is better served by watching the market closely and quickly adopting proven approaches to stay competitive.

CIO-CEO collaboration will make or break the effort

CIOs who deliver in a CEO-led AI environment share common traits: a relationship of trust with the CEO, a track record of sound decision-making, and credibility with peers and the C-suite. Conversely, when a wide gap opens between the AI the CEO expects and what can actually be built, the company faces serious risk.

To drive AI successfully, the CIO must listen carefully to the CEO while clearly communicating their own judgment and recommendations, bringing in outside experts where needed. It is not enough to understand the technology itself; the CIO must also invest real time in how to explain it in business terms.

The CIO’s role is not mere technical execution but strategic business value creation. With a technology as complex and consequential as AI, nothing matters more than clearly conveying not only the opportunities but also what is actually possible and what is not.

If a CEO repeatedly makes unrealistic promises to the board, for instance, the board may consider replacing the CEO. And when a CEO-driven AI project fails, responsibility can extend beyond the CEO to the board, and to the CIO and the broader leadership team that failed to spell out the technical and economic limits. In the early stages of a new technology, inflated expectations often end in a financial bubble.

The CIO-CEO relationship will gradually rebalance

As understanding of AI deepens, the CEO’s direct involvement in technology strategy will likely recede. Once the direction is clear, the CEO will focus on communicating results to the market and on core business decisions such as expansion and M&A.

For CIOs who navigate this period well, it is a major opportunity. Because the CIO must earn trust across the organization, delivering results through their own strategic judgment can significantly strengthen their standing.

The next generation of CIOs who succeed in the AI era will need not only technical skill but clear thinking, articulation, and outstanding communication. AI is giving technology leaders a once-in-a-generation chance to prove their leadership.
dl-ciokorea@foundryco.com

Countdown to submit entries in Spain for the CIO 50 Awards

Once again this year, the benchmark awards return to recognize the best information systems executives (CIOs) in Spain and the most innovative IT projects carried out in the country. Known as the ‘Oscars of the IT industry,’ the initiative is part of the global CIO Awards program through which the international publication CIO, from publisher Foundry, honors top-tier executives who drive valuable business results through digital leadership, strategic vision, and technological innovation.

In Spain the awards run under the name CIO 50 Awards. Entries for the 2026 edition are open until May 29, and the awards ceremony will take place on October 8 in Madrid, as part of a major conference held in parallel on the theme “Responsible technology leadership, resilience and digital governance in the Spanish context.” During the event, winners from previous editions and current candidates will be able to share their success stories with other IT leaders, creating an invaluable peer-learning experience.

Who can take part

The CIO 50 Awards are open to CIOs and other technology executives and managers from companies, public administrations, and nonprofit organizations (NGOs).

Candidates must operate at the highest level of technology and transformation strategy and execution: the CIO 50 awards recognize leaders who set the organization’s direction, contribute to board-level decisions, and influence major technology investments. One entry requirement is that CIOs have been with their current organization for at least one year.

Consultants; IT, software, or hardware vendors; and market research or information services firms are not eligible for the CIO 50.

Cómo se elige a los premiados

Como en las ediciones anteriores, las candidaturas serán valoradas por un jurado independiente que analizará aspectos como los desafíos afrontados en los proyectos y las soluciones implementadas; los beneficios y mejoras logrados; el impacto en el negocio (optimización de costes, mejora de márgenes, crecimiento de ingresos); los aumentos en la productividad y la transformación de los procesos empresariales gracias a las TI.

The jury comprises Fernando Muñoz, director of CIO Executive by Foundry; Esther Macías, editorial director of CIO and COMPUTERWORLD in Spain; two veteran, now-retired CIOs: José María Tavera, who led IT strategy at giants such as Telefónica and Acciona, and José María Fuster, who headed IT at Banco Santander and is now a trustee of the Fundación Real Academia de Ciencias de España; Dimitris Bountolos, CIIO of Ferrovial and winner of the CIO of the Year category at the 2025 edition of the CIO 100 Awards Spain; Gracia Sánchez-Vizcaíno, CIO for Iberia & Latin America at Securitas Group; Mar Hurtado de Mendoza, global vice president of recruitment at IE University and adjunct professor at that business school; and Patricia Arboleda, president of Women in Tech – Spain.

A local distinction with a global soul

The history of the CIO 100 and CIO 50 awards for excellence in enterprise IT goes back more than three decades, when they were first presented to executives in the United States before expanding to other markets such as Germany, the United Kingdom, Spain, Singapore, Australia, South Korea, and India.

It is a key initiative for recognizing achievements, sharing knowledge, and connecting an influential community of IT decision-makers.

The publication CIO, part of the Foundry group, is currently accepting entries for the CIO 100 and CIO 50 awards in the following countries/regions:

  • CIO 100 USA (August 2026).– Entry phase closed; conference registration is open. More information
  • CIO of the Year Germany (October 2026).– Entry deadline: 15 May 2026. More information
  • CIO 100 UK (September 2026).– Entry deadline: 21 May 2026. More information
  • CIO 50 Spain (October 2026).– Entry deadline: 29 May 2026. More information
  • CIO 100 India (September 2026).– Entry deadline: 5 June 2026. More information
  • CIO 100 Australia (September 2026).– Entry deadline: 19 June 2026. More information
  • CIO 100 ASEAN (November 2026).– Entry deadline: 27 July 2026. More information
  • CIO 50 Japan (December 2026).– Entry deadline: mid-August 2026.
