
What is the best internet option for business travel? Why Holafly for Business is companies' top choice

For an IT team, one of the biggest risks when an employee travels is not the flight but internet access on arrival: a worker who lands without connectivity doesn't just lose time; they lose access to critical tools, fall back on insecure WiFi networks, or make decisions without real-time information. And in contexts where international mobility is part of daily life for thousands of companies, this stops being an inconvenience and becomes a real operational problem.

That is why solutions like Holafly for Business are starting to position themselves as an infrastructure layer that lets IT teams regain control over and secure their fleet of devices. Through corporate eSIMs, they ensure that any employee is connected from the moment they land, with no friction, no dependence on third parties, and with centralized visibility.

These are the seven keys that explain why more and more companies are betting on this model:

1. Connected from the moment you land

Much of the value lies precisely in that immediacy. Users can install the eSIM before traveling, and once they arrive their device is operational without hunting for WiFi, buying physical SIM cards, or performing additional configuration. This removes one of the most common friction points in corporate travel and significantly cuts unproductive time in the first hours of a trip.

2. Stronger security

From a security standpoint, the impact is just as relevant. Avoiding public WiFi networks is not merely good practice but a necessity in environments that handle sensitive data or access corporate systems. By prioritizing encrypted mobile connections, the attack surface shrinks and traffic on the move is protected, something especially critical for organizations.

3. Centralized control for IT

At the same time, managing connectivity stops being an operational burden for the department. Holafly Business Center centralizes eSIM assignment, automates billing, and maintains global visibility over consumption. This not only improves control but also reduces manual tasks and enables data-driven decision-making, increasingly necessary in distributed environments.

4. Savings of up to 85% on roaming charges

Another key point is financial predictability. Traditional roaming remains a frequent source of budget deviations, with costs that are hard to anticipate and unexpected bills. With models that allow savings of up to 85%, Holafly for Business not only optimizes spend but delivers something just as important: control and stability in the costs associated with international mobility.

5. Adapted to different usage profiles

The solution adapts to different usage profiles within the organization: from employees with high data demand, who need unlimited plans with features such as hotspot or support for two-factor authentication, to lower-intensity profiles that can operate on more limited plans without losing efficiency.

6. Always connected

In terms of continuity, solutions like Always On introduce a permanent connectivity layer in more than 150 countries, eliminating the need to rethink connectivity for every trip. Meanwhile, the Unlimited Plan, available in 220 destinations, addresses high-data-demand scenarios, integrating key features such as hotspot and SMS for two-factor authentication (2FA), essential in environments with strict security policies. For companies with low demand, per-day plans include automatic billing, immediate eSIM assignment for employees, and what the company bills as the easiest business platform on the market.

7. 100% digital and frictionless

Operationally, the deployment model also meets IT's current needs. Delivering the eSIM by email enables remote device activation, with no logistics or waiting times, in line with Zero Touch Provisioning models. This approach also fits IT sustainability strategies by eliminating physical SIM cards and reducing waste, something increasingly present in companies' ESG policies.

With a 4.6/5 rating on Trustpilot and more than 80,000 reviews, Holafly for Business ranks among the solutions best rated by companies and business travelers. But beyond the numbers, what is really changing is how companies understand connectivity on the move.

Because in an environment where teams work from anywhere in the world, connectivity is no longer an add-on. It is infrastructure. And, increasingly, it is also synonymous with operational peace of mind.

How to create an effective business continuity plan

Organizations face an increasingly threatening and volatile operating environment. Executives report an increase in risks across multiple areas, including cyber-enabled fraud, phishing, and supply chain disruptions, according to the World Economic Forum's 'Global Cybersecurity Outlook 2026' report.

At the same time, executives are increasingly worried about how artificial intelligence, digital interdependencies, geopolitics, and today's complex operating environment raise the risk involved in securing their organization's technology and ensuring business continuity. Two-thirds (66%) of organizations have increased financial or resource support for business continuity and resilience in response, according to the Business Continuity Institute's 'State of Continuity and Resilience 2025' report.

Even so, business leaders are bracing for increasingly frequent, high-impact incidents, making a solid business continuity plan more critical than ever. "Every business should have the mindset that it will face a disaster, and every business needs a plan to address the different potential scenarios," says Goh Ser Yoong, CISO at Ryt Bank and a member of ISACA's Emerging Trends Working Group.

A business continuity plan gives organizations the best shot at weathering a disaster by providing ready-made directions on who should perform what tasks, and in what order, to keep the business viable. Without such a plan, the organization will take longer than necessary to recover from an event or incident, if it recovers at all.

What is a business continuity plan?

A business continuity plan (BCP) is a strategic playbook created to help an organization maintain or quickly resume business functions in the face of a disruption, whether caused by a natural disaster, civil unrest, a cyberattack, or any other threat to business operations.

"Continuity is about knowing the minimum time or loss an organization can absorb and still be viable and conduct business. It's about how quickly it can come back up before things get bad for its clientele or business, and what systems and processes it must bring back up, and in what order," says Matt Chevraux, managing director at FTI Consulting.

As such, a business continuity plan outlines the procedures the organization must follow to minimize downtime, covering business processes, assets, human resources, business partners, and more.

A business continuity plan is not the same as a disaster recovery plan, which focuses on restoring IT infrastructure and operations after a crisis. Still, a disaster recovery plan is part of the overall strategy to ensure business continuity, and the business continuity plan should inform the action items detailed in an organization's disaster recovery plan. The two are tightly coupled, which is why they are often linked together as BCDR.

Business continuity also differs from resilience, although the two are interrelated. Business continuity focuses on restoring operations in the event of a disruption, whereas business resilience refers to an organization's strategy for responding to all sorts of internal and external forces to ensure its long-term survival and success.

Elements of business continuity planning today

Disruptive events are inevitable, according to researchers, risk leaders, and executive advisers. "Gone are the days when organizations used business continuity or resilience programs as a kind of insurance in case something failed. Now organizations must face reality: it is only a matter of time until a catastrophic incident occurs and affects customers," Forrester Research writes in its 'Business Continuity Management Software Landscape, Q1 2026' report.

Executives are not only operating in an environment where a catastrophic incident is a matter of when, not if; they are also working in a world where the complexity of business operations has increased dramatically.

Organizations must now factor into their continuity plans a growing volume of AI uses, vendors, and third-party digital connections, says Ross Tisnovsky, a partner at Everest Group and leader of the firm's CIO research and advisory practice.

For example, today's plans must address AI availability as well as its accuracy and its cyber risks, such as the threat of prompt injection attacks, he explains, noting that current continuity plans must account for more novel concerns. "The concern with infrastructure and applications used to be availability, but what if AI is giving you junk? That degrading quality of output is a continuity concern."

Similarly, organizations must evaluate and address their ever-growing operational reliance on third parties, whether hyperscalers or LLM providers, a factor that also adds complexity to business continuity plans, Tisnovsky says.

"We now have all these providers, and on top of that we rely far more on APIs and the service mesh. We are relying on connections we don't even know about," he explains. "That can create exposure you cannot control."

All these considerations come on top of the myriad conventional risks that a business continuity plan has always had to address, Tisnovsky adds.

Building (and updating) a business continuity plan

Whether building the organization's first business continuity plan or updating an existing one, the process involves several essential steps.

Assess business processes for criticality and vulnerability: business continuity planning starts with understanding what matters most to the business. Assess business processes to determine which are the most critical; which are the most vulnerable, and to what types of events; and what the potential losses would be if those processes went down for a day, a few days, a week, or more.

"Start with a business impact analysis: What are the critical things that make the business run?" recommends Lawrence Bilker, CIO of Lift Solutions Holdings. "Identify the business processes and systems that make the company work."

This assessment is more demanding than ever because of the complexity of today's hybrid workplace, the modern IT environment, and reliance on business partners and third-party providers to perform or support critical processes.

As a result, the assessment requires an inventory not only of key processes but also of their supporting components, including IT systems, networks, people, and outside vendors, as well as the risks to those components, Goh says.
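The inventory-plus-ranking step described above can be sketched in code. The following is a minimal illustration, not a standard methodology: the process names, the 1–5 criticality scale, and the loss figures are all hypothetical assumptions chosen for the example.

```python
# Hypothetical sketch of a business impact analysis inventory:
# each process records its criticality, estimated loss per day of
# downtime, and the supporting components it depends on.
from dataclasses import dataclass, field

@dataclass
class BusinessProcess:
    name: str
    criticality: int              # 1 (low) .. 5 (mission-critical), assumed scale
    daily_loss: float             # estimated loss per day of downtime
    dependencies: list = field(default_factory=list)  # systems, vendors, people

processes = [
    BusinessProcess("order-processing", 5, 120_000.0,
                    ["ERP", "payment-gateway-vendor", "warehouse-network"]),
    BusinessProcess("payroll", 4, 30_000.0, ["HR-system", "bank-connection"]),
    BusinessProcess("internal-wiki", 1, 500.0, ["wiki-server"]),
]

# Rank by criticality, then by estimated loss, to suggest a recovery order.
ranked = sorted(processes, key=lambda p: (-p.criticality, -p.daily_loss))
for p in ranked:
    print(f"{p.name}: criticality={p.criticality}, "
          f"loss/day={p.daily_loss:,.0f}, depends on {p.dependencies}")
```

Even a small table like this makes the later steps concrete: the ranking drives recovery order, and the dependency lists feed the component-and-risk inventory Goh describes.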

Determine your organization's RTO and RPO: the next step is determining the organization's recovery time objective (RTO), which is the target amount of time between the point of failure and the resumption of operations, and the recovery point objective (RPO), which is the maximum amount of data loss an organization can withstand.

Each organization has its own RTO and RPO based on its business, industry, regulatory requirements, and other operational factors. Moreover, different parts of a business can have different RTOs and RPOs, which executives must establish.

Some businesses "need to be up all the time without fail, and so they need high availability in place, meaning one or two backups," Bilker says.
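The relationship between these objectives and a concrete backup setup can be checked mechanically: worst-case data loss roughly equals the interval between backups, and worst-case downtime is the time needed to restore service. The sketch below assumes that simplification; all the hour figures are illustrative examples, not recommendations.

```python
# Sketch: checking a backup/restore setup against RTO and RPO targets.
# Simplifying assumption: data loss window = backup interval,
# downtime = estimated restore time.
from datetime import timedelta

def meets_objectives(backup_interval: timedelta, estimated_restore: timedelta,
                     rpo: timedelta, rto: timedelta) -> dict:
    return {
        "rpo_ok": backup_interval <= rpo,    # data loss within tolerance
        "rto_ok": estimated_restore <= rto,  # downtime within tolerance
    }

# Example: a unit with a 4-hour RPO and an 8-hour RTO, backed up hourly,
# restorable in about 6 hours.
result = meets_objectives(
    backup_interval=timedelta(hours=1),
    estimated_restore=timedelta(hours=6),
    rpo=timedelta(hours=4),
    rto=timedelta(hours=8),
)
print(result)  # both objectives met
```

Running the same check per business unit, with each unit's own RTO/RPO, surfaces mismatches (say, a nightly backup behind a 4-hour RPO) before a drill or a real incident does.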

Detail the steps, roles, and responsibilities for continuity: business leaders should then use the RTO and RPO, along with their business impact analysis, to determine the specific tasks that must happen, who performs them, and in what order, to ensure business continuity.

One common business continuity planning tool is a checklist that includes supplies and equipment, the location of data backups and backup sites, where the plan is available and who should have it, and contact information for emergency responders, key personnel, and backup site providers.

There is no need to identify every possible risk to the organization when building or updating a business continuity plan, says Kayne McGladrey, a senior member of the nonprofit professional association IEEE.

The list of possible impact scenarios is extensive. Instead of trying to identify them all, McGladrey advises identifying the most likely and most representative types of incidents and then focusing on how such incidents could impact the business. From there, leaders must determine which impacts would be intolerable based on the organization's risk tolerance. "Think about business risks: not the technical risks and not the causes, but the impacts on the business," McGladrey says.

The objective, he stresses, is to create a business continuity plan capable of instructing the organization on how to recover from an unexpected event of any kind.
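McGladrey's filtering logic — representative scenarios in, intolerable impacts out — can be expressed as a few lines of code. The scenario names, scores, and tolerance threshold below are invented for illustration; each organization would substitute its own.

```python
# Sketch: narrowing representative incident scenarios to the ones whose
# business impact exceeds the organization's tolerance. All values are
# hypothetical placeholders.
scenarios = {
    "regional cloud outage":      {"likelihood": 0.30, "impact": 9},
    "ransomware on file servers": {"likelihood": 0.20, "impact": 10},
    "key vendor insolvency":      {"likelihood": 0.05, "impact": 7},
    "office power failure":       {"likelihood": 0.40, "impact": 3},
}

IMPACT_TOLERANCE = 6  # impacts above this level are intolerable (assumed)

# Plan in detail only for intolerable impacts, most likely first.
planning_focus = sorted(
    (name for name, s in scenarios.items() if s["impact"] > IMPACT_TOLERANCE),
    key=lambda name: -scenarios[name]["likelihood"],
)
print(planning_focus)
# ['regional cloud outage', 'ransomware on file servers', 'key vendor insolvency']
```

Note that the frequent-but-tolerable power failure drops out, while the rare vendor insolvency stays in: the filter keys on impact, not likelihood, which is exactly the shift from technical risks to business impacts that McGladrey describes.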

The importance of testing the business continuity plan

Testing and practice are other critical components of business continuity planning, as they show whether, and how well, a plan will work. They also help prepare stakeholders for an actual incident, building the muscle memory needed to respond quickly and confidently during a crisis.

"Testing and training for people are critical so everyone knows what to do in the event of a failure," Bilker says.

They also help identify gaps in the plan as devised. For instance, Bilker says testing and training could uncover the lack of backups or alternatives for critical systems, providers, or people.

Additionally, testing and training help identify where objectives may be misaligned. For example, executives may have deprioritized restoring certain IT systems, only to realize during a drill that those systems are essential for supporting critical processes.

Types and timing of tests

Many organizations test a business continuity plan two to four times a year. Experts say the frequency of tests, as well as of reviews and updates, depends on the organization: its industry, its speed of innovation and transformation, turnover among key personnel, the number of business processes, and so on.

Common tests include tabletop exercises, structured walk-throughs, and simulations. Test teams are usually made up of the recovery coordinator and members from each functional unit.

A tabletop exercise usually takes place in a conference room, with the team poring over the plan, looking for gaps and ensuring all business units are represented.

In a structured walk-through, team members go through their components of the plan in detail to identify weaknesses. Often, the team works through the test with a specific disaster in mind. Some organizations incorporate drills and disaster role-playing into the structured walk-through. Any weaknesses should be corrected and an updated plan distributed to all pertinent staff.

Some experts advise running a full emergency evacuation drill at least once a year.

Disaster simulation testing, which can be quite involved, should also be performed annually. For this test, create an environment that simulates an actual disaster, with all the equipment, supplies, and personnel (including business partners and vendors) that would be needed. The simulation helps determine whether the organization can carry out critical business functions during an actual event.

During each phase of business continuity plan testing, include some new employees on the test team. Fresh eyes may detect gaps or lapses of information that experienced team members could overlook.

Reviewing and updating the business continuity plan should be an ongoing process; otherwise, plans go stale and are of no use when needed. "How often it needs to be updated should be driven by the business," Tisnovsky says.

Bring key personnel together at least annually to review the plan and discuss areas that require modification. Before the review, solicit feedback from staff to incorporate into the plan. Ask all departments or business units to review the plan, including branch locations and other remote units.

Furthermore, a strong business continuity function calls for reviewing the organization's response after an actual incident. This allows executives and their teams to identify what the organization did well and where it needs to improve.

Additional best practices

According to management advisers and experienced executives, the following best practices can help organizations with their business continuity planning:

Use AI to help build and maintain the plan: Zach Rossmiller, associate vice president and CIO at the University of Montana, uses a custom generative AI tool to analyze the organization's processes, procedures, infrastructure, and architecture, as well as its business continuity plan, to identify potential gaps, such as the need to test the university's data center generators. Given the tool's performance, Rossmiller advises others to use AI for business continuity planning and testing. Chevraux says AI can also be used for data discovery, mapping, and conducting business impact assessments.

For his part, Bilker stresses the importance of including communication plans as part of the business continuity plan. "During an incident it is hard to remember who gets what information and when, and who distributes the information, so the business continuity plan should spell that out," he says.

Likewise, the plan should identify who holds which roles and responsibilities during and after an incident, to speed the response and reduce confusion.

Bilker also advises organizations to revisit their continuity plans whenever a major business change occurs. Entering new markets or switching from one key cloud provider to another should trigger an update to the business continuity plan.

How to ensure support for and awareness of the business continuity plan

Every business continuity plan must have the backing of top management. That means senior leadership should be represented when the plan is created and updated; no one should delegate that responsibility to subordinates. The plan is also more likely to stay current and viable if senior management makes it a priority by dedicating time to proper review and testing.

Leadership is also key to promoting user awareness. If employees don't know about the plan, how can they react appropriately when every minute counts?

Although plan distribution and training may be handled by business unit managers or HR staff, someone in senior management should kick off the training and underscore its importance. This has a greater impact on all employees, lending the plan greater credibility and urgency.

How to create an effective business continuity plan

Organizations are seeing a more threatening and volatile operating environment.

Executives report an increase in risks across multiple areas, including cyber-enabled fraud, phishing, and supply chain disruptions, according to the World Economic Forum’s 2026 Global Cybersecurity Outlook report.

At the same time executives are increasingly worried about how artificial intelligence, digital interdependencies, geopolitics, and today’s complex operating environment increase risk to securing their organization’s technology and ensuring business continuity.

Two-thirds (66%) of organizations have increased financial or resource support for business continuity and resilience in response, according to the 2025 State of Continuity and Resilience report from the Business Continuity Institute.

Even so, business leaders are bracing for increasingly more frequent impactful incidents, making a solid business continuity plan more critical than ever.

“Every business should have the mindset that they will face a disaster, and every business needs a plan to address the different potential scenarios,” says Goh Ser Yoong, CISO at Ryt Bank and a member of the Emerging Trends Working Group at ISACA.

A business continuity plan gives organizations the best shot at navigating a disaster by providing ready-made directions on who should do what tasks in what order to keep the business viable.

Without such a plan, the organization will take longer than necessary to recover from an event or incident — if it recovers at all.

What is a business continuity plan?

A business continuity plan (BCP) is a strategic playbook created to help an organization maintain or quickly resume business functions in the face of disruption, whether it is caused by a natural disaster, civic unrest, cyberattack, or any other threat to business operations.

“Continuity is about knowing the minimum time or loss an organization can absorb and still be viable and conduct business. It’s about how quickly it can come back up before it gets bad for its clientele or business, and what systems and processes it has to bring back up and in what order,” says Matt Chevraux, managing director of FTI Consulting.

As such, a business continuity plan outlines the procedures the organization must follow to minimize downtime, covering business processes, assets, human resources, business partners, and more.

A business continuity plan is not the same as a disaster recovery plan, which focuses on restoring IT infrastructure and operations after a crisis. Still, a disaster recovery plan is part of the overall strategy to ensure business continuity, and the business continuity plan should inform the action items detailed in an organization’s disaster recovery plan. The two are tightly coupled, which is why they often are linked together as BCDR.

Business continuity differs from resilience, too, although they are also interrelated. Business continuity focuses on restoring operations in the event of a disruption, whereas business resilience speaks to an organization’s strategy for responding to all sorts of internal and external forces to ensure its long-term survival and success.

Elements of business continuity planning today

Disruptive events are inevitable, according to researchers, risk leaders, and executive advisers.

“Gone are the days when organizations used business continuity or resilience programs as a kind of insurance in case something failed. Now, organizations must face the reality; it’s only a matter of time until a catastrophic incident occurs and affects customers,” Forrester Research writes in its Business Continuity Management Software Landscape, Q1 2026 report.

Executives are not only operating in an environment where the risk of a catastrophic incident is not-an-if-but-when scenario, they’re also working in a world where the complexity of business operations has increased dramatically.

Now organizations must consider as part of their continuity plans a growing volume of AI uses, vendors, and third parties’ digital connections, says Ross Tisnovsky, a partner at Everest Group and leader of the firm’s CIO research and advisory practice.

For example, plans today must address AI availability as well as its accuracy and its cyber risks, such as the threat of prompt injection attacks, he explains, noting that today’s continuity plan must account for more novel concerns. “The concern with infrastructure and applications was availability, but what if AI is giving you junk? That degrading quality of output is a continuity concern.”

Similarly, organizations must evaluate and address their ever-growing operational reliance on third parties, whether they’re hyperscalers or LLM providers, a factor that also adds more complexity to business continuity plans, Tisnovsky says.

“We now have all these providers, and on top of it we’re relying on APIs and the service mesh way more. We’re relying on potential connections we don’t even know about it,” he explains. “That can create exposure you cannot control.”

All these considerations are in addition to the myriad conventional risks that a business continuity plan has always had to address, Tisnovsky adds.

Building (and updating) a business continuity plan

Whether building the organization’s first business continuity plan or updating an existing one, the process involves multiple essential steps.

Assess business processes for criticality and vulnerability: Business continuity planning starts with understanding what’s most important to the business. Assess business processes to determine which are the most critical; which are the most vulnerable and to what type of events; and what are the potential losses if those processes go down for a day, a few days, a week, or more.

“Start with a business impact analysis: What are the critical things that make the business run,” says Lawrence Bilker, CIO of Lift Solutions Holdings. “Identify the business processes and systems that make the company work.”

This assessment is more demanding than ever due to the complexity of today’s hybrid workplace, the modern IT environment, and reliance on business partners and third-party providers to perform or support critical processes.

As a result, assessment requires an inventory of not only key processes but supporting components — including IT systems, networks, people, and outside vendors — as well as the risks to those components, Goh says.

Determine your organization’s RTO and RPO: The next step is determining the organization’s recovery time objective (RTO), which is the target amount of time between point of failure and the resumption of operations, and the recovery point objective (RPO), which is the maximum amount of data loss an organization can withstand.

Each organization has its own RTO and RPO based its business, industry, regulatory requirements, and other operational factors. Moreover, different parts of a business can have different RTOs and RPOs, which executives must establish.

Some businesses “need to be up all the time without fail, and so they need high availability in place, meaning one or two backups,” Bilker says.

Detail the steps, roles, and responsibilities for continuity: Business leaders should then use RTO and RPO, along with their business impact analysis, to determine specific tasks that need to happen, by whom, and in what order to ensure business continuity.

One common business continuity planning tool is a checklist that includes supplies and equipment, the location of data backups and backup sites, where the plan is available and who should have it, and contact information for emergency responders, key personnel, and backup site providers.

There’s no need to identify every possible risk to the organization when building or updating a business continuity plan, says Kayne McGladrey, a senior member of nonprofit professional association IEEE.

The list of possible impact scenarios is extensive. Instead of trying to identify them all, McGladrey advises identifying the most likely and most representative types of incidents and then focusing on how such incidents could impact the business. From there, leaders must determine what impacts would be intolerable based on the organization’s risk tolerance.

“Think about business risks, not the technical risks and not causes, but the impacts on the business,” McGladrey says.

The objective, he stresses, is to create a business continuity plan capable of instructing the organization on how to recover from an unexpected event of any kind.

The importance of testing the business continuity plan

Testing and practicing are other critical components of business continuity planning, as they show whether or how well a plan will work. They also help prepare stakeholders for an actual incident, building muscle memory to respond quickly and confidently during a crisis.

“Testing and training for people are critical so everyone knows what to do in an event of a failure,” Bilker says.

They also help identify gaps in the devised plan. For instance, Bilker says testing and training could uncover the lack of backups or alternatives to critical systems, providers, or people.

Additionally, testing and training help identify where there may be misalignment of objectives. For example, executives may have deprioritized the importance of restoring certain IT systems only to realize during a drill that those are essential for supporting critical processes.

Types and timing of tests

Many organizations test a business continuity plan two to four times a year. Experts say the frequency of tests, as well as reviews and updates, depends on the organization — its industry, its speed of innovation and transformation, the amount of turnover of key personnel, the number of business processes, and so on.

Common tests include tabletop exercises, structured walk-throughs, and simulations. Test teams are usually composed of the recovery coordinator and members from each functional unit.

A tabletop exercise usually occurs in a conference room with the team poring over the plan, looking for gaps and ensuring business units are represented.

In a structured walk-through, team members walk through their components of the plan in detail to identify weaknesses. Often, the team works through the test with a specific disaster in mind. Some organizations incorporate drills and disaster role-playing into the structured walk-through. Any weaknesses should be corrected and an updated plan distributed to all pertinent staff.

Some experts advise a full emergency evacuation drill at least once a year.

Disaster simulation testing — which can be quite involved — should also be performed annually. For this test, create an environment that simulates an actual disaster, with all the equipment, supplies, and personnel (including business partners and vendors) who would be needed. The simulation helps determine whether the organization can carry out critical business functions during an actual event.

During each phase of business continuity plan testing, include some new employees on the test team. A pair of fresh eyes might detect gaps or lapses of information that experienced team members could overlook.

Reviewing and updating the business continuity plan should be an ongoing process. Otherwise, plans go stale and are of no use when needed.

“How often it needs to be updated should be driven by the business,” Tisnovsky says.

Bring key personnel together at least annually to review the plan and discuss areas that require modification.

Prior to the review, solicit feedback from staff to incorporate into the plan. Ask all departments or business units to review the plan, including branch locations or other remote units.

Furthermore, a strong business continuity function calls for reviewing the organization’s response after an actual incident. This allows executives and their teams to identify what the organization did well and where it needs to improve.

Additional best practices

According to management advisers and experienced executives, the following best practices can help organizations with their business continuity planning:

Use AI to help build and maintain the plan: Zach Rossmiller, associate vice president and CIO of the University of Montana, uses a customized generative AI tool to analyze the organization’s processes, procedures, infrastructure, and architecture as well as its business continuity plan to identify potential gaps, such as the need to test generators for the university’s data center. Given the tool’s performance, Rossmiller advises others to use AI for business continuity planning and testing. Chevraux says AI can also be used for data discovery, mapping, and conducting business impact assessments.

Meanwhile, Bilker stresses the importance of including communications plans as part of the business continuity plan.

“It’s difficult during an incident to remember who gets what information when and who distributes information, so the business continuity plan should outline that information,” he says.

Similarly, the plan should identify who owns what roles and responsibilities during and after an incident to speed response and reduce confusion.

Bilker also advises organizations to revisit their continuity plans any time there is a major change to the business. Entering new markets or switching from one key cloud provider to another should trigger an update to the business continuity plan.

How to ensure business continuity plan support and awareness

Every business continuity plan must be supported from the top down. That means senior management must be represented when creating and updating the plan; no one can delegate that responsibility to subordinates. In addition, the plan is likely to remain fresh and viable if senior management makes it a priority by dedicating time for adequate review and testing.

Management is also key to promoting user awareness. If employees don’t know about the plan, how will they be able to react appropriately when every minute counts?

Although plan distribution and training can be conducted by business unit managers or HR staff, have someone from the top kick off training and underscore its significance. It’ll have a greater impact on all employees, giving the plan more credibility and urgency.

Your CEO just got AI FOMO. Here are 6 tips on what to do next.

Every CIO I know has had some version of this conversation: their CEO comes back from a golf trip with their buddy, or a conference with peers, and is told AI is about to automate everything at their company, from HR to marketing and finance. No humans in the loop, just AI. The CEO then calls an all-hands Monday morning, and the CIO is suddenly on the hook to make it all happen.

The instinct for CEOs to chase unsubstantiated claims is understandable since they’re responding to competitive pressure. But that leaves CIOs responsible for closing the gap between ambition and reality. Making AI work in an organization with decades of accumulated process, permission frameworks, and cultural inertia is very different from deploying it in a demo.

The best response isn’t to push back on the ambition, but to redirect it. Translate the CEO’s vision into an honest map of what has to happen for the organization to get there, including the infrastructure, governance, and training. That helps convert the kneejerk compulsion to move faster into a concrete plan that leadership can get behind.

Here’s what CIOs should actually be focused on to get where their CEOs want them to go, regardless of what’s discussed on the links.

1. Start where AI can build its own credibility

The hype machine wants you to climb Everest on day one. Instead, identify the repetitive tasks where AI can prove itself on familiar ground — the workflows your team already knows well, where results are easy to verify and the bar for trust is attainable.

The goal is the Eureka moment when a skeptic on your team sees a real result and becomes a believer. Those moments compound. When someone has seen AI make their work easier in a context they understand, they’re more likely to help you move things forward. You can’t force that change, but you can engineer the conditions for it.

2. Models will commoditize. Context will not.

Every few months, a new model claims to be smarter, faster, and cheaper than the last one. Don’t be distracted by that race. The lasting advantage in enterprise AI doesn’t come just from which model you’re running; it comes from the quality, governance, and semantic clarity of the data feeding it. Enterprises that invest in consistent business definitions, well-structured data, and clear lineage will outperform those that don’t, regardless of which model is in fashion. Context is your competitive moat. Focus on building that.

3. Nail down the permissions

In a world of dashboards, you know exactly what data will appear on a given page, so you can set permissions in advance for who can access it. In an AI world, the system can generate outputs that were never pre-designed. So how do you determine who has the right to see a result that was never anticipated?

Before deploying any agent that acts on someone’s behalf, such as filing a request, surfacing payroll data, or populating a record, first determine whether your existing permissions and access control frameworks can handle outputs that were never planned for. Most can’t. This is a prerequisite of what your CEO is asking for: the unglamorous infrastructure work that determines whether your AI is trustworthy in production. It needs to happen before you scale, not after.
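To make that prerequisite concrete, one common pattern is a deny-by-default release gate: an output that was never pre-designed is shown only if the requester is cleared for every source record the agent drew on. The sketch below is purely illustrative; the record structure, role-based ACLs, and `can_release` helper are assumptions for the example, not any particular product’s API.

```python
# Deny-by-default release gate for AI-generated outputs.
# Illustrative sketch: an output is releasable only if the requesting
# user is cleared for EVERY source record the model drew on.

from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    record_id: str
    acl: set[str] = field(default_factory=set)  # roles allowed to read this record

@dataclass
class AgentOutput:
    text: str
    sources: list[SourceRecord]  # provenance attached by the agent

def can_release(output: AgentOutput, user_roles: set[str]) -> bool:
    """An unplanned output inherits the most restrictive ACL of its sources."""
    if not output.sources:
        return False  # no provenance means no release (deny by default)
    return all(record.acl & user_roles for record in output.sources)

# Example: an answer derived partly from payroll data is blocked for a
# user who lacks the payroll role, even though the handbook is public.
payroll = SourceRecord("pay-2024-q3", acl={"hr-payroll"})
handbook = SourceRecord("handbook-v7", acl={"all-employees", "hr-payroll"})
answer = AgentOutput("Average Q3 salary was ...", [payroll, handbook])

print(can_release(answer, {"all-employees"}))  # False
print(can_release(answer, {"hr-payroll"}))     # True
```

The design choice worth noting is the default: with no provenance, the gate refuses, which is what makes outputs that were “never anticipated” governable at all.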

4. Build an editing culture, not a writing one

For decades, engineers, analysts, and operations teams have been trained to write code, build reports, and define new processes. AI upends that. The skill now is editing — auditing what the system produces, catching what it got wrong, and knowing where to push back.

The truth is most people aren’t naturally good at editing because they’ve never had to be. That’s a skills gap that needs to be closed early on. Invest in helping engineers, analysts, and managers develop the judgment to evaluate AI outputs, not just generate them. Editing must become a core enterprise competency.

5. Measure behavior change, not tool adoption

Login data is a vanity metric. If your engineers are accessing AI coding tools but aren’t changing how they build, you haven’t adopted anything. The metric that makes more sense is productivity output. In agile terms, a team that completes 20 story points per sprint should hit about 28 with AI, not because the tools are magic, but because the repetitive work gets faster. If you’re not seeing that, you’re measuring the wrong thing. Pay attention to output, not usage metrics.
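That expectation is directly measurable from sprint history: compare average velocity before and after the rollout and see whether the uplift approaches the roughly 40% implied by 20 to 28 story points. A minimal sketch, with hypothetical numbers:

```python
# Compare sprint velocity before and after an AI tooling rollout.
# The sprint figures here are hypothetical; the point is to measure
# output (story points delivered), not logins.

def avg(xs: list[float]) -> float:
    return sum(xs) / len(xs)

before = [19, 21, 20, 20]  # story points per sprint, pre-rollout
after = [26, 29, 27, 30]   # story points per sprint, post-rollout

uplift = (avg(after) - avg(before)) / avg(before)
print(f"velocity uplift: {uplift:.0%}")  # 40% with these sample numbers
```

If a dashboard can’t produce this one number per team, the organization is tracking usage, not behavior change.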

6. Reframe your organization’s relationship with failure

The instinct to de-risk everything made sense when software deployments were expensive and slow to reverse. AI works differently. The outputs are probabilistic, the iteration cycles are fast, and being overly cautious can cost valuable time. CIOs need to give teams permission to experiment in ways that feel uncomfortable by traditional enterprise standards, all while building the feedback loops that make fast failure safe. That culture shift has to be modeled from the top.

FOMO isn’t going away

CEOs will keep getting pulled into cycles of urgency and FOMO, and that pressure will keep landing on CIOs. The organizations that make real progress will be the ones that redirect that energy into infrastructure that makes AI trustworthy, measurement systems that show what’s working, and cultural changes that make adoption stick. That’s the agenda that’ll move your organization forward.

AI sprawl: Why your productivity trap is about to get expensive

I have seen this movie before.

A decade ago, at Tesla, our Finance team faced a data crisis. We had information scattered across accounting, supply chain and delivery systems, all disconnected, all using different structures. The engineering team was rightfully focused on Full Self-Driving (FSD) and manufacturing. So, we did what productivity-hungry teams always do: We built our own solution. We taught ourselves Structured Query Language (SQL), normalized the data with creative IF-THEN logic and created our own reporting database.

It worked beautifully. Until it became a governance nightmare. The VP of Engineering hated our siloed system with embedded business logic. We eventually handed it over to IT, but not before our workaround forced the company to finally resource a proper data team.

The pattern is always the same: Productivity-hungry teams build workarounds faster than the organization can govern them, and by the time leadership notices, the workarounds have become the infrastructure.

That was more than a decade ago. The pattern took years to unfold.

Today, I am watching the exact same dynamic play out in insurance and industries across the board, but compressed into months, not years. AI adoption is sprawling across organizations, led by the same productivity-hungry individuals, but without central platforms or governance. Leadership has not created space for safe experimentation, so adoption spreads like a city without a highway system. The difference? Back then, we were building SQL databases. In 2026, we are building AI agents. And the cost of fragmentation is exponentially higher.

What is AI sprawl?

AI sprawl is what happens when the cost of building AI drops faster than an organization can govern it. Teams spin up models, agents and automations independently. Each one works in isolation. None of them connect. The result is fragmented data, drifting decisions and intelligent systems that quietly get abandoned.

It happens because execution has become cheap. Large Language Model (LLM) APIs, no-code tools and cloud infrastructure have made spinning up AI trivially easy. A claims team builds an automation to speed adjudication. Underwriting builds a model to assess risk. Customer service deploys a chatbot. Each initiative delivers local value. No single project looks like a problem.

But collectively, they create an ungovernable landscape.

Over the past 18 months, the GenAI acceleration intensified what IDC calls the GenAI scramble: scattered, fragmented and sometimes redundant applications launched by business-led initiatives without central oversight. Many organizations have fallen into what researchers describe as a productivity trap: Focusing on short-sighted value generation instead of scalability, which limits their ability to create reusable capabilities across departments.

AI sprawl is everywhere

A major property and casualty carrier recently invited us to speak with their innovation leadership about implementing process automation. We spoke with more than 10 key stakeholders across multiple lines of business and found more than a dozen different POCs and local solutions across claims intake, underwriting and fraud detection.

Six of them were solving overlapping problems. None shared data infrastructure. Two had been abandoned months earlier but were still running and still being billed.

This is not an outlier. It is the norm.

AI sprawl persists because it is insidious, hiding in plain sight unless you look for it. Business units move fast, build independently and solve immediate problems. IT discovers shadow AI only when something breaks, when an audit is triggered or when a vendor renewal surfaces a tool nobody knew existed. And the symptom multiplies as the number of innovative teams in the organization grows.

The 4 hidden costs of sprawl

AI sprawl creates costs that compound over time, many of which are not visible in any single budget line. It results in a dangerous cascade of failures:

  1. Governance becomes impossible. Companies cannot govern what they cannot see. When AI systems scatter across departments, audit trails fragment. Bias monitoring becomes inconsistent. Explainability standards vary by team.
  2. Scaling stalls. Disconnected systems cannot integrate. Every new initiative starts from scratch instead of building on shared infrastructure.
  3. Maintenance and redundant spending multiply. Teams that built AI to accelerate their work end up spending most of their time maintaining it. One carrier reported that 60% of their AI engineering capacity was devoted to maintaining existing tools rather than building new capabilities. Meanwhile, teams unknowingly pay for overlapping capabilities because nobody has a complete view of AI spending.
  4. Talent drains away. The best AI engineers want to solve hard problems. When they are cornered into spending their time maintaining fragmented infrastructure, they walk out the door.

Why traditional governance fails

Seventy percent of large insurers are investing in AI governance frameworks. Yet only 5% have mature frameworks in place. This gap is not about commitment or resources. It is about a category mistake.

For the last two decades, enterprise software governance worked because the software itself worked a certain way. Systems were point solutions. A claims platform did claims. A policy admin system did policy admin. Each tool had a clear owner, a defined scope and a predictable boundary. Governance could wrap around the edges, through access controls, audit logs, change management, vendor reviews, because the edges were visible. We governed the perimeter because the perimeter was the product.

AI is not a point solution. It is foundational technology, closer to electricity or a database than to a piece of software. It does not sit inside a defined boundary; it flows across every process, every decision and every department that touches data. And because it flows, it cannot be governed at the perimeter.

This is why carriers applying the old playbook keep running in place. Policy documents, oversight committees and compliance checklists were designed to govern systems that stood still. AI does not stand still. It is built, modified, retrained and extended by the same teams it is meant to serve, often in the same week. By the time a governance committee reviews it, three more versions exist somewhere else in the organization.

The failure is not that carriers are governing AI badly. It is that they are governing it as if it were software, when it’s actually infrastructure. Infrastructure requires a different discipline: Shared foundations, common standards and the assumption that everyone will build on top of it. You do not govern electricity by reviewing each appliance. You govern it by standardizing the grid.

Until carriers make that shift, their frameworks will keep maturing on paper while sprawl compounds underneath.

3 questions every insurance CIO should be able to answer

If the failure of traditional governance is a category mistake, the first job of leadership is to check which category they are actually operating in. These three questions are not meant to produce tidy answers. They are meant to reveal whether you are still governing AI as software when you should be governing it as infrastructure.

1. Are you governing AI at the perimeter, or at the foundation?

Look at your current AI governance artifacts, such as the policies, the committees, the review processes. Are they designed to wrap around individual tools after they are built, or to set shared standards that every tool must be built on top of? Perimeter governance asks, “is this specific model compliant?” Foundational governance asks, “does every model in this organization inherit the same definitions, the same lineage and the same guardrails by default?” If your governance only kicks in at review time, you’re still treating AI like software. You’re already behind.

2. If you standardized one thing across your entire organization tomorrow, what would create the most leverage and why haven’t you?

Every carrier has a list of things they know should be standardized but have not been. Shared definitions for core entities. Common ways of handling unstructured inputs. A single source of truth for how decisions get logged. The question is not which item belongs at the top of the list; most CIOs already know. The question is what has been blocking the standardization: Is it political, budgetary, or organizational? Because that blocker, whatever it is, is also what is letting sprawl compound. Governance frameworks cannot fix foundational decisions that have been deferred.

3. When a new AI initiative launches next quarter, what will it automatically inherit from what already exists?

This is the real test. In a point-solution world, every new system is built fresh and governance is applied afterward. In a foundational world, every new system inherits shared standards, shared definitions, shared oversight before a single line of code is written. If the honest answer is “it will inherit nothing, and we will govern it after the fact,” then you do not have an AI governance problem. You have an AI foundation problem, and no amount of policy will close the gap.

The uncomfortable truth is that most carriers will answer these questions honestly and discover they are still operating from the old playbook. It is a signal that the work to be done is not more governance, but different governance, the kind that assumes AI is the ground floor, not the top floor.

This article is published as part of the Foundry Expert Contributor Network.

The CIO succession gap nobody admits

I have sat with three CIOs in the last two years who wanted to leave their seat and could not. One was being recruited into a larger enterprise role. One was ready to retire. One had been offered a board seat that required stepping down. In every case, the same thing stopped them. When the CEO asked who could step in, the CIO could not give a credible name. The person they had been calling their number two was technically brilliant and operationally reliable, and every one of them had been groomed into an architect, not a leader. The board would not approve an external hire during an active transformation. So the CIO stayed. One of them is still stuck.

The CIO role has the weakest succession bench in the C-suite, and most CIOs discover it the same way those three did. Not during a quarterly talent review. Not during a board retreat. They discover it the moment they try to leave. By then, the decision is already made for them. This is a leadership design problem CIOs either build into their own orgs or inherit, and one they discover when it is too late to fix quickly.

The architect trap

I have watched the same pattern form in almost every IT organization I have worked in. The people who rise to the top of the CIO’s direct reports are the ones who can hold the most architectural complexity in their heads. They are the ones the CIO trusts with the platform decisions, the vendor consolidations, the integration maps. They earn that trust legitimately. They are excellent at what they do.

But architectural trust is a different currency than leadership trust. When a CIO promotes based on architectural depth, what they get is a deputy who can design the org but cannot run it. I have seen deputies who have never owned a P&L conversation with a CFO. Deputies who have never delivered hard news to a business unit president. Deputies who have never had to defend a budget line item in a room full of people trying to take it from them. They were not hiding from those conversations. The CIO was holding the conversations for them because the CIO was good at those conversations and the deputy was good at the architecture.

The result is a bench that looks deep from inside the IT org and looks empty from the boardroom. I have watched a CEO walk out of a succession conversation saying, “I like your people, but I cannot see any of them in your chair.” That is not a compliment to the CIO. That is a verdict on how the CIO built the team.

Three moves I make before I need them

After watching this happen enough times, I stopped treating succession as something I would address later and started treating it as a design choice I had to make inside my first year. I changed how I build the bench in three ways, and I make each move early enough that the person has time to grow into it or fail out of it.

First, I give them a standing decision domain, not a “next in line” title. A deputy who is told they are being groomed for the CIO seat will manage their career instead of their work. A deputy who is given full authority over, say, all vendor escalations above a defined threshold will start making real decisions in real rooms with real consequences. That is where judgment gets built. The domain has to be something I would otherwise own myself. If I am still approving everything inside it, I am building a forwarder, not a successor.

Second, I put them in rooms where they have to lose something. One of the most damaging things a CIO can do is protect a high-potential deputy from conflict. I used to do this without realizing it. I would pull the hard conversations back to my level because I wanted to spare the deputy the political damage. The deputy came out looking clean and came out completely unprepared. Now I deliberately put deputies into conversations where they have to defend a position against a peer executive who will push back hard. Sometimes they hold the line. Sometimes they fold. Either outcome tells me something I needed to know before anyone was counting on them.

Third, I make the bench visible to the board before I have to. If the board does not know my top two or three deputies by name and track record, I do not have a succession plan. I have private notes. The CIOs I described at the beginning of this article all had deputies they believed in. None of those deputies had ever presented to the board on anything substantive. The board had no reference point. So when the succession question came up, the deputies did not exist in the board’s imagination, and the CIO’s personal endorsement was not enough to create them.

The first time I put a deputy in front of the board, they came back different. The board did not go easy on them. They came back knowing what a board conversation actually feels like, which meant the next one would not be a first impression. The board needs reps with my deputies before the seat is vacant. Once it is vacant, the reps are a job interview and a job interview is not where anyone does their best work.

What the gap actually costs

The cost of a shallow bench is not abstract. I have seen CIOs delay their own career moves by eighteen months or longer because they could not produce a credible successor. I have seen organizations pay two and a half times market to hire externally because the internal candidate did not survive a board interview. I have seen transformations stall because the CIO could not delegate enough to step back and think, because there was no one qualified to hold what they put down.

The cost to the deputies is also real. The architect-track deputy who spends six or seven years being the CIO’s most trusted technical lieutenant, and then gets passed over for the CIO role because the board does not see a leader, rarely recovers that momentum. Some of them leave. Some of them stay and quietly disengage. A few of them become the reason the new CIO’s first ninety days are harder than they should be. None of that is the deputy’s fault. It is the consequence of a design choice the previous CIO made years earlier, usually without knowing they were making it.

CIO.com has published strong guidance on this, including work on “grow your own CIO” strategies that treat succession as a deliberate pipeline rather than an accident of tenure.

The test is simple. If you had to leave in ninety days, could you hand the CEO a name and get a nod? If you cannot picture that nod, you do not have a successor. You have a list of people you like and trust, which is not the same thing. The successor you can actually name is the one you built on purpose, not the one who happened to look ready when the chair emptied. I have learned this by watching peers run out of time to build what they meant to build. I am trying not to be one of them.

This article is published as part of the Foundry Expert Contributor Network.

“Hiring is the attack vector”: AI-abusing fake IT workers spread as an insider threat

The problem of hiring fake IT workers has grown steadily worse in recent years, but few companies are willing to admit it publicly. From Fortune 500 firms to small organizations, remote hiring is being abused to grant trust-based access to people who are not who they claim to be, and that can turn into an insider threat.

By some estimates, thousands of fake IT workers are active across the US, in positions that let them steal information, intellectual property (IP), and data, as well as outsource their work overseas, disrupt systems, and funnel money to foreign governments.

Steve Schmidt, chief security officer (CSO) of Amazon, said the company has “blocked more than 1,800 attempts by North Korea to land IT jobs, and the number keeps growing.”

Some impersonate US employees for personal gain; in other cases, nation-state operations such as North Korea’s pose as IT workers to raise funds and pursue other illicit goals.

AI is now escalating the threat, enabling deepfake generation, more polished video interviews, and rapid identity switching.

Schmidt warned that attack methods are evolving as well, moving beyond fabricated profiles to buying and using the identities of real Americans.

“This is not hiring fraud in the traditional sense,” said Tom Hegel, threat researcher at cybersecurity firm SentinelOne. “It is an insider-risk problem in which getting hired is the attacker’s first step.”

CIOs, CISOs, and other IT leaders must stay vigilant against fake and fraudulent IT workers, yet organizations are often harmed without ever realizing it.

How fake workers get through the hiring process

There is no single point of failure in the hiring process. Fake and fraudulent IT workers conceal their identities, fabricate skills and work histories, and pass interviews and verification steps without raising suspicion.

SentinelOne has tracked roughly 360 fake personas and more than 1,000 job applications tied to North Korea-linked IT worker operations, and says its own hiring pipeline has seen real attempts.

According to Hegel, attackers are applying social engineering and identity-concealment tactics at ever greater scale, with the hiring process serving as their primary entry point.

They build résumés and online profiles on synthetic or stolen identities and pass interviews using scripts, proxy candidates, or AI-generated answers. And because background checks verify only the information submitted, these fabrications pass straight through.

“Fake applicants now use AI tools to imitate genuine candidates,” Hegel said. “They create synthetic identities that can pass initial verification, fabricate work histories, and answer convincingly in interviews with real-time AI assistance.”

An investigation by security firm Flashpoint turned up malware-infected systems storing credentials for HR and recruiting platforms, browser histories containing translated interview coaching notes, “laptop farms” operating corporate devices remotely from overseas, and shell companies set up to vouch for fake work histories.

The real problem begins after the hire. Once hiring is complete, accounts and devices are issued and system access is granted, turning these individuals into trusted insiders. “The long-term risk goes beyond simply hiring a fake employee,” Hegel warned. “It can end with you opening malicious access to corporate systems and sensitive data yourself.”

How to respond to fake IT workers

The moment a CIO suspects a fake IT worker, the problem shifts from a hiring issue to insider-risk management, and the response process that follows becomes critical.

George Gerchow, IANS faculty member and CSO at Bedrock Data, who led investigations and response during his time at MongoDB, shared his experience of a company he worked for discovering only after the fact that it had hired a North Korea-linked fake IT worker, and launching an investigation.

The trouble began with an attempt to remove endpoint security software. “We detected attempts to remove security capabilities, including CrowdStrike Overwatch, and later spotted the laptop communicating with a North Korean IP address,” Gerchow explained.

“Security tool tampering and North Korea-linked traffic appearing at the same time was a clear signal that this was not normal new-hire behavior,” he added.

The investigation revealed that the worker had passed verification by combining a stolen identity with an AI-generated résumé and scripted interview answers. The existing background check could only confirm the information submitted; it could not detect fabrication.

“Many verification systems fail to identify fabricated work histories, synthetic identities, and recycled developer profiles,” Gerchow said. “That is how this person got through hiring and interviews without any warning.”

Subsequent investigation confirmed attempts to disable security tools, establish persistent access on the device, and probe for privilege escalation. “Had it gone undetected, the individual might have reached our FedRAMP environment,” Gerchow said, underscoring the danger.

The warning signs that are easy to miss: fragmented response is the problem

In hindsight, there had been multiple anomalies. Interview video quality was poor and the picture unclear, and the accent shifted from call to call. Interview evaluations were scattered across reviewers, with no mechanism to review them together.

A last-minute change to the laptop shipping address was another key clue. “That is a classic tactic the ‘shadow workers’ favor,” Gerchow said.

The problem was that no single signal was damning enough on its own to stop the hire. “Because no one was responsible for correlating the anomalies, we did not recognize the pattern until the endpoint alert fired,” Gerchow explained.

After discovery: immediate containment and a full investigation

Once the fake worker was confirmed, the team immediately isolated the device, revoked all accounts, ran a forensic investigation, and reported the case to federal authorities. The investigation confirmed that no data exfiltration or lateral movement had occurred.

Follow-up measures included stronger identity verification during hiring, a designated “yellow flag” owner to correlate early warning signs, and restricted access for new hires until trust is established.

“Behavior over identity”: strengthen post-hire monitoring

Gerchow also stressed the importance of behavior-based monitoring after hiring: actual usage patterns, more than credentials, are what expose impostors.

Accordingly, companies should designate reviewers within security or HR to catch inconsistencies in the hiring process, such as degraded interview video quality. AI-generated LinkedIn profiles, résumé discrepancies, and changes to device shipping addresses should also be on the checklist.

Panel interviews and project-based assessments help identify applicants recycling stolen or fake developer identities, and new hires should have limited access to sensitive data and production environments in the early stages.

Organizations should also set alerts for when security agents such as IAM, EDR, or VPN tooling are disabled, and run detection-and-response exercises that simulate the hiring of a fake developer.

“Off-hours access, excessive searching across internal systems, and attempts to clone large volumes of documents and code repositories are also key anomalies to watch closely,” Gerchow emphasized.
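The signals Gerchow lists lend themselves to simple rule-based flags on access logs before any heavier analytics. A minimal illustration; the event fields and thresholds below are hypothetical assumptions, not a reference to any specific SIEM or EDR product:

```python
# Rule-based flags for the insider-risk signals described above:
# off-hours access, broad internal searching, and bulk repo cloning.
# Event field names and thresholds are illustrative assumptions.

from datetime import datetime

def flag_event(event: dict) -> list[str]:
    flags = []
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour < 6 or hour >= 22:                        # outside working hours
        flags.append("off-hours-access")
    if event.get("distinct_systems_searched", 0) > 20:  # sweeping internal search
        flags.append("broad-internal-search")
    if event.get("repos_cloned", 0) > 10:             # bulk clone of code repos
        flags.append("bulk-repo-clone")
    return flags

event = {
    "user": "new-hire-042",
    "timestamp": "2026-02-07T02:13:00",
    "distinct_systems_searched": 35,
    "repos_cloned": 14,
}
print(flag_event(event))
# ['off-hours-access', 'broad-internal-search', 'bulk-repo-clone']
```

The value of flags like these is less in any single rule than in routing them to one owner, the “yellow flag” role described above, so the pattern gets recognized before an endpoint alert forces the issue.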

What IT leaders are seeing on the inside

The hiring-fraud problem is only expected to get worse. Gartner predicts that by 2028, one in four job applicants worldwide will be fake.

David Weisong, CIO of Energy Solutions, said the rise of fake and fraudulent job seekers is “spreading at epidemic levels across organizations.”

According to Weisong, attackers concentrate on high-access technical roles such as DevOps, system administration, data engineering, and database administration, because landing one of those jobs yields deep visibility into, and control over, critical systems.

“These roles effectively hold the keys to the kingdom,” Weisong explained. “If system access is the goal, they are far more valuable targets than an ordinary developer.”

Energy Solutions, which operates in the tightly regulated energy market, is contractually required to hire within the US and keep its data in the US.

Drawing on firsthand experience identifying fake IT workers, Weisong offered a warning to other IT leaders. One of the earliest signs was an abnormal surge in applicants: hundreds of applications arrived within hours, far out of proportion to the company’s profile, suggesting automated or coordinated activity.

At the interview stage, identity swapping also surfaced. “The person who passed the phone screen was not the person who appeared on the video interview, and in some cases yet another person showed up later, all using the same name and résumé,” Weisong said.

One root cause is that traditional hiring processes verify information and skills in isolation. “A traditional background check only confirms the information submitted; it does not identify fraud,” Weisong noted.

It is an uncomfortable truth for some CIOs, but the work these individuals deliver can be of high quality, and detection often comes through anomalies rather than performance.

Fake IT workers, however, create business and regulatory risk alongside the security risk. In regulated industries especially, they can lead to contract breaches, regulatory scrutiny, and lost customer trust.

“Fake IT workers pose serious business and compliance risks beyond security,” Weisong stressed, “and in regulated industries that can mean contract violations, regulatory exposure, and damaged customer trust.”

가짜 IT 인력 대응 전략

아마존(Amazon)은 AI 기반 도구와 인적 검토를 병행해 의심스러운 연락처 정보와 허위 학력, 가짜 기업 이력을 식별하고 있다. 또한 보안팀은 수상한 링크드인 프로필을 표시하고, 대면 면접과 사무실 출근을 강화하며, 컴퓨터 사용 패턴과 업무 품질을 모니터링하고 물리적 토큰 기반 인증을 적용하고 있다.

스티브 슈미트는 포츈 인터뷰를 통해 IT와 HR 부서 간 긴밀한 협력이 문제 해결의 핵심이라고 강조했다. 그는 “문제를 초기에 발견하는 것이 HR 조직 입장에서도 훨씬 비용 효율적”이라고 밝혔다.

센티넬원의 헤겔은 채용에 대한 접근 방식 자체를 바꿔야 한다고 지적했다. 그는 “채용을 단순 인사 절차가 아닌 접근 권한 통제 문제로 봐야 한다”라며 “신원을 한 번 확인하는 체크리스트로 끝내지 말고, 원격 채용을 특권 접근 권한 부여처럼 다뤄야 한다”고 설명했다.

에너지솔루션의 웨이송은 경험을 바탕으로 채용 시스템과 내부 프로세스 전반에 걸쳐 대대적인 변화를 도입했다.

채용 공고 단계부터 기술 직무 지원자가 요구사항과 책임을 명확히 이해하도록 모든 문서에 이를 명시했다. 웨이송은 “특히 ‘완전 원격 근무’라는 표현을 제거한 이후, 사기 시도와 해외 지원이 눈에 띄게 줄었다”고 말했다.

이어 “제로 트러스트 방식이 이상적이긴 하지만 채용 과정 자체를 저해하거나 정상 지원자를 위축시켜서는 안 된다”라며 “자동화된 사기 지원자가 애초에 채용 파이프라인에 들어오지 못하도록 충분한 대응책을 마련해야 한다”고 강조했다.

지원자 급증 문제를 해결하기 위해 에너지솔루션은 채용 공고에 강력한 CAPTCHA를 적용하고, 직원 추천 보너스를 통해 내부 네트워크 기반 채용을 확대했으며, 신규 입사자에게는 90일 성과 검증 기간을 운영하고 있다.

채용 심사 과정에서는 전화 대신 영상 면접을 실시하고, 실시간 과제를 위해 화면 공유를 요구한다. 또한 면접 이후 보고서를 통해 지원자의 실제 위치를 검증하며, 미국 외 지역에서 접속할 경우 ‘옐로/레드 플래그’로 분류한다.
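A location rule like this amounts to a simple classification. A minimal sketch, with illustrative country codes, a hypothetical VPN signal, and flag levels chosen for the example rather than Energy Solutions' actual logic:

```python
# Sketch of a connection-flagging rule for a US-only role.
# The vpn_detected signal and the exact green/yellow/red mapping
# are illustrative assumptions, not the company's real policy.

def classify_connection(country_code: str, vpn_detected: bool) -> str:
    """Return 'green', 'yellow', or 'red' for a candidate's connection."""
    if country_code == "US" and not vpn_detected:
        return "green"
    if country_code == "US" and vpn_detected:
        # US exit point but masked origin: route to manual review.
        return "yellow"
    # Connection from outside the US for a US-only role.
    return "red"

print(classify_connection("US", vpn_detected=False))  # green
print(classify_connection("DE", vpn_detected=False))  # red
```

In practice such a rule would sit behind a geolocation lookup; the point of the sketch is only that the flag becomes a deterministic function of observable connection attributes.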

Applicants must choose the office they will work from, and must agree that using AI during the interview process can disqualify them.

For experience and reference checks, at least two references are required, one of whom must be a former boss or manager. Past work history and previous employers are verified, and submitting a home address is mandatory.

For access control, pre-hire documentation must state whether a new role includes elevated access to sensitive information.

On their first day, new hires must come to the office to receive their equipment and complete onboarding training. All roles start on-site by default, and hybrid work is permitted only once performance has been verified.

"Solving this problem requires re-examining the hiring process, working closely with HR, and continuously checking whether each countermeasure is effective," Weisong emphasized. "The hiring system itself isn't broken; the point is to approach it by building trust in stages."
dl-ciokorea@foundryco.com

OpenAI and Anthropic eye the SI business as enterprise AI competition moves into implementation

Through joint ventures and acquisition talks, OpenAI and Anthropic are expanding into professional services, edging closer to the implementation role long played by systems integrators.

According to a Reuters report on the 5th, joint ventures tied to the two AI companies have been discussing acquisitions of services firms that help enterprises adopt AI, and OpenAI is said to have made significant progress in three of those negotiations.

Both companies are also moving to expand their ranks of engineers and consultants as enterprise customers shift generative AI from experimentation into production.

Anthropic, meanwhile, announced plans to establish a new enterprise AI services company backed by investment from Blackstone, Hellman & Friedman, and Goldman Sachs. The firm aims to help midsize companies apply Claude to core business operations.

Anthropic said its applied AI engineers will work with the new company's engineering teams to identify use cases, build custom systems, and provide long-term customer support.

Behind the services push: the contest for enterprise AI leadership begins in earnest

For CIOs, the heart of this shift is whether AI vendors are gradually taking over roles held by consulting firms, systems integrators (SIs), and managed service providers. The trend shows model companies' determination to claim more control over enterprise AI implementation, while also underscoring that SI firms still matter for large-scale deployments.

The move reflects a problem many CIOs already face: an AI pilot can be launched quickly, but turning it into a secure, reliable production system takes months of integration and process work.

"Enterprise IT deployments have traditionally been consulting- and advisory-led," said Faisal Kawoosa, founder and chief analyst at consulting firm Techarc. "To accelerate the kind of adoption that produces real revenue, you have to align with existing enterprises' frameworks and go-to-market models."

"AI companies currently sit at the top of the value chain," Kawoosa added, "and they intend to stay in the driver's seat rather than be reduced to just another IT supplier."

Deepika Giri, head of research for AI, big data, and analytics at IDC Asia/Pacific, said, "This shift could lead to a restructuring of enterprise AI as a whole. AI model companies are moving beyond the platform-provider role to actively shaping the entire AI value chain." She added, "By extending into implementation, consulting, and managed services, they are positioning themselves closer to customers' actual outcomes rather than simply supplying technology."

Kawoosa pointed to technological uncertainty and the risk of a shrinking role as reasons some IT services firms remain cautious about AI. "Amid the shift in go-to-market strategies, the AI companies are taking the lead," he said.

Lower deployment risk, deeper lock-in concerns

Buying services directly from an AI model company can make the initial build considerably easier.

"Enterprises get tighter integration and access to specialist talent, which can reduce deployment risk in the short term," said Tulika Sheel, senior vice president at Kadence International.

That convenience, however, can turn into a long-term burden.

"Dependence can deepen across the entire stack, from models to data pipelines to workflows," Sheel said. "Over time the lock-in hardens, and swapping vendors without major disruption becomes difficult."

Neil Shah, vice president and partner at Counterpoint Research, said, "AI model companies are tightening the coupling between usage-based business models, applications, and services, positioning themselves as one-stop providers for enterprise customers."

"Controlling the application and service layers directly not only lets them bind enterprises to their ecosystem," he added, "but also gives them firsthand insight into customers' needs, problems, and ways of working, which they can feed back into model optimization."

IDC's Giri argued that lock-in is not inevitable, but avoiding it depends on strategic design early on.

"Modular architectures let you gradually abstract the model layer, but avoiding lock-in requires deliberate design choices," Giri said. "Otherwise you risk becoming dependent not just on a particular model but on the entire stack, including data pipelines, workflows, and governance frameworks."

The trend also shows that enterprise AI still demands a great deal of implementation work.

"Generative AI platforms are powerful, but supporting real business processes requires deep integration with a company's internal data, workflows, and governance systems," Sheel noted. "That means a gap remains between model performance and real-world deployment."

For CIOs, the shift means looking beyond which AI model performs best to who will lead implementation and operations once that model lands in enterprise systems.

The multicloud battle to manage AI agents: Microsoft and Google strike different notes

Microsoft and Google are strengthening their controls over AI agents so that enterprise IT organizations can keep up with tools that access corporate data and carry out tasks across a range of business applications.

On May 1, Microsoft made Agent 365 generally available to enterprise customers. The service helps organizations discover, manage, and secure AI agents, and notably covers agents operating not only in Microsoft environments but also across third-party SaaS, cloud, and on-premises environments.

On the 4th, Google announced an AI control center for Workspace. The feature focuses on providing a central, unified view of AI usage, security settings, data-protection policies, and privacy safeguards.

The timing of these announcements reflects a change in how enterprises use AI. Many companies are no longer stuck at the chatbot-testing stage; they are moving in earnest to deploy agents that access enterprise systems and act on users' behalf.

That change also shapes how CIOs and CISOs view the AI agents inside their companies.

"By placing agent controls alongside identity, access, data, and workload management, vendors are establishing AI governance as an operational domain jointly owned by IT and security," said Biswajeet Mahapatra, principal analyst at Forrester. "For CIOs, that means managing AI agents like any other digital workforce, with lifecycle management, cost visibility, and integration with service-management frameworks."

The CISO's role is expanding as well. Beyond handling model risk and data leakage, organizations need mechanisms to continuously control the behavior of increasingly autonomous agents and to minimize the impact when risks materialize.

"AI governance is emerging as a core component of every AI-powered enterprise application," said Lian Jye Su, chief analyst at Omdia. "As adoption expands beyond pilots to the whole enterprise, governance has to be built in from the AI build phase."

How Microsoft and Google differ

Microsoft's Agent 365 and Google's AI control center tackle similar governance problems, but they start from different places.

"Given that enterprises are adopting AI ever more aggressively across multicloud and hybrid IT environments, the two approaches are complementary," Omdia's Su said. "Each is optimized for AI workloads in its own environment, so companies that have invested heavily in one vendor will find the native AI governance experience far smoother."

Forrester's Mahapatra read the difference as a matter of platform scope rather than governance maturity: Microsoft tends to treat AI agents as enterprise actors that must be managed across the organization, while Google focuses more on how AI operates within collaboration data and user content.

"The two approaches cover different control domains, so it is hard to call them outright competitors," Mahapatra said, "but unless an enterprise standardizes on both ecosystems, it is hard to call them fully complementary either." He added, "Over time, each model will bind ever more tightly to its vendor's productivity and data platforms, raising the risk that AI governance decisions become a function of a particular vendor choice rather than of enterprise architecture strategy."

Pareekh Jain, CEO of Pareekh Consulting, offered a more neutral view. "The two approaches are complementary and competitive at the same time," Jain said. "Especially for enterprises that use both Microsoft and Google, AI governance is likely to become even more closely tied to each vendor's underlying platform."

The risks that remain

The new controls help enterprises get a better grip on their AI agents, but analysts note that they do not resolve the larger risks: shadow AI, third-party integrations, and accountability for autonomous behavior.

Jain pointed out that shadow AI agents can still appear through developer tools, browser extensions, local assistants, SaaS copilots, and unsanctioned tool integrations. Third-party integrations, he added, may also spread faster than security validation can keep pace.

"Audit logs show what happened, but they cannot always explain why an autonomous agent chose to act the way it did," Jain said.

As a result, when an agent takes an action that creates business or security risk, enterprises face hard questions about control and accountability. Better logging, in other words, does not automatically resolve questions of responsibility or control.

Forrester's Mahapatra argued that the biggest gaps are likely to open up outside the native platforms. Shadow agents created through low-code tools, external APIs, and SaaS applications can bypass central controls and operate with excessive or inherited permissions.

"Third-party integrations extend an agent's reach, but visibility into its subsequent behavior and data propagation rarely keeps up," Mahapatra said. "When agents chain across multiple systems, auditability is uneven, making intent hard to distinguish from outcome, and accountability remains murky when an autonomous agent causes real business or security impact."

The experts' shared conclusion is that the native controls Microsoft and Google provide are helpful but cannot fully cover the whole AI agent landscape. Enterprises that combine multicloud, a range of SaaS products, development platforms, and browser-based AI assistants will need a governance framework that extends beyond any single vendor's console.

AI strategy as performance: Are CIOs leading innovation, or just acting it out?

Every few years, CIOs face the same question: "What is our company doing about the technology everyone is talking about?" Today that question is aimed at AI. The pressure is real. The competitive environment is harsh, and it is only natural for boards to demand progress.

The problem is how that pressure gets absorbed. In many organizations, responding to board demands has turned into a kind of performance. Pilots pile up, vendor relationships multiply, and progress reports circulate internally. From the outside, it looks like an organization investing seriously in AI. In reality, how the business operates has barely changed: the infrastructure AI depends on, the redesign of workflows, and the preparation of data remain untouched.

I have seen multiple board-prep decks where the AI slide lists 15 active pilots. Three are described as "promising," one is on hold over data-access issues, and none is tied to a measurable business outcome. This is "AI strategy theater." It satisfies the board's question on the surface without answering it in substance.

When pilots become a portfolio

A pilot exists to answer a single question: is this technology worth scaling for a specific use? Its job is to run for a fixed period, against a defined use case, and end in a binary conclusion. What is happening in many organizations today is something different.

When board pressure is high, the path of least resistance is to start something. Identify a use case, negotiate with a vendor, stand up a proof of concept, and report on it. Visible activity is generated, and next quarter's governance question is satisfied. Meanwhile, the hard work of workflow integration, data infrastructure, and change management keeps getting deferred.

McKinsey's 2025 "State of AI" survey found that 88% of companies use AI in at least one area, but only 32% are in the scaling phase. The gap between experimentation and value creation is wide, and most organizations are stuck inside it. McKinsey attributes this mainly to workflows that have not been redesigned: the AI is there, but the business processes around it have not changed.

Pilots launched in isolation do not connect with one another, and they do not produce the data infrastructure or integration architecture needed to scale AI. What remains is a portfolio that costs money to maintain and a narrative of AI investment with no substance behind it. Vendors, too, have an incentive to keep launching new pilots. A proof of concept delivers impressive results in a constrained environment, and whether it works in production is the customer's problem. Once the contract is won, the vendor's role is done.

Lack of governance erodes the CIO's credibility

The second problem the pressure creates is the spread of ungoverned AI decision-making across the organization. When the board's mandate to push AI reaches the business units, each unit starts acting on its own judgment. Finance signs a contract for a tool that never passed IT architecture review. Operations runs an automation pilot that touches production data. Marketing experiments with customer data that has not cleared compliance review.

This is, in effect, shadow IT expanding at the speed of AI. A major software investment would normally pass through procurement and architecture review, but a tool that can be deployed in an afternoon and show results within days escapes that process. By the time IT grasps the full picture, the business units have already concluded that "AI works" or "AI doesn't work," based on tools that were never designed for the enterprise.

As small failures accumulate, trust is quietly lost. AI investments that produce no results, governance problems that surface after the fact, business units that start avoiding IT: once these patterns pile up, the board will eventually confront the problem head-on. The CIOs who lead the transition from experimentation to value creation themselves, and actively manage that process, are beginning to pull ahead.

What disciplined execution requires

Organizations that have succeeded in moving AI from proof of concept to production share one trait: they made explicit, documented decisions about where to invest, and they held the line even when the pressure to add more pilots was high.

Concretely, that means maintaining a short list of initiatives that meet defined criteria before work begins. The workflow is well understood and owned by a business leader with change-management authority. The data is accessible and in order. Success is defined before rollout, not after. Proposals that fail these criteria do not start. Holding back visible activity while the board demands progress and vendors dangle favorable terms is easier said than done.

Building internal capability matters too. The tools will keep multiplying. The strategic question is whether the organization is cultivating a genuine capability to evaluate, integrate, and govern AI, or settling for vendor dependence. The former compounds into organizational advantage over time. The latter looks like capability from the outside, but underneath it is dependence.

The single metric that measures AI leadership

There is a form of AI leadership that looks good in board presentations and produces almost no operational value. Pilots run, and progress reports circulate. The question "What is this actually doing for the business?" tends to go unasked, because answering it would complicate the narrative.

AI leadership is ultimately measured by one thing: how many pilots survived long enough to change how the business actually operates. Most of what is being built today may never get there.

Beyond prevention: Protecting patient care through cyber recovery

Cyberattacks in healthcare can be operational crises that disrupt care delivery, delay procedures, and put patient safety at risk. As ransomware and data breaches continue to escalate, healthcare leaders are being forced to rethink what resilience actually means in practice.

For years, resilience was defined largely by prevention. But in healthcare environments shaped by legacy systems, complex clinical applications, and strict regulatory requirements, prevention alone is insufficient. Organizations now have to assume disruption will occur. The real measure of resilience is how quickly and safely they can recover.

That change reflects the reality of healthcare data environments, which are uniquely complex. Many organizations are still running legacy applications that support critical workflows, while also managing the fallout from years of mergers and acquisitions that have left behind fragmented systems and inconsistent data architectures. At the same time, many critical applications do not have well-defined recovery objectives, leaving significant gaps when incidents occur. 

In this context, recovery speed and data integrity carry far greater consequences than in most other industries. Delays are more than an inconvenience and can directly impact clinical decision-making and, in extreme cases, patient outcomes.

Restoring systems quickly is essential, but doing it correctly is just as critical. Inaccurate or incomplete data introduces new risks at the exact moment organizations are trying to stabilize operations.

Where healthcare resilience breaks down

While infrastructure or data may be recoverable in theory, executing recovery in a way that maintains compliance and protects sensitive patient data is far more difficult.

The challenges are layered. Limited budgets and staffing make it difficult to build and maintain robust recovery strategies. Data itself is highly complex, spanning structured and unstructured formats across diverse systems. And acquired datasets from mergers often arrive with limited documentation and immature architectures, creating ongoing operational friction. 

These issues are compounded by broader industry pressures. Healthcare providers are expected to modernize infrastructure, adopt cloud technologies, and improve efficiency while operating under tight financial constraints. Legacy systems slow down progress, but replacing them introduces new risks, particularly when recovery processes are not fully aligned across environments. As a result, the traditional separation between backup, security, and compliance is breaking down.

Forward-thinking organizations are moving toward a more integrated model that brings these functions together into a unified recovery strategy.

According to the most recent FBI annual internet crime report, criminals are posing as legitimate health insurers and fraud investigators to commit healthcare fraud. The FBI determined that the sectors most impacted by ransomware are healthcare and public health.

This is where the combined strengths of Cognizant and Rubrik become most compelling. Cognizant brings deep healthcare domain expertise and a proven track record in designing infrastructure strategies that address regulatory, operational, and clinical realities. Rubrik complements this with advanced capabilities in cyber recovery, sensitive data discovery, and ransomware resilience. Together, they enable a fundamental shift — from reactive backup management to a proactive, application-led recovery model — spanning multi-cloud environments and helping healthcare organizations restore critical systems rapidly, while preserving data integrity and maintaining compliance.

Over the next year, healthcare IT leaders must prioritize resilience and treat it as both a cyber and data challenge. That means adopting solutions that support faster implementation, tighter operational control, and measurable ROI.

Also, they must build recovery strategies that can withstand real-world disruption without compromising patient care — because in healthcare, resilience is about maintaining trust and ensuring that care can continue under pressure.

Discover how Cognizant and Rubrik are helping healthcare organizations recover faster, stay compliant, and keep patient care moving forward.

From AI investment to innovation: What it takes to deliver real business impact

As organizations continue to invest heavily in AI, many CIOs are still working to understand how those investments translate into measurable business impact. At the center of that challenge is a shift in how AI is approached, from isolated experimentation to enterprise-wide execution. In this conversation, Jeff Baker, Technology Managed Services Lead at PwC, shares how organizations can move beyond early-stage use cases and begin realizing meaningful outcomes.


Jeff Baker, Technology Managed Services Lead at PwC

CIO.com: Many CIOs are investing in AI but haven’t necessarily seen a return on that investment yet. What does it take to move from investment to actual innovation?

Jeff Baker: A couple of things. I don’t think a lot of our clients are thinking big enough about the impact of AI and some of the possibilities that are out there. One of the things we’re encouraging them to do is move it out of that experimental phase or the back office or cottage industry and really start teaming up with the business directly to find more impactful ways to use the technology that have a business outcome, not just a cool technology showcase.

There are a lot of skunkworks projects out there that look fun but aren’t necessarily hitting the bottom line from an impact standpoint. The more we can team the AI engineers with people inside the business who are asking for the technology, the more you’re going to see meaningful outcomes.

CIO.com: You’ve said that AI requires structural change, not just experimentation. What’s the most important operational shift CIOs should make?

Jeff Baker: I think about AI in two basic categories. There’s what I call citizen-led AI. We’re getting a lot of really cool tools into the hands of people at firms, and they’re doing interesting things with it. They’re organizing their inboxes and creating chat programs that respond to RFPs, and other “day in the life” tasks.

On the other side, there are more durable, agentic-type models that have a lot more business impact but require more investment. That’s where strong teaming between IT and the business is important to define what the outcome should be.

There’s also a lot of sophistication that comes with that. Is it durable? Is it secure? Are you thinking about bias? How are you curating it? Who owns the ongoing management and observability of those agents once they’re deployed?

Security and data management become critical. The agents are only as good as the data they’re based on. In many cases, companies need to clean up their data before these agents can be effective. And finally, this should be collaborative. These agents are not isolated. They’re going to work across the organization with other humans and other agents to help drive outcomes.

CIO.com: You’ve said AI-driven managed services differ from traditional models. How so, and where do CIOs get it wrong?

Jeff Baker: The difference for us, what we call Managed Services 2.0, is that it’s AI-first. It’s focused on business outcomes.

It’s not just about deploying a team to work tickets and hit service levels. It’s about improving business outcomes over time. We’re seeing efficiency gains of about 20% in the first year and up to 50% over five years with clients who allow us to use AI appropriately.

Where it can get tricky is in how these services are purchased. In an RFP process, procurement teams often try to normalize key elements across vendors. But that can flatten the innovation that providers are trying to bring to the table.

CIO.com: Looking ahead 3 to 5 years, what will separate organizations that succeed with AI from those that remain stuck in pilot mode?

Jeff Baker: It comes down to focusing on the business outcomes. What are you trying to achieve with technology, people, and your organization?

And then, in some ways, you have to get out of the way of the agents. They think differently than humans do. I see too many companies trying to treat agentic systems like a traditional business process automation exercise.

Instead, you should focus those agents on outcomes and allow them to operate in the way they’re designed to. That’s where you’re going to see a bigger impact.

To learn more about PwC managed services, click here.

Why modernization is defining the next decade of cloud

Cloud adoption is no longer the differentiator it once was. Over the past decade, enterprises have moved aggressively to the cloud to improve scalability, reduce infrastructure constraints, and accelerate innovation. Today, most organizations operate in hybrid or multicloud environments, and cloud has become the baseline rather than a competitive advantage.

What separates leaders now is not whether they are in the cloud, but how effectively they modernize and operate it.

Many enterprises are discovering that their current environments, while technically in the cloud, still reflect legacy design decisions. Applications may have been lifted and shifted without being re-architected. Data remains fragmented across systems. Operations are often managed through manual processes and disconnected tools. These limitations restrict agility and prevent organizations from fully realizing the value of their cloud investments.

Modernization addresses this gap. It is not simply a technology upgrade, but a shift in how applications, data, and operations are designed to support continuous innovation. Organizations that modernize effectively can improve performance, increase resilience, and create the foundation required for advanced capabilities such as artificial intelligence.

A key driver behind this shift is the growing importance of data. As enterprises invest in AI and analytics, the ability to access, govern, and activate data across environments becomes critical. Without a modern data foundation, AI initiatives struggle to scale and deliver consistent results. Data that is siloed, inconsistent, or difficult to access limits both operational efficiency and decision-making.

At the same time, application modernization is becoming essential. Legacy applications are often not designed to take advantage of cloud-native capabilities such as elasticity, automation, and microservices architectures. Modernizing these applications enables faster development cycles, improved scalability, and better alignment with evolving business needs.

However, modernization is not limited to applications and data. It also requires a transformation in how cloud environments are operated. Many organizations still rely on reactive operating models, where teams respond to issues as they arise. As environments grow more complex, this approach becomes increasingly difficult to sustain.

In fact, as explored in Why Cloud Innovation Slows, many enterprises find that outdated operational approaches create friction, slow delivery, and increase costs, even in modern cloud environments. Moving toward more proactive, automated operations is a critical component of successful modernization.

This evolution is being accelerated by the rise of AI. Organizations are not only building AI capabilities but also embedding intelligence into how systems are managed and optimized. AI-driven operations can help identify inefficiencies, automate routine tasks, and improve overall system performance. As a result, modernization efforts are increasingly tied to broader AI strategies.

The benefits of modernization extend beyond technology. Organizations that modernize effectively are better positioned to respond to market changes, launch new products, and improve customer experiences. They can operate with greater efficiency while maintaining the flexibility needed to adapt to new opportunities.

However, the path to modernization is not always straightforward. It requires careful planning, clear priorities, and alignment across teams. Enterprises must balance the need to maintain existing systems with the need to invest in future capabilities. This often involves making strategic decisions about which applications to re-architect, which to retire, and how to integrate new technologies into existing environments.

Partnerships can play an important role in this process. Organizations benefit from working with providers that bring both technical expertise and operational experience. This helps reduce risk, accelerate timelines, and ensure that modernization efforts are aligned with business outcomes.

For CIOs and technology leaders, the message is clear. The next phase of cloud is not about adoption. It is about evolution. Modernization is the mechanism that enables organizations to move from simply running workloads in the cloud to fully leveraging its capabilities.

As cloud environments continue to grow in complexity, the ability to modernize effectively will determine which organizations can innovate at scale. Those that invest in modern architectures, unified data foundations, and intelligent operations will be better positioned to compete in the years ahead.

Modernization is the foundation for agility, resilience, and intelligence — and the gateway to becoming an AI-ready enterprise. Discover how to modernize your applications, infrastructure, and data in ways that help your organization drive continuous innovation.

Download our e-book: Modernization Without Limits: Building the AI-Ready Enterprise

The inference imperative: Why running AI is harder than building it

Enterprises have made significant progress in building artificial intelligence capabilities. Access to models, tools, and platforms has expanded rapidly, lowering the barrier to entry for experimentation. Yet many organizations are discovering that building AI is only the first step. Running it at scale is where the real challenge begins.

The difficulty is not in creating models, but in operationalizing them.

As AI moves from pilot to production, it must integrate into complex enterprise environments. These environments include fragmented data systems, legacy infrastructure, and distributed workflows that were not designed to support AI-driven execution. What works in a controlled experiment often breaks down under real-world conditions.

Data is one of the most significant constraints. AI systems rely on consistent, high-quality, and context-rich data. In most enterprises, data is spread across multiple platforms and lacks a unified structure. Without a shared understanding of what data represents, models struggle to produce reliable outputs. More importantly, business teams cannot act on those outputs with confidence.

This challenge becomes more pronounced as organizations attempt to scale AI across use cases. Each new deployment introduces additional complexity, from data integration and governance to security and compliance. Without a strong foundation, these factors slow progress and increase operational risk.

Running AI also requires a different operating model. Traditional approaches to cloud and application management are often reactive, relying on manual processes and ticket-driven workflows. These models are not designed to support the continuous monitoring, iteration, and optimization that AI systems require.

Organizations that treat AI as an isolated capability often encounter friction at this stage. Models may perform well in testing, but struggle to deliver consistent value once deployed. This disconnect between development and operations limits the return on AI investments.

In contrast, organizations that succeed with AI focus on how it is run, not just how it is built. They align data, infrastructure, and operations around AI-driven execution. This includes creating unified data environments, embedding governance into workflows, and enabling real-time access to information.

Automation plays a critical role in this transition. Managing AI systems at scale involves monitoring performance, maintaining data quality, and responding to changing conditions. Embedding automation into these processes helps reduce manual effort and improve consistency. Over time, this enables organizations to operate AI systems more efficiently and with greater reliability.
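A minimal sketch of what such embedded automation can look like is a data-quality gate that runs before each batch reaches a model. The field names, checks, and threshold below are illustrative assumptions, not a reference to any specific product:

```python
# Sketch of an automated data-quality gate for a model's input feed.
# All names (check functions, thresholds, fields) are illustrative.

from dataclasses import dataclass

@dataclass
class QualityReport:
    null_rate: float
    schema_ok: bool

def check_batch(rows: list[dict], required: set[str]) -> QualityReport:
    # Schema check: every row must carry all required fields.
    schema_ok = all(required <= row.keys() for row in rows)
    # Null rate across the required fields of the whole batch.
    total = len(rows) * len(required) or 1
    nulls = sum(1 for row in rows for f in required if row.get(f) is None)
    return QualityReport(null_rate=nulls / total, schema_ok=schema_ok)

def gate(report: QualityReport, max_null_rate: float = 0.05) -> bool:
    """Return True if the batch is safe to feed to the model."""
    return report.schema_ok and report.null_rate <= max_null_rate

batch = [
    {"customer_id": "c1", "amount": 10.0},
    {"customer_id": "c2", "amount": None},
]
report = check_batch(batch, required={"customer_id", "amount"})
print(gate(report))  # prints False: null rate 0.25 exceeds 0.05
```

Wired into a scheduler or pipeline trigger, a gate like this turns a manual review step into a consistent, repeatable check, which is the kind of shift from reactive to proactive operation described above.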

The shift toward AI-first operating models is becoming more pronounced. In these environments, intelligence and automation are embedded into how systems are designed and operated. This allows organizations to move from reactive processes to more proactive and predictive operations. As a result, they can reduce operational overhead, improve delivery speed, and better support AI-driven innovation. 

This evolution is also being driven by increasing business expectations. Leadership teams expect AI to deliver measurable outcomes tied to efficiency, speed, and resilience. However, these outcomes depend on the ability to run AI effectively across the enterprise. Without the right operating model, even advanced AI capabilities will struggle to deliver consistent value.

At the same time, AI-native organizations are setting a new benchmark. They can deploy and scale AI more quickly because their environments are built with automation and integration at the core. This allows them to iterate faster and respond more effectively to changing conditions.

For established enterprises, the path forward requires a shift in focus. Building AI capabilities remains important, but it must be matched with investments in data foundations, operating models, and automation. This is what enables AI to move beyond experimentation and deliver real business outcomes.

The takeaway for CIOs and technology leaders is clear: the success of AI initiatives depends less on the models themselves and more on the systems that support them. Organizations that prioritize how AI is run will be better positioned to scale, adapt, and realize the full value of their investments.

Continue building your AI strategy with a practical, execution-focused framework. Check out the AI Action Playbook to learn about the five stages of enterprise AI maturity.

Why a modern data foundation takes more than a new platform

Too many data modernization efforts begin with the platform. The conversation turns to replacing the underlying data environment, moving reporting workloads to the cloud or retiring legacy tooling. Those decisions matter, but in my experience, they are rarely what makes the work hard.

What makes the work hard is everything that has built up around the platform over time.

I have seen this most often in organizations that inherited legacy architecture through acquisition, accumulated technical debt through years of deferred investment or saw reporting logic and master data evolve without enough enterprise discipline. On the surface, the environment may still appear functional. Dashboards are still refreshing. Reports still go out. Teams still find ways to get numbers. But once the business begins to scale, the weaknesses become much harder to hide.

The warning signs usually appear before the platform itself becomes the problem. Different teams start using different numbers for the same KPI, critical reporting logic begins to live outside core systems and analysts spend more time reconciling data than interpreting it. New business units take longer to onboard, reporting changes become harder than they should be and, before long, the issue is no longer just the data platform. It becomes a broader problem of trust, scalability and control.

That is why too many modernization efforts are scoped too narrowly. Replacing the platform is only one part of the challenge. The real work is untangling years of logic, definitions and integration patterns that were never designed to scale together.

The platform is only one layer of the problem

One of the clearest lessons I have learned is that legacy data environments rarely fail in an isolated way. They fail by becoming harder to trust and harder to change.

In many environments, the data platform is carrying far more than data. It is carrying years of workarounds for things that source systems were never able to handle cleanly. Reporting logic ends up split across ETL jobs, SQL transformations, scripts, spreadsheets and side databases. Some of it was built quickly to solve immediate business needs. Some of it was necessary at the time. But over time, those decisions create duplicated logic, hidden dependencies and handoffs that become harder to govern every time the business changes.

The issue is not only technical debt in the traditional sense. It is also reporting debt, where inconsistent definitions and duplicated logic across reports make data harder to trust and maintain. KPI definitions evolve differently across functions. Business logic gets embedded in too many places. Teams build local workarounds to compensate for mismatched source data. The business keeps moving, but the data foundation falls further behind.

That is why I think CIOs need to treat modernization less like a platform replacement and more like an effort to restore architectural separation and control.

In practice, that means separating ingestion, transformation and reporting instead of allowing all three to collapse into the same layer. It means reducing the number of places where business logic can live. It means establishing a clear source of truth for key metrics before they show up in executive dashboards. It also means making sure master data is defined consistently enough that teams are not comparing duplicate records or conflicting definitions and assuming the platform is to blame.
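As a rough illustration of reducing the number of places business logic can live, a single KPI registry that every report calls might look like the following sketch. The KPI names and record fields are hypothetical:

```python
# Sketch: a single source of truth for KPI logic. Every dashboard and
# report computes metrics through compute_kpi() instead of re-deriving
# the logic locally. KPI names and fields are illustrative.

KPI_DEFINITIONS = {
    "net_revenue": lambda rows: sum(r["gross"] - r["refunds"] for r in rows),
    "order_count": lambda rows: len(rows),
}

def compute_kpi(name: str, rows: list[dict]) -> float:
    """Look up the agreed definition; fail loudly for undefined KPIs."""
    try:
        return KPI_DEFINITIONS[name](rows)
    except KeyError:
        raise ValueError(f"KPI '{name}' has no agreed definition") from None

orders = [
    {"gross": 100.0, "refunds": 10.0},
    {"gross": 50.0, "refunds": 0.0},
]
print(compute_kpi("net_revenue", orders))  # prints 140.0
```

The design point is less the code than the constraint it enforces: a metric either has one agreed definition or it raises an error, so conflicting local versions cannot quietly coexist.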

Fit matters more than feature depth

Platform decisions are often misunderstood.

On paper, most modern data platforms are capable. They all promise scale, flexibility and performance. But in practice, the decision is rarely about capability alone. It is about fit.

In recent modernization work, I have seen firsthand that the wrong decision is not always choosing an inferior technology. More often, it is choosing a platform that introduces unnecessary complexity into an environment that is already fragmented.

That complexity shows up quickly in the form of another cloud to manage, another billing model to track, another toolchain to support, another integration layer to maintain, another set of skills to build and another governance surface to control.

Those costs do not always show up clearly in vendor comparisons, but they show up immediately in execution.

That is why I have become more disciplined about asking a different question. Not what is the most powerful platform on paper, but what choice best aligns with the operating model, capabilities and simplification goals of the enterprise.

There is no one-size-fits-all answer. For some organizations, a separate cloud native warehouse may make perfect sense. For others, a more unified platform approach is the better fit because it leverages current skills, preserves momentum and avoids duplicating effort inside an ongoing modernization program.

That distinction matters.

The goal is not to build the most theoretically flexible architecture. It is to build one where the organization can actually govern, extend and operate over time.

Master data is where credibility starts

Modernization does not become credible until master data starts to improve.

That is not a side effort. It is part of the foundation.

In many enterprises, the root problem is not just the reporting layer. It is the fact that core entities such as customers, products, suppliers and locations are still defined differently across systems. When that happens, every downstream discussion about trust, reporting consistency and AI readiness becomes harder than it should be.

One area where this becomes tangible is syndication and deduplication. In most legacy environments, the same customer, product or supplier exists multiple times across systems, often with slight variations in naming, attributes or hierarchy. Over time, teams build local workarounds to compensate, which only reinforces the fragmentation.

Deduplication is not just a technical exercise. It forces alignment to what defines a unique entity. Syndication operationalizes that alignment, ensuring that once data is standardized, it is consistently distributed across systems and downstream processes. Without both, organizations end up maintaining multiple versions of the same truth and the platform becomes harder to trust regardless of how modern it is.
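To make that forcing function concrete, here is a minimal deduplication sketch in Python. Everything in it is an illustrative assumption (the Supplier record, the normalization rule, the "golden key" grouping); real MDM tooling uses far richer matching, with fuzzy name comparison, hierarchies and survivorship rules.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    country: str
    source_system: str

def normalize_key(record: Supplier) -> tuple:
    # Collapse common naming variations (case, punctuation, whitespace)
    # so "ACME Corp." and "acme corp" resolve to the same entity.
    name = record.name.lower().replace(".", "").replace(",", "").strip()
    return (name, record.country.upper())

def deduplicate(records: list[Supplier]) -> dict[tuple, list[Supplier]]:
    # Group source records under one golden key per unique entity;
    # syndication would then push the golden record back out to systems.
    golden: dict[tuple, list[Supplier]] = {}
    for r in records:
        golden.setdefault(normalize_key(r), []).append(r)
    return golden

records = [
    Supplier("ACME Corp.", "us", "ERP"),
    Supplier("acme corp", "US", "CRM"),
    Supplier("Globex", "DE", "ERP"),
]
entities = deduplicate(records)
print(len(entities))  # 2 unique entities from 3 source records
```

The hard part is not the grouping, it is agreeing on what `normalize_key` should be: that is the alignment the article describes.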

That is why I keep coming back to master data discipline. If important reports are not built on agreed business definitions and trusted logic, leaders end up looking at different versions of the same KPI. If customers, products and suppliers are not defined consistently across the business, the platform may look modern while the reporting remains hard to trust.

That is also why phased execution matters. Master data does not have to be fully resolved upfront, but it does need to be mature enough in the right domains to support the first releases and give the organization a foundation it can extend with confidence.

A modern foundation has to be engineered for change

What has worked best in my experience is a disciplined architecture that separates ingestion, transformation and reporting instead of mixing them together in ways that are hard to maintain.

That is where the medallion model becomes practical, giving the organization a structured way to separate raw data, standardized data and business-facing reporting. Bronze is where data first comes in from different systems. Silver is where it gets standardized, so the business is not working from conflicting definitions or duplicate records. Gold is where reporting and KPIs can sit on a more trusted foundation. That separation makes the environment easier to scale, troubleshoot and govern over time. The value is not in terminology, but in the discipline behind it.
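The layering above can be sketched in a few lines. This is a toy, assuming in-memory lists in place of actual lake or warehouse tables; the point is only that standardization (silver) happens once, before any KPI (gold) is computed, so reports never re-implement that logic.

```python
# Bronze: raw ingests, one row per source event, untouched.
bronze = [
    {"cust": "Acme Inc", "amount": "100.0", "source": "ERP"},
    {"cust": "ACME INC", "amount": "50.5", "source": "CRM"},
]

def to_silver(rows):
    # Silver: standardize types and entity keys so downstream layers
    # are not working from conflicting definitions or duplicate records.
    return [
        {"cust": r["cust"].strip().lower(), "amount": float(r["amount"])}
        for r in rows
    ]

def to_gold(rows):
    # Gold: business-facing aggregate, one revenue figure per customer entity.
    kpi = {}
    for r in rows:
        kpi[r["cust"]] = kpi.get(r["cust"], 0.0) + r["amount"]
    return kpi

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'acme inc': 150.5}
```

Because the two source spellings collapse in silver, the gold KPI shows one customer rather than two, which is exactly the trust problem the separation is meant to prevent.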

I have seen organizations modernize into cloud data warehouses, data lakes and lakehouse architectures. The pattern is the same. If the underlying logic, master data and governance are still fragmented, the new platform inherits the same trust problems as the old one.

That same discipline has to carry through to the platform itself. If the environment is going to hold up under growth, the pipelines have to be observable, versioned and resilient enough to support change without constant rework. Environment separation, CI/CD workflows and operational monitoring are not extras. They are part of what makes the platform sustainable.

I also would not lead a modernization effort with AI, even when the pressure is high. AI raises the stakes, but it does not change the core problem. If the data foundation is still fragmented, poorly governed or inconsistent, a new AI layer will not solve it. That is increasingly showing up in the market, with Gartner warning that many generative AI efforts will stall because of poor data quality, inadequate risk controls, escalating costs or unclear business value. Foundry’s latest AI research reinforces this, identifying data storage and management as a top foundational investment for internal AI.

Final thought

The technology will continue to evolve.

The organizations that benefit most will not be the ones chasing every new platform. They will be the ones making disciplined decisions about how those platforms fit into their operating model and executing against them consistently.

Modernization does not fail because the technology is not good enough.

It struggles when the decisions behind it are not grounded in how the business actually runs.

This article is published as part of the Foundry Expert Contributor Network.

Why the future of software is no longer written — it is architected, governed and continuously learned

We are entering a decade where software is no longer just an enabler of business — it is the primary mechanism through which intelligence is created, scaled and monetized across the enterprise.

For CIOs, this is not another technology cycle. This is a leadership inflection point.

Across boardrooms, investor discussions and strategic planning sessions, the conversation is shifting rapidly:

  • From “How fast can we build software?”
  • To “How intelligently can we design, govern and scale decision systems?”

This is a fundamental reframing of the CIO mandate.

The organizations that recognize this shift early will not just move faster — they will compound intelligence faster, creating asymmetric advantage in markets where speed alone is no longer sufficient.

The following perspective must therefore be read not as a technology trend, but as a strategic operating model shift for CIOs entering 2026 and beyond.

The next inflection point: Software development is no longer about code

Over the past two decades, software development has evolved through predictable phases — manual coding, agile acceleration, cloud-native scaling and DevOps automation. But as we enter 2026, that trajectory is no longer linear.

We are now witnessing a structural break.

Generative AI and agentic systems are not simply accelerating development — they are redefining the very nature of software creation, ownership and accountability.

This shift mirrors the broader transformation outlined in the CXO 3.0 paradigm (“CXO 3.0: How intelligent leadership will redefine enterprise value”), where technology leadership has moved from operating systems to architecting enterprise intelligence itself.

In software development, this translates into a fundamental question for boards, CIOs, CTOs, CISOs and chief AI officers (CAIOs): Are we still building software or are we now orchestrating intelligence systems that build themselves?

What makes this transition particularly consequential is that it is already happening quietly but decisively.

Across high-performing organizations:

  • AI-generated code is already contributing meaningfully to production systems
  • Development cycles are compressing from weeks to days — and in some cases, hours
  • Decision-making is increasingly embedded directly into software systems rather than layered on top

Yet, in many enterprises, governance, accountability and operating models have not kept pace.

This gap between capability acceleration and governance maturity is where both the greatest opportunity and the greatest risk now reside.

2 forces reshaping software development in 2026

1. AI across the full software development lifecycle (SDLC)

Generative AI has moved beyond coding assistance into end-to-end lifecycle orchestration, consistent with broader enterprise AI adoption trends where organizations are embedding AI across multiple functions (McKinsey, “The state of AI in 2025: Agents, innovation and transformation”):

  • Planning & Design → AI-driven requirements synthesis, architecture generation
  • Development → Code generation, refactoring, pattern enforcement
  • Testing → Autonomous test case creation and validation
  • Deployment → Intelligent CI/CD pipelines with adaptive optimization
  • Maintenance → Self-healing systems, anomaly detection, auto-remediation

The developer is no longer just a coder. The developer is becoming a curator of intent, constraints and outcomes.

The compression of the SDLC

What historically required:

  • Weeks of design
  • Months of development
  • Iterative testing cycles

Can now be orchestrated through multi-agent AI systems operating in parallel.

This introduces a new dynamic: Software development is no longer a sequential process — it is becoming a continuously adaptive system.

For CIOs, this means:

  • Traditional governance checkpoints may become bottlenecks
  • Legacy approval workflows may inhibit innovation velocity
  • Organizational design must evolve alongside technical capability

2. Intensifying competition in AI coding ecosystems

The competitive landscape is accelerating rapidly, particularly across ecosystems led by:

  • Microsoft (GitHub Copilot, Azure AI)
  • Google (Gemini, Vertex AI, developer tooling)
  • Apple (on-device AI, developer ecosystem integration)

Events like Google I/O and Microsoft Build are no longer just developer conferences—they are strategic battlegrounds for control over the future of software creation.

The stakes are clear:

  • Whoever controls the AI development stack controls the next generation of digital economies
  • Whoever defines the developer experience defines the innovation velocity of entire ecosystems

Platform gravity is becoming strategic gravity

The implication for CIOs is profound.

Choosing a development ecosystem is no longer a tooling decision — it is a strategic alignment decision that determines:

  • Data gravity
  • Talent alignment
  • Innovation velocity
  • Long-term vendor dependency

In effect: Your AI development platform choice is becoming your enterprise’s innovation ceiling.

From SDLC to IDLC: The rise of the Intelligent Development Lifecycle

Traditional SDLC frameworks are becoming obsolete.

In their place, a new paradigm is emerging: The Intelligent Development Lifecycle (IDLC)

This is not simply an evolution — it is a redefinition of how software is conceived, built and governed.

Key characteristics of IDLC:

  • Intent-driven development: Developers define what and why, not just how
  • Agentic execution: AI agents perform multi-step development tasks autonomously
  • Continuous learning loops: Systems improve based on real-time feedback and usage patterns
  • Embedded governance: Compliance, security and auditability are built into execution (NIST AI Risk Management Framework)
  • Decision-centric architecture: The primary output is not code — it is decision capability

IDLC as a leadership operating model

IDLC is not just a development methodology.

It is an enterprise operating model for intelligence creation.

It changes:

  • How teams are structured
  • How accountability is defined
  • How value is measured

For CIOs, adopting IDLC means shifting from:

  • Managing delivery pipelines
  • To governing decision supply chains

The emerging reality: Developers as intelligence orchestrators

As AI agents take over repetitive and even complex coding tasks, the developer role is undergoing a profound transformation.

From:

  • Writing code line by line
  • Debugging manually
  • Managing environments

To:

  • Designing system intent
  • Governing AI agents
  • Ensuring ethical and secure outcomes
  • Orchestrating multi-agent collaboration

This is not a reduction in developer relevance.

It is an elevation of developer responsibility.

Talent transformation is now a CIO priority

This shift introduces a critical challenge:

Most current developer skill models are not aligned to this future state.

CIOs must now proactively invest in:

  • AI-native engineering skills
  • Prompt and intent engineering
  • Model governance literacy
  • Cross-disciplinary collaboration

Because the future developer is not just technical — they are decision designers.

The CXO convergence: Why this is no longer just a CTO conversation

The transformation of software development is not confined to engineering teams.

It now sits at the intersection of four critical leadership domains, reflecting the broader evolution of CIOs into strategic business leaders shaping enterprise outcomes (State of the CIO):

CIO: The intelligence architect

  • Aligns AI-driven development with enterprise strategy
  • Ensures scalability and integration across platforms
  • Drives value realization from software investments

CTO: The innovation orchestrator

  • Defines architecture patterns for AI-native development
  • Leads platform engineering and developer experience
  • Drives competitive differentiation

CISO: The trust enforcer

  • Ensures secure AI-generated code
  • Governs data lineage and model integrity
  • Mitigates risks from autonomous systems

CAIO: The intelligence governor

This convergence reflects a broader reality: Software development is no longer a technical function — it is an enterprise risk, value and governance function.

Introducing a new framework: SAFE-AI DevOps

To navigate this transformation, enterprises require a disciplined, Board-ready approach.

SAFE-AI DevOps Framework (Secure, Adaptive, Federated, Explainable AI Development Operations)

This is a next-generation operating model for AI-driven software development.

1. Secure by Design (S)

  • AI-generated code must meet zero-trust security principles
  • Continuous vulnerability scanning integrated into AI pipelines
  • Secure prompt engineering and model access controls

CISO-led mandate: Trust is the new runtime environment

2. Adaptive Intelligence (A)

  • Systems learn and evolve continuously
  • AI models adapt to changing requirements and environments
  • Feedback loops drive improvement across lifecycle

CIO-led mandate: Learning velocity is the new productivity metric

3. Federated Development (F)

  • Multi-agent collaboration across distributed environments
  • Integration across cloud, edge and on-prem ecosystems

CTO-led mandate: Scale innovation without losing control

4. Explainable Execution (E)

  • Every AI-generated decision must be traceable
  • Audit trails for code generation and deployment

CAIO-led mandate: Explainability is the new compliance baseline

5. AI-Native DevOps (AI)

  • Autonomous CI/CD pipelines
  • Predictive deployment optimization
  • Self-healing systems and automated incident response

Cross-CXO mandate: Automation is no longer optional — it is foundational

The competitive battlefield: Ecosystems, not tools

The next phase of competition is not about individual tools.

It is about ecosystem dominance, as hyper-scalers invest heavily in AI infrastructure, platforms and developer ecosystems (McKinsey Global Tech Agenda 2026).

Key battlegrounds:

  • Developer platforms
  • Model ecosystems
  • Data gravity
  • AI infrastructure

As highlighted in a previous CIO.com perspective, infrastructure itself is becoming a strategic intelligence decision, not just an operational one.

The risk dimension: AI-generated code is not inherently safe

While productivity gains are undeniable, risks are escalating:

  • Hallucinated code vulnerabilities
  • Licensing and IP violations
  • Model bias and ethical concerns
  • Regulatory exposure (EU AI Act, NIST AI RMF)

This creates a new category of risk: AI Development Risk

This requires structured governance aligned with emerging regulatory and risk frameworks such as the NIST AI Risk Management Framework.

Blockchain and quantum: The next convergence layer

As we move beyond 2026, two additional forces will reshape AI-driven development:

Blockchain

  • Immutable audit trails for AI-generated code
  • Smart contracts governing software execution

Quantum Computing

  • Breakthroughs in optimization and cryptography

Together with AI, they form a converging intelligence stack that will redefine software engineering, consistent with broader enterprise transformation trends toward intelligent systems.

Boardroom implications: What investors and directors must understand

The shift to AI-driven development is not just technical — it is financial.

Research shows AI delivers the greatest impact when integrated into enterprise strategy rather than siloed initiatives (BankInfoSecurity: C-Suite Leaders Must Rewire Businesses for True AI Value).

Key board-level questions:

  • How much of our software is AI-generated?
  • What governance exists for AI-generated decisions?
  • How do we ensure security and compliance at scale?
  • What is our dependency on external AI ecosystems?
  • How does this impact enterprise valuation?

Because the reality is: Software is no longer a cost center — it is a capital engine.

The new metrics: Measuring success in AI-driven development

Traditional metrics are insufficient.

Old metrics:

  • Lines of code
  • Development velocity
  • Bug counts

New metrics:

  • Decision throughput
  • AI-assisted productivity ratio
  • Model governance maturity
  • Security incident reduction
  • Time-to-intelligence (TTI)

The leadership mandate for 2026 and beyond

The transformation of software development demands a new leadership mindset.

Three defining mandates for 2026:

  1. Architect intelligence, not just applications
  2. Govern AI as an enterprise asset
  3. Align ecosystems with strategy

The future of software is a leadership decision

As we look ahead to 2026 and beyond, one reality becomes undeniable: The future of software development will not be decided by developers alone.

It will be shaped by:

  • CIOs who architect intelligence
  • CTOs who orchestrate innovation
  • CISOs who enforce trust
  • CAIOs who govern AI responsibly
  • Boards that understand the strategic implications

Because in this new era, code is no longer the product. Intelligence is. And the organizations that learn fastest will not just build better software — they will redefine entire industries.

This article is published as part of the Foundry Expert Contributor Network.

8 tips for becoming a more agile IT leader

Our world is spinning so fast that getting off course from intended outcomes can happen quickly. And it isn’t just technology that’s catalyzing change. The business climate, economic conditions, rules of engagement, and even people’s belief systems and behaviors are rapidly shifting to the point that trying to keep up is like chasing a cheetah on roller skates.

To lead in this climate, you have to hone your ability to pivot, pull the plug, or pounce on a new opportunity with little lead time. You can’t make a decision, install a system, or set a team to work on a project, then move on, as you might have done a few short years ago. You have to be able to change your mind, admit you no longer stand behind a decision or aren’t confident in a particular project, and set a new course toward a better destination.

Having an agile mind and a flexible worldview is vital to IT leadership today. But how do you achieve that?

I spoke to IT experts and leaders who have struggled with and mastered this skill. Here are the agility tricks they employ to stay flexible.

Keep asking questions

“Historically, CIOs come into an organization, assess, then try to add value,” says Sathish Muthukrishnan, chief information, data, and digital officer at Ally Financial. “That could take a year. Then they spend another six months developing strategy. From year three onwards, they might implement strategy. That was the traditional playbook.”

So, the current pace of change is, in itself, an enormous pivot for a role so complex, Muthukrishnan says.

The first step to becoming an agile leader then is to accept that the old playbook won’t work. The second step, he says, is to keep asking questions.

“I ask questions so I can deepen my understanding, orient myself,” he says. “Has the context changed? Has technology changed? Have people changed? If so, why are we doing what we were doing three, four, or five months ago?”

There are some things that have not changed, he says. Learning is the same, though what you learn and the way you apply it is different. And the need for your leadership has only increased.

“The human qualities that set you apart as a leader are becoming even more relevant in an AI-first world,” he says. “It’s no longer, ‘I’m the expert. I know. I’ve done this, I’ve seen this,’ that sets you apart. The thing that sets you apart is having the courage to say I am not tied to my previous beliefs. I’m changing them because of this reason. I’m making a pivot because of these reasons. Courage and conviction go together.”

Trust the navigation — and your teams

“You have to lead with purpose and clarity. That’s important for the organization. But you need a lot of flexibility when it comes to the execution,” says Manny Rivelo, CEO of ConnectWise.

Like a ship on a wild sea, you have a destination in mind. Getting there, though, requires navigating through a lot of tumult.

“You have to be able to respond quickly to change,” Rivelo says. “It can be anything from a market shift, the technology, or internal organizational challenges. You don’t want to lose sight of that long-term strategy, but you may have to pivot along the way. It’s not only about moving with speed, but with flexibility.”

Just like that ship navigating rough seas, you have to get accurate readings and trust your navigators to know how to steer through the chaos.

“How you collect information is important,” he says. “I look at it as a signal-to-noise ratio. What is the signal that’s driving you to go someplace, and what is just noise? How do you remove the noise so you can focus on what the signal is telling you?”

Rivelo believes in facts and data. But you also need to be able to test your own assumptions and, to do that, you have to trust your team, he says.

“You have to build diverse teams that are willing to challenge your thinking,” he advises. “In my experience, you can’t train for that. You have to hire for it.”

Rivelo digs deep when hiring to find people who have a history of being opinionated and, especially, curious.

“Curiosity is one of the greatest gifts you can have as a leader. You need to be curious enough to disrupt yourself and not assume that, because we are doing things a certain way, we have to continue. The best idea should win — wherever it comes from,” he says.

Empty your cup

According to one Zen parable, you have to empty your cup before you can fill it. To learn, you have to accept that you don’t already know.

“For me, being agile means seeing the truth and not making assumptions,” says Dr. Akvile Ignotaite, founder and data scientist at System Akvile. “I go into new projects thinking, ‘Let’s see what we can learn.’ And I learn from the data.”

It sounds simple. But when you have achieved a leadership role, you likely got there because of your expertise. You have become accustomed to people expecting you to know what to do. Letting go of that expert role is, Ignotaite admits, a process.

“I try to keep a very open mind,” she says. “I make assumptions, then measure those assumptions against real user data and behavior. I can’t know everything. The speed we live in is too fast.”

Use the ‘hot-shot rule’

Every day is full of decisions and responsibilities. It’s easy to get caught up in that and keep navigating toward a goal without stopping to check whether you are headed in the right direction. To stay flexible, Ingrid Curtis, CEO at Sparq, likes to test wind direction frequently with what she calls the “hot-shot rule.”

“This is not a concept I created,” she says. It is a mental exercise that helps people to let go of a decision, path, or progress that is no longer serving their purposes.

“Imagine you’ve been fired,” she says. “Who’s the hot shot that’s coming in to take over? What do you think they will do that you aren’t doing?”

The hot shot can be fictional or a real-life leader from the tech or business world.

“There are plenty of big, wild entrepreneurs to choose from,” she says. “They come with this huge persona. And we’ve seen that it has gotten some of them — the WeWork founder, Elizabeth Holmes, and others — in serious trouble. But there is also admiration for this flagrant, ‘I’m willing to do whatever it takes’ kind of leadership.”

It’s surprising, she says, how much this game allows people to disconnect from minutia and look at their job with fresh perspective. It’s fascinating to watch it unlock ideas.

“We all allow ourselves to be hamstrung,” she says. “Yet you imagine someone else would disregard those self-imposed restrictions and be able to get the thing done. Suddenly, with that perspective, you are able to do that, too.”

Rethink your approach to decision-making

“Everyone frames agility as a personality trait — be flexible, stay curious, embrace change,” says Nik Kale, principal engineer at Cisco Systems. “All of that is fine, but personality does not scale.”

Agility, he says, is less about mindset and more about structure.

“Adaptable leaders aren’t the ones with the most flexible temperament,” he says. “They’re the ones who build decision-making systems to absorb change without breaking.”

One big part of this structure, he says, is sorting decisions by weight. Some decisions are reversible. Others are not. Therefore, those two types of decisions should be sorted into different piles. Slow down and ponder irreversible decisions. Decide fast and iterate on those that are reversible.

“Many leaders do the opposite,” he says. “They agonize over things that don’t matter and rush through things that do.”

For reversible decisions, schedule a point where you will stop to reevaluate them.

“I put reassessment dates on the calendar,” he says. That way changing your mind is part of the process. “It won’t hurt anybody’s ego if we planned to reevaluate that decision.”

This structure, he believes, overcomes the risk decision-makers face when they change their mind.

“Admitting you were wrong, in most corporate cultures, is expensive — reputationally, career-wise, politically. People double down on failing strategies because the cost of admitting they were wrong feels higher than the cost of failure,” Kale says. “Courage shouldn’t be a prerequisite for good decisions.”

Factor in the fact that permanence is a thing of the past

According to Ram Palaniappan, CTO at TEKsystems Global Services, when the software you use every day changes almost daily, clinging to the idea that anything you decide today won’t change tomorrow is holding on to a world that no longer exists.

This is especially true when working with AI, he says. When you make a decision about something repeatable, and offload the work to AI, verifying the results is essential because an AI will amplify mistakes. This also helps you learn to trust the AI.

This kind of mental agility, he says — making decisions that you are willing to unmake if the output doesn’t match expectations — requires people to stay alert and keep learning. That goes not just for leaders but for entire teams, he adds.

“We ask our teams to spend a percentage of time upskilling,” he says. “We set goals. We provide a learning path. Then we allow them to apply what they learn in a lab facility.”

The idea is, he says, to learn to let go of the way it was.

“Tech companies change their products, sometimes daily,” he says. “We all have to be able to move like that.”

Let go of the idea that anything you decide is permanent. Decide quickly. Then check how that decision is doing.

Exercise your emotional muscles

According to Sarah Noll Wilson, founder of The Noll Wilson Group and author of Don’t Feed the Elephants, many technical leaders believe that emotion has nothing to do with their decisions. But that can make you blind to the power emotion has over them.

“When you build your emotional skillset, it gives you access to a higher level of self-awareness and intellectual humility,” she says.

Curiosity is one emotional skill. “Instead of making you fear discovering a bad decision, curiosity can make it fun to wonder — with interest and even excitement — where you might be wrong,” she explains.

Another emotional skill is to let go of the idea that it is your expertise that’s needed.

“Some problems are technical,” she says. “Those are clear and typically solved with expertise. But some are adaptive challenges. In that case, the problem might not be clear and solving it requires learning, not expertise.”

Fear is another emotion that drives resistance to change. People don’t fear change, they fear loss, she says. “Ask yourself, ‘What am I losing?’ or ‘What am I afraid I’m going to lose?’”

One of the practices her team uses to increase emotional self-awareness, she says, is a courageous audit. This is a process where leaders examine what they want to be — an agile leader, for example — and interrogate behaviors that conflict with that goal.

“A question you can ask is, ‘What do I do or not do that’s in conflict with being an agile leader?’” she says. “Do I protect my ideas or my team’s ideas? Do I dismiss ideas from people who aren’t in my field or ‘in’ group? Who gets to submit ideas? Who doesn’t?”

These exercises are designed to raise your awareness of the emotional reactions that affect your decisions and to help you develop the ability to be comfortable with uncertainty.

Change how you measure and build

According to Shahrzad Rafati, founder and CEO of RHEI, keeping a plastic viewpoint requires you to fundamentally change how you build technology and measure success.

“When you spend two years building an enterprise tool, your ego becomes tied to its deployment. You lose agility because you are financially and emotionally invested in the solution, rather than the problem,” she says.

“Instead of measuring success with metrics like uptime or deployment milestones, measure workforce elevation. When your metric is ‘Did it elevate human output and strategic thinking?’ you won’t hesitate to kill a failing project.”

The second step, she says, is to find a way to experiment quickly and cheaply. “We no longer live in a world where prototyping costs millions of dollars. You can ‘vibe code’ an idea, stand up a specialized agent, and test its capabilities almost instantly.”

“Use this to your advantage,” she says, “by lowering the stakes of your experiments. If testing a hypothesis costs nothing, your willingness to abandon a bad idea and admit you were wrong goes up exponentially.”

When AI writes code, it joins the software supply chain

AI tools designed to assist developers are no longer staying in the background. They are starting to shape what actually gets built and deployed.

They open pull requests.

They modify dependencies.

They generate infrastructure templates.

They interact directly with repositories and CI/CD pipelines.

At some point, this stops being assistance.

It becomes participation.

And participation changes the problem.

When assistance becomes participation

The shift from generative to agentic behavior is the inflection point.

Earlier tools operated inside a tight loop. A developer prompted. The system suggested. The developer reviewed. Nothing moved without human intent.

That boundary is eroding.

Newer systems propose changes, update libraries, remediate vulnerabilities and interact with development pipelines with limited human intervention. They don’t just accelerate developers. They begin to shape the artifacts that move through the software supply chain — code, dependencies, configurations and infrastructure definitions.

That makes them something different.

Not tools.

Participants.

And once something participates in the supply chain, it inherits the same question every other participant does:

How is it governed?

A simple scenario

Consider a common pattern already emerging in many environments.

An AI system identifies a vulnerable dependency.

It opens a pull request updating the library.

A workflow triggers automated tests.

The change is promoted into a staging environment.

Four steps.

No human review.

No explicit governance checkpoint.

Each step is individually valid. Nothing looks wrong in isolation.

But taken together, they create something fundamentally different: A system that can change enterprise software without human intent being re-established at any point. Research from Black Duck found that while 95% of organizations now use AI in their development process, only 24% properly evaluate AI-generated code for security and quality risks.

This is autonomous change propagation across the software supply chain.
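The four-step flow above can be expressed as a minimal detection check. This is a hedged sketch, not any real CI/CD API: the event names and the `human_approvals` field are illustrative assumptions.

```python
# Hypothetical sketch: flag a change that reached staging with no human
# approval anywhere in its chain of events. Field names are illustrative,
# not taken from any real CI/CD system.

from dataclasses import dataclass

@dataclass
class ChangeEvent:
    step: str             # e.g. "open_pr", "run_tests", "promote_staging"
    actor_is_human: bool  # who triggered this step
    human_approvals: int  # explicit human sign-offs attached to the step

def is_autonomous_propagation(events: list[ChangeEvent]) -> bool:
    """True when a change was promoted without human intent being
    re-established at any point in the chain."""
    reached_staging = any(e.step == "promote_staging" for e in events)
    human_touch = any(e.actor_is_human or e.human_approvals > 0 for e in events)
    return reached_staging and not human_touch

# The four steps from the scenario: each is individually valid,
# but no human appears anywhere in the chain.
scenario = [
    ChangeEvent("identify_vulnerability", False, 0),
    ChangeEvent("open_pr", False, 0),
    ChangeEvent("run_tests", False, 0),
    ChangeEvent("promote_staging", False, 0),
]
print(is_autonomous_propagation(scenario))  # True: autonomous propagation
```

The point of the sketch is that the property lives in the chain, not in any single step: every event passes its own local check, and only the aggregate view reveals that intent was never re-established.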

The “human-in-the-loop” fallacy

Many organizations rely on a “human-in-the-loop” (HITL) requirement as a safety mechanism for AI-generated code.

At low volumes, this works.

At scale, it breaks.

When an AI system generates dozens of pull requests in a short window, review becomes a throughput problem, not a control. The cognitive load of validating machine-generated logic exceeds what a human can realistically govern.

What remains is not oversight, but a checkpoint.

And checkpoints without effective review are not controls.
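The throughput problem is easy to make concrete with back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not measurements from any organization.

```python
# Back-of-the-envelope sketch of why human-in-the-loop breaks at scale.
# Every number here is an illustrative assumption.

ai_prs_per_day = 40          # PRs an AI system opens in one day
review_minutes_per_pr = 30   # careful review of machine-generated logic
reviewer_hours_per_day = 2   # time one engineer can realistically spend reviewing

demand_hours = ai_prs_per_day * review_minutes_per_pr / 60
reviewers_needed = demand_hours / reviewer_hours_per_day

print(demand_hours)      # 20.0 reviewer-hours of demand per day
print(reviewers_needed)  # 10.0 dedicated reviewers just to keep up
```

Under these assumptions, a single AI system consumes the review capacity of ten engineers every day; at that point approval is a rubber stamp, not oversight.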

The governance gap

Most governance models assume a stable truth: Humans are the primary actors.

Controls tie identity to individuals, approvals to intent and audit trails to accountability.

Even automation systems are treated as extensions of human intent — predictable, bounded and deterministic.

AI systems break that model.

They can generate new logic, act on it and propagate changes across systems. Yet in most environments, they are still governed as if they were static tools.

That mismatch is the gap.

Machine identity is no longer what it was

One way to see this clearly is through identity.

Every interaction an AI system has — repository access, pipeline execution, API calls — requires credentials. In practice, these systems operate as machine identities.

But they are not traditional machine identities.

A service account executes predefined logic. Its behavior is known in advance. Its risk is bounded by what it was configured to do.

An AI-driven system is different. It generates the logic it then executes.

It can propose new code paths, interact with new systems and trigger actions that were not explicitly predefined at the time access was granted.

That is a category change.

Not just a new identity type, but a new attack surface: Identities that can generate the behavior they are authorized to execute.

The World Economic Forum has identified this class of non-human identity as one of the fastest-growing and least-governed security risks in enterprise AI adoption.

Measuring exposure before solving it

Most organizations already track access-related metrics. Those metrics were designed for human-driven systems.

They are no longer sufficient.

If AI systems are participating in the software supply chain, organizations need to measure where and how that participation introduces risk.

A few signals matter immediately:

  • AI-generated artifact footprint: What portion of code, dependencies or infrastructure definitions in production originates from AI-assisted processes?

  • Authority scope of AI systems: What systems can these identities access — and what actions can they take across repositories and pipelines?

  • Autonomous change rate: How often are changes introduced and propagated without explicit human review?

  • Cross-system interaction surface: How many systems does a single AI workflow touch as part of normal operation?

  • Auditability of AI-driven actions: Can changes be traced cleanly to a system, workflow and triggering context?

These are not abstract concerns. They are measurable.

And until they are measured, they are not governed.
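Two of the signals above can be sketched as simple ratios over pull-request metadata. The record shape (`author_is_ai`, `human_reviews`) is a hypothetical assumption, not a real repository API.

```python
# Hypothetical sketch: computing two exposure signals from PR metadata.
# The record fields are assumptions for illustration only.

def ai_artifact_footprint(prs: list[dict]) -> float:
    """Portion of merged changes that originate from AI-assisted processes."""
    merged = [p for p in prs if p["merged"]]
    if not merged:
        return 0.0
    return sum(p["author_is_ai"] for p in merged) / len(merged)

def autonomous_change_rate(prs: list[dict]) -> float:
    """Portion of AI-authored merges that landed with zero human reviews."""
    ai_merged = [p for p in prs if p["merged"] and p["author_is_ai"]]
    if not ai_merged:
        return 0.0
    return sum(p["human_reviews"] == 0 for p in ai_merged) / len(ai_merged)

prs = [
    {"merged": True,  "author_is_ai": True,  "human_reviews": 0},
    {"merged": True,  "author_is_ai": True,  "human_reviews": 1},
    {"merged": True,  "author_is_ai": False, "human_reviews": 2},
    {"merged": False, "author_is_ai": True,  "human_reviews": 0},
]
print(ai_artifact_footprint(prs))   # 2 of 3 merged PRs are AI-authored
print(autonomous_change_rate(prs))  # 1 of 2 AI merges had no human review
```

The harder work in practice is attribution, deciding reliably which changes count as AI-authored, but once that labeling exists, the metrics themselves are trivial to track over time.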

The regulatory imperative

This is not just a technical shift. It is a governance and liability shift.

As regulatory expectations evolve — from AI accountability frameworks to cybersecurity disclosure requirements — organizations are increasingly responsible for explaining and controlling automated decisions inside their environments.

If an AI-driven change introduces a vulnerability or leads to a material incident, “the system generated it” will not be an acceptable answer.

Accountability will still sit with the enterprise.

That raises the bar: Governance must extend to how autonomous systems act, not just how they are accessed.

The architecture gap

Diagram: AI systems operate horizontally across systems, while governance remains vertical (illustration: Puneet Bhatnagar).

The issue is not that any one control is missing.

It is that AI systems operate across the seams of systems designed to govern within their own boundaries.

Repositories enforce code controls.

Pipelines enforce deployment controls.

Identity systems enforce access controls.

Security tools enforce policy checks.

Each works as designed.

But AI systems move across all of them.

They read from one system, generate changes, trigger another and influence a third. Authority is exercised across systems, while governance remains within them.

That is the architectural gap.

A different governance model

Most organizations will respond to this shift by trying to extend existing access controls. That instinct is understandable — and insufficient.

The problem is no longer just who or what can access a system. It is how control is maintained when authority can generate new actions dynamically.

This requires a different model of governance.

One that treats software systems as actors whose behavior must be bounded, observed and continuously evaluated across workflows — not just permitted or denied at a point of access. Governance becomes less about static permissions and more about controlling the shape and impact of actions across systems.

That is the shift.
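A per-action governance model could look something like the sketch below: the AI identity already holds credentials, yet each proposed action is still evaluated against bounds on its shape and blast radius. All names and thresholds here are illustrative assumptions, not a real policy engine.

```python
# Hypothetical sketch of governance evaluated per action rather than per
# access grant. All field names and thresholds are illustrative assumptions.

BOUNDS = {
    "max_systems_touched": 2,   # cross-system interaction surface
    "require_human_above": 50,  # lines changed before review is forced
}

def evaluate_action(action: dict) -> str:
    """Return 'allow', 'require_human', or 'deny' for one proposed action."""
    if action["systems_touched"] > BOUNDS["max_systems_touched"]:
        return "deny"  # too wide a blast radius for autonomous execution
    if action["lines_changed"] > BOUNDS["require_human_above"]:
        return "require_human"  # re-establish human intent for large changes
    return "allow"

print(evaluate_action({"systems_touched": 1, "lines_changed": 10}))   # allow
print(evaluate_action({"systems_touched": 1, "lines_changed": 200}))  # require_human
print(evaluate_action({"systems_touched": 4, "lines_changed": 10}))   # deny
```

The design point is that the decision is made per action, continuously, against observed behavior, rather than once at the moment access was granted.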

Conclusion

The conversation around AI in software development often focuses on productivity.

But as AI systems begin to participate in producing and modifying enterprise software, the more important question becomes governance.

AI is not just accelerating the software development lifecycle. It is becoming part of the software supply chain itself.

And that changes the problem.

The challenge for CIOs is no longer just managing developers, tools or pipelines. It is understanding and governing the authority that software systems exercise across them.

Because in a world where software can act on behalf of the enterprise, governance is no longer just about access.

It is about authority — what systems are allowed to do, and how that authority is controlled and measured over time.

This article is published as part of the Foundry Expert Contributor Network.

Coherence: Where leadership and AI success intersect

In an era where AI is accelerating faster than most organizations can absorb, many IT leaders are grappling with how to move quickly without creating fragmentation. For Leigh-Ann Russell, BNY’s CIO and global head of engineering, the answer comes down to a single word: coherence.

For Russell, coherence isn’t a slogan. It’s a leadership discipline that connects strategy to execution, technology to trust, and ambition to sustainability. In a recent episode of the Tech Whisperers podcast, she discussed why coherence is so integral to the modern CIO playbook and how to leverage it to scale impact instead of chaos.

Russell’s perspective is shaped by a career defined by range and resilience, from her formative years in Scotland to leading complex, high-stakes transformations across industries and geographies. After the podcast, we spent more time exploring how that journey has informed her leadership operating system, how she translates coherence into enterprise-scale AI execution, and why IT leaders must learn to navigate the intersection of innovation, control, and reusability. What follows is that conversation, edited for length and clarity.

Dan Roberts: Are there experiences from your formative years growing up in Scotland that shaped how you lead today?

Leigh-Ann Russell: I credit my father as the biggest inspiration in my life. He worked seven days a week and had two jobs, yet he was always present. When I look back, I don’t know how that was possible, for him to have a 7-day-a-week job and still take me to the park and to ballet lessons, or even afford ballet lessons. He was this embodiment of a combination of work ethic and family, and that’s how I’ve led.

There was also a lot about my life growing up that I hid. I hid the fact that I was a single parent. I hid the fact that I grew up in a council estate, similar to public housing in the US. I hid a number of things about me. It wasn’t until I got to be a bit more mature in life that I realized these were not things to hide. These are the stepping stones that make me the person I am.

My father taught me to do more with less and the power of work ethic. My daughter taught me ninja productivity skills. These were not things to be uncomfortable about; they are things to celebrate. The virtuous feedback from that, as a leader, is that if you share what makes you human, it becomes much easier to connect with other leaders and other people on your team. And then they feel comfortable sharing what makes up their own human foundation.

When the environment is complex, fast-moving, and high stakes, like it is today with AI, what are the core operating principles you return to as a leader, when you’re tired, under pressure, or being watched?

My core operating model centers on two things: talent and clarity. My philosophy is, my job is to find great talent and help them be the best version of themselves. If you achieve that, then real, magical things happen, because it’s people who create magic, not technology.

The second part of my role is about creating clarity. Life is complex, leadership is complex, and what teams need is simplicity. It’s about trying to simplify the problem, understand the trade-offs, and align people. Take AI as an example: The technology can create enormous value while also creating friction at scale if we’re not redesigning the work thoughtfully around it. That’s why it feels like the next leadership challenge — it’s not just about deploying AI well but designing the system around it with clarity and consistency in mind.

During the podcast, we talked about how you were very intentional in choosing coherence as your word of the year. Can you give some examples of coherence applied as a leadership discipline?

Coherence is a hard thing to build and a fast thing to lose. It starts with humans. It’s something Robin Vince, our CEO, does really well in bringing our leadership team together. He talks publicly about the fact that we all have a coach, and we have the same coaches from the same company and come together around that. He’s very intentional about creating coherence as a leadership structure.

As you go vertically down to the different meanings of coherence, it also applies equally to technology. It’s very easy to chase the shiny thing, and there’s a very tenuous conflict between people being empowered to do great innovation but also doing it in a structured way so that you avoid lack of controls or duplication or issues with architecture and costs firing out of control.

That double meaning of coherence and being very tight on the balance between individual innovation and empowerment and leadership becomes critical.

Can you talk about what your AI strategy looks like across the enterprise?

We set up the AI hub in 2023, and when I came to BNY in 2024, we started thinking about adoption and enablement across the enterprise, guided by our mantra: AI for everyone, for everywhere, for everything. In 2025, we set the goal of having 65% of the bank trained on AI, but we hit 100% as early as June, and we’ve had to rewrite the training program twice since then to enable our employees to continue deepening their proficiency.

That enablement was important and we’ve had an amazing uptake, with over 220 AI solutions now in production accessible in Eliza, our enterprise AI platform. Eliza is built on the premise of foundational, reusable capabilities and is designed to enhance client service and company operations and drive cultural transformation through the power of AI. We talk more about Eliza and how we are advancing responsible and ethical AI in financial services on our BNY website.

Over half of the bank’s employees have built their own agents, and we also have digital employees at the bank — 140 autonomous agentic employees who work alongside our human employees and have direct human managers who monitor what logic is applied to decision-making. This is truly agentic.

So, 2025 was about widespread adoption and literacy, and now we’re moving from AI adoption to AI at the core. Even in the most complicated use cases that we’ve put into production, I still think they’re somewhat at the edges — anomaly detection, pulling together client briefing documents, or looking at contract reviews. Very advanced use cases compared to most enterprises, but we are pivoting in 2026 to having AI at the core of everything that we do at the bank. That’s truly transformational, and it’s the next step in our journey.

You have 140 digital employees, an idea that wasn’t even on the radar at the beginning of 2025. How did your organization move and adapt to scale that up so quickly?

It goes back to the philosophy that leadership is all about having talent and enabling them. We have an amazing team in engineering — and across the bank, because this is not just an engineering piece. If you look at our first digital employee, it was in the payment space, looking at reconciliations, and that was born out of a collaboration between engineering and operations. Our first human manager of a digital employee was in the operations side of the business.

Reimagining how work gets done can’t just be an engineering issue. It has to be in partnership with the business. Those individuals in the businesses and in engineering who can think back to first principles about how work gets done are leaping ahead on their AI journeys because they’re not just thinking about adding in AI as an afterthought; they’re thinking about redesigning their workflows with AI at the core.

A great recent example of that is in our onboarding process. We now have a multi-agentic model that has taken the research part of the process down from double-digit hours to single-digit minutes. This partnership and ability for our people to reimagine how we work with AI now at the core is foundational.

The strategy you’ve laid out really is a journey, and it seems there are some key foundational steps that many organizations are trying to skip, which is creating all sorts of problems for them.

In the podcast, we talked about the flip side of coherence being chaos. If you have chaos, AI will just amplify that chaos. That’s at a stack level or a leadership level. As the pressure is on companies to go out and adopt, this is where Eliza, our platform, has been truly instrumental to us because everything AI-related at the company is centralized in Eliza. It’s our tech stack; it’s our governance framework. Having one place for AI so you’re not chasing multiple tools and multiple companies, and having that very clear AI strategy embedded in a single platform, has been really differentiating for us.

In truth, there has been no single silver bullet in our AI journey. We have a tech-enthusiastic CEO in Robin Vince, who realized very early on that AI would be transformative for our company and has been determined that BNY remain at the forefront. With his leadership from the top, we invested early in our people to cultivate the AI-literate workforce we have today. So the fact that it’s a CEO-led strategy, plus the platform, plus the enablement has really helped us get to the speed of having 220 AI solutions in production supporting the enterprise.

Considering the strategic priority you’ve placed on AI adoption, how do you balance innovation and control?

Our goal is to enable innovation across the firm, which speaks to this mindset that being “AI first” is more important than just control. Obviously, we need control from a risk, compliance, legal, resilience, and cyber point of view, but from a financial and leadership perspective, we’re not trying to control and damp down and make everyone justify every single use case. As a result, we’ve said no to a lot less than I think most other companies would have done.

When I think about the trade-offs of that, it’s understanding “what is innovation?” and making sure there’s a reusable core. Because people have in their mind, “If I just hire my own engineers, and I have my own architecture, and I have my own software, I’m really innovative.” People tie innovation to having the new shiny thing, and it’s very hard to shift to a mindset that understands sometimes innovation is reusing what other teams have already developed and building on that. So, sometimes the more painful conversations involve trying to help people reground in reusability, common architectures, and common data platforms, understanding that isn’t hampering their innovation, it’s providing a solid foundation they can build on to go faster.

That intersectionality of innovation and control and reusability is the difficult thing we have to get right, because if you have too little control, you have the chaos we spoke about, and we know that AI amplifies chaos. If you have too much control and too much centralization, then you do hamper innovation. It’s something I’m very conscious of, that we don’t want to do either.

Great leadership often comes down to managing those tensions. What is the central tension you’re learning to navigate right now and how is it shaping the leader you’re becoming?

It goes back to a quote I mentioned in the podcast about Atticus Finch: How do you maintain your convictions without being rigid? Because one thing I know for sure, there are no right answers right now. Does one discipline shift left and one discipline shift right? What is the role of engineering in the future? There is absolutely no right answer to that. There’s only a set of choices that’s right for your institution, and what might be right for a marketing company will not be right for a regulated bank.

I use my digital twin to make sure I’m not too over-convicted, because I have that tendency, from my past growing up in the oil field, and it’s a slightly Scottish tendency. So that’s the thing I’m really pushing myself on: How do I retain my conviction but not become rigid, and stay really open to what is coming at us? Because the change is like nothing we’ve ever seen, in a very positive way as an engineer, but we have to remain convicted, not rigid.

In a world of “double VUCA,” where the impact of external volatility, uncertainty, complexity, and ambiguity is being compounded by fragmented AI journeys, a lack of clear AI strategy, and ineffective leadership internally, Leigh-Ann Russell shows us why coherence is an essential strategic discipline. In the age of AI, it can spell the difference between leaders who scale impact and those who simply scale chaos. For more insights from Russell’s leadership playbook, tune in to the Tech Whisperers.



CIOs warn that the talent shortage is holding back AI in the enterprise

A shortage of expertise has stalled AI initiatives in many organizations, as limited knowledge of the technology has constrained professionals’ ability to realize AI’s potential.

According to CIO.com’s State of the CIO 2026 survey, a lack of internal talent was the top challenge IT teams faced when implementing AI strategies over the past 12 months, cited by 40% of respondents.

Ha Hoang, CIO of cyber resilience provider Commvault, argues that the shortage is especially acute in roles at the intersection of AI and cybersecurity. In her view, cybersecurity companies need people who can understand data and operations and translate risk insights into business decisions.

What’s more, she believes vendors like Commvault also need engineers and analysts who know how to secure AI models, safeguard training data, and detect AI-related threats such as prompt injection and model poisoning.

Hoang is clear that, “as AI-driven automation transforms IT and security operations, CIOs and CISOs will need professionals who can interpret, tune, and manage AI systems, not just monitor alerts.” That is why she believes “we will need fewer siloed specialists and more AI-fluent generalists who can evolve at the same pace as the technology.”

Deep expertise is required

Part of the problem is the scarcity of people who understand AI’s potential and can predict where AI technologies are headed, adds Anand Srinivasan, chief strategy officer at o9 Solutions, a provider of an AI-powered enterprise planning platform.

In his view, “the challenge is not simply a shortage of AI experts, but a deeper structural gap between how companies are organized and what modern AI enables.” Moreover, he believes “most large organizations still operate through hierarchical, siloed decision-making models designed for stability and scale, not for speed and adaptability.”

The most critical knowledge gap lies not only in building AI systems but also in rethinking how decisions are made and executed across the enterprise, according to Srinivasan. He adds that AI can drive major gains in agility and adaptability, but only if decision-making capabilities let organizations turn strategy into action faster and with less risk.

Srinivasan cites ice hockey legend Wayne Gretzky to illustrate the problem: “Skate to where the puck is going, not where it has been.” The AI puck is moving very fast, he notes, and AI expertise is a constantly moving target.

In his opinion, “traditional machine learning skills are being quickly displaced by the needs of generative AI, agentic AI, and AI governance.” He adds: “Workers with AI skills now command significant salary premiums over peers in the same roles who lack those capabilities.”

Beyond the challenges posed by a fast-evolving technology, there is a problem of shallow AI knowledge, according to AJ Sunder, CIO and chief product officer at Responsive, a provider of strategic response management software. He suggests that plenty of available candidates have some AI knowledge, but many lack a deeper understanding of how to implement it to meet business needs.

“There is certainly a shortage of people who can build reliable, secure, and scalable AI systems for production environments,” he says. “This abundance of AI-aware talent, combined with the scarcity of people who can translate that into working AI applications, creates a huge problem when it comes to filtering out the noise.”

He readily admits that finding workers with that level of expertise has been a challenge for Responsive, though the company has been fortunate to find some outside talent.

And it doesn’t end there: “The kind of AI problems we solve requires experience handling content at scale, with all the complexities of messy enterprise data. There aren’t many people with enough experience to solve the kind of problems we tackle at the scale we do,” he adds.

Hands-on training

Responsive has prioritized internal training to develop expertise in-house, with internal teams driving the training initiatives, Sunder explains. The AI-focused company had a head start, having concentrated on the technology before the current wave.

He adds: “We’ve been fortunate to have talented people who quickly recognized the pace of AI and the value of hands-on learning, experimentation, trial and error, and unlearning in order to acquire new knowledge. That allowed all of us, collectively, to learn, share, and teach one another.”

Sunder notes that the company also builds teams by pairing AI specialists with business domain experts rather than placing them in isolated groups. Indeed, Responsive has also invested aggressively in AI tools that let a broader group of engineers contribute to AI-driven features without deep machine learning training.

So, as he puts it, “not everyone needs to be an AI expert from the start.”

He even questions the need for more external AI training programs, arguing that there may already be too many.

“Some structured training is needed to bring most, if not all, team members up to a baseline level of knowledge, and that already exists. Beyond that, unstructured learning, hands-on exercises, and building useful solutions that go beyond hello-world tutorials are far more effective than any long-form training program. That’s mainly because of how fast things evolve,” he concludes.

Commvault also focuses on internal training methods and reskilling current employees, Hoang acknowledges. The company is even exploring partnerships with universities and cybersecurity bootcamps.

That’s why she says “the hardest skills to find are those that combine security fundamentals with AI model governance or automation tooling. Many professionals master one side of the equation, but not both.”

She also believes companies need to be flexible in how they think about AI expertise.

Finally, Hoang offers this reflection: “Many organizations still rely on rigid job descriptions that overweight years of experience or specific certifications, while candidates have transferable skills but lack the exact title or hands-on experience with particular tools. Forward-thinking CIOs are rethinking the hiring process, prioritizing capability and a learning mindset over narrow experience.”
