Your AI agent is ready to go… Is your infrastructure?

April 30, 2026, 06:43

IDC estimates that by the end of last year more than 28 million AI agents had been deployed, and predicts that by 2029 there will be more than 1 billion active agents executing 217 billion actions a day.

It's easy to build a proof of concept (POC) of an AI agent, says Venkat Achanta, chief technology, data, and analytics officer at TransUnion, a global credit reporting company with about $4.6 billion in revenue. But managing, securing, and scaling one is a real challenge, especially for companies in heavily regulated sectors such as financial services and healthcare. To tackle the problem, TransUnion has spent the past three years building its agentic AI platform, OneTru. The goal was to create something as reliable and deterministic as the old expert-designed, script-based systems, yet as flexible as generative AI and as easy to interact with as a chatbot.

The trick, though, was to combine the best of both worlds: using traditional systems for core processes, where explainability and reliability are key, and bringing in generative AI capabilities in a limited way for the tasks it was especially suited to. And because the necessary infrastructure didn't exist, TransUnion built its own, committing $145 million to the project. It was a big bet on unproven technology, but it has already delivered $200 million in savings. Better still, once the platform was built, TransUnion used it to create customer-facing solutions.

In March of this year, for example, TransUnion launched its AI Analytics Orchestrator Agent, built on the OneTru platform and powered by Google's Gemini models. TransUnion already uses the agent internally to improve analytics, and customers can also use it to run sophisticated data analyses without needing data scientists.

Many customers use TransUnion's data but not its other solutions or platforms, Achanta says. The new orchestration agent has the potential to help customers get more out of the data and to open new revenue streams for the company. And more agents are in development, he says. The key to making them work is the orchestration, governance, and security layers. Getting an agent to do something is easy for anyone, he says, and can take just a few days. The company can also build agents quickly. "But I have the foundation and the guardrails, and the agent that sits on my platform uses all of them. That's what gives us power," he says.

The secret to getting AI agents to behave is to separate the layers of the task and assign each layer to a different system, each operating under its own set of constraints. This approach limits the damage any one agent can do, creates a system of checks and balances, and restricts the riskiest activities to earlier-generation AI technology.

At TransUnion, for example, core decision-making is handled by an updated version of an expert system. It runs under a set of well-defined, auditable rules and operates predictably, cost-effectively, and with low latency. When it encounters a situation it hasn't seen before, an LLM is used to analyze the problem; a different agent might then turn it into a new rule, and a human can be brought in to review the results before the new rule is added to the expert system. Different agents understand the semantic layer, interact with humans, and perform other tasks. "With the neural reasoning layer, the LLM, we bring humans into the process. When it's a symbolic reasoning layer, based on logic and machine learning, we let it run automatically," he explains.
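The layered flow described above can be sketched in a few lines: a deterministic rule engine handles known cases, unseen cases are escalated to a neural layer for analysis, and a proposed rule only enters the rule set after human approval. This is purely illustrative; all the names, the rule format, and the `llm_analyze` stub are invented, not TransUnion's actual implementation.

```python
def llm_analyze(case):
    # Stand-in for a call to a neural reasoning layer (an LLM),
    # which proposes a candidate rule for the unseen situation.
    return {"proposed_rule": ("country", case["country"], "review")}

class DecisionRouter:
    def __init__(self, rules):
        self.rules = rules              # auditable, deterministic rules
        self.pending_review = []        # human-in-the-loop queue

    def decide(self, case):
        # Symbolic layer first: predictable, cheap, low latency.
        for field, value, outcome in self.rules:
            if case.get(field) == value:
                return {"outcome": outcome, "layer": "symbolic"}
        # Unseen situation: escalate to the neural layer and queue
        # the proposal for a human reviewer.
        proposal = llm_analyze(case)
        self.pending_review.append(proposal)
        return {"outcome": "escalated", "layer": "neural"}

    def approve(self, proposal):
        # Only a human-reviewed proposal becomes a deterministic rule.
        self.rules.append(proposal["proposed_rule"])
```

The design choice this sketches is that the LLM never acts directly: its only output is a proposal, and only the audited rule set ever produces decisions.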

So when each agent operates within tight constraints, with only the limited data it needs for that specific task and strict limits on what it can do, the whole system becomes far more manageable and reliable. It's like the difference between an assembly line, where workers each perform a single, distinct task, and a workshop where one artisan does everything. The assembly line can work faster and more reliably, yet many companies today deploy their AI agents as if they were artisans. The latter approach can produce creative, one-of-a-kind products, but it isn't always what a business needs.

Nicholas Mattei, chair of the ACM special interest group on AI and a professor at Tulane University, suggests companies focus on adding extra security at the points where the different parts of the agentic system connect. "You have to make sure there's security at the seams," he says. For example, if an agent sends requests to an email service, set up a checkpoint between the two. "The gaps between the unreliable agents and where the traditional software lives are where the security controls have to sit," he says.
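The "security at the seams" idea Mattei describes can be made concrete with a checkpoint that sits between an untrusted agent and a traditional service, here an email gateway. This is a minimal sketch under invented assumptions (an allowlist of recipient domains and a per-run send cap), not any particular product's API.

```python
class EmailCheckpoint:
    """Policy gate between an agent and an email service."""

    def __init__(self, allowed_domains, max_per_run=5):
        self.allowed_domains = set(allowed_domains)
        self.max_per_run = max_per_run
        self.sent = 0

    def authorize(self, request):
        # Check the recipient domain against the allowlist.
        domain = request["to"].rsplit("@", 1)[-1]
        if domain not in self.allowed_domains:
            return False, "recipient domain not allowlisted"
        # Cap how many sends one agent run can trigger.
        if self.sent >= self.max_per_run:
            return False, "rate cap reached"
        self.sent += 1
        return True, "ok"
```

The point of putting the check at the seam, rather than inside the agent, is that it holds even if the agent is confused or compromised.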

Building a security foundation for agentic AI

In a Jitterbit survey of 1,500 IT leaders published in March, AI accountability (security, auditability, traceability, and guardrails) was the most important factor in final AI purchasing decisions, ahead of speed of implementation, vendor reputation, and even total cost of ownership. Security, governance, and data privacy risks were also the top issues keeping AI initiatives from reaching production, ahead of costs and integration challenges. And respondents are right to be worried.

Earlier this year, researchers at cybersecurity firm CodeWall managed to breach McKinsey's new AI platform, Lilli. Using an in-house AI tool, the researchers said they were able to access 47 million chat messages, 728,000 files, 384,000 AI assistants, 94,000 workspaces, 217,000 agent messages, nearly 4 million RAG document fragments, and 95 system prompts and AI model configurations. "This is decades of McKinsey's proprietary research, frameworks, and methodologies, the company's intellectual crown jewels, stored in a database anyone could access," the researchers wrote.

The cause? Of the more than 200 publicly exposed API endpoints, 22 required no authentication. It took the researchers just two hours to gain full read and write access to Lilli's entire production database. McKinsey responded quickly to the alert, fixed the unauthenticated endpoints, and took additional security measures. "Our investigation, supported by a leading external forensics firm, identified no evidence that this researcher or any other unauthorized third party accessed confidential client data or information," the company said in a statement.

IDC says the incident highlights how dangerous a breach of an AI system can be for a company. "Most companies still think about AI risk in yesterday's terms: data leakage, bad outputs, and damage to brand reputation," says Alessandro Perilli, research VP for AI at IDC. "Those are serious problems, but the bigger risk lies in delegating authority to AI systems."

By gaining access to an agentic AI platform, an attacker can not only see things they shouldn't, but also covertly change how the company acts. And securing enterprise-scale agentic AI systems like Lilli is only half the challenge. According to Gartner, 69% of organizations suspect their employees use banned AI tools, and 40% will suffer security or compliance incidents by 2030 as a result.

But the detection tools available aren't quite ready to find AI agents, Gartner says. "If I asked you how many agents are running in your company right now, where would you go to look?" asks Swaminathan Chandrasekaran, global head of AI and data labs at KPMG, which now has several thousand AI agents in production. "Have they all been onboarded and given identities? Have they gone through a proper authentication process, and who's in charge of them? That infrastructure doesn't exist."

Still, tools are starting to emerge, or companies are building do-it-yourself solutions, he says. "That's what's going to give CIOs peace of mind." We're already seeing public examples of individual employees deploying powerful agentic AI with bad consequences. Summer Yue, alignment lead at Meta, recently decided to use OpenClaw, a viral open-source agentic AI tool, to help manage her inbox. After it worked on a test inbox, she deployed it for real.

"Nothing humbles you like telling your OpenClaw to confirm before acting and watching it speed-delete your inbox," she wrote on X. "I couldn't stop it from my phone. I had to run to my Mac mini like I was defusing a bomb." In the past, an employee might upload confidential information to a chatbot, or ask it to draft a report they would then copy, paste, and pass off as their own. As these chatbots evolve into full agentic systems, the agents now have the ability to do anything the user has privileges for, including accessing corporate systems.

To manage this new security risk, companies will have to move from role- and identity-based controls to intent-based ones, says Rakesh Malhotra, who leads digital and emerging technologies at EY. It's not enough to ask whether an agent has permission to access a system and change a record, he says. Companies need to be able to ask why that change is being made. That's a big challenge right now. "Observability technology doesn't capture the intent of why the agent did something," he says. "And that's really important to understand. Trust is based on intent, and there's no way for any of these systems to capture intent."

If a human employee tried to refactor the entire codebase, they'd be asked to give a good reason for doing so. "And if you're refactoring for no specific reason, maybe you shouldn't be," Malhotra says. "With people, there are ways to judge this. I don't know how to do it with agents."
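One way to read the intent-based control Malhotra describes is as a policy gate: every action an agent takes must carry a declared intent, which is checked against policy and recorded for audit, so "why" is captured alongside "what." The action names, the policy table, and the log format below are all invented for illustration.

```python
# Policy: which (action, intent) pairs are permitted. Invented example.
ALLOWED = {("update_record", "correct_customer_address")}

audit_log = []  # every attempt is recorded, permitted or not

def execute(action, intent, payload):
    """Run an agent action only if its declared intent is allowed."""
    permitted = (action, intent) in ALLOWED
    audit_log.append({"action": action, "intent": intent,
                      "permitted": permitted})
    if not permitted:
        raise PermissionError(
            f"{action!r} not allowed for intent {intent!r}")
    return {"action": action, "payload": payload}
```

The audit trail then answers the question observability tools currently can't: not just that a record changed, but the stated reason it changed.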

Building a semantic data foundation for agentic AI

TransUnion's Achanta repeatedly mentions the semantic foundation of the company's OneTru platform. That understanding of information helps systems grasp not just what the data is, but what it means and how it relates to other data. Gartner says developing a semantic layer is now a must for companies deploying AI. "It's the only way to improve accuracy, manage costs, substantially reduce AI debt, align multi-agent systems, and stop costly inconsistencies before they spread," the firm says.

By 2030, universal semantic layers will be considered critical infrastructure, alongside data platforms and cybersecurity, Gartner predicts. And agents need context to do anything meaningful with data, says KPMG's Chandrasekaran. That's where a company's knowledge lives. "That's your new intellectual property for the enterprise. Context is the new moat."

For John Arsneault, CIO at Goulston & Storrs, building a solid data foundation is also a way to avoid vendor lock-in. "If you buy products and move your data into them to automate workflows or build work assistants for agents, it'll be very hard to get out. But if you take a data-centric approach, at least you can move from one to another if the market shifts."

The law firm has migrated its client-facing work product to NetDocuments, a document management system focused specifically on the legal sector. And the rest of the data the firm collects is stored in Entegrata's legal data lakehouse.

"Our goal is that, over time, all our other applications point to that data lake. Then we'll have these two environments where all the firm's data lives, which will let us integrate whatever AI tools we use," he says.

It will also make data flows easier to manage, he adds, and let the firm adapt quickly to whatever AI technology emerges next. "Whether it's generative AI, agentic AI, or Anthropic with the Cowork legal add-on, it's very hard to keep up. And it changes every six months."

Agent orchestration

The last piece of the agent infrastructure puzzle, after putting guardrails in place and building a usable data layer, is orchestration. Agentic AI systems require agents to communicate with one another and with human users, and to interact with data sources and tools. It's a tricky challenge, and the technology is still very early, though it's advancing fast. MCP is one example, and a key piece of the orchestration puzzle. AI vendors have been remarkably willing to cooperate here.

"When social media emerged, and Facebook and Twitter were debating a standard protocol for interacting, nobody wanted to adopt their competitors' protocol," says Agustín Huerta, SVP of digital innovation and VP of technology at Globant, a digital transformation company. "Now everyone is adopting MCP and maturing it as a standard protocol."

But that doesn't mean agent integration is solved. According to a Docker survey of more than 800 IT decision-makers and developers, the operational complexity of orchestrating multiple components is the biggest challenge in building agents.

Specifically, 37% of respondents say orchestration frameworks are too fragile or immature for production use, and 30% point to gaps in testing and visibility across complex orchestrations.

And although 85% of teams are familiar with MCP, most say significant security, configuration, and manageability problems are keeping it out of production. There are other integration problems companies have to deal with, too.

"One still-unsolved problem is how to get a proper control panel to manage all these agents, to know exactly what's going on with each of them," Huerta says. "There's a dashboard for monitoring agents built with OpenAI and another for those living in Salesforce, but none of them can surface the telemetry in one centralized panel for control, auditing, and logging."

For companies just starting to deploy agents, or sticking to a single platform, that's not a problem yet, he adds, but as they tap a wider network of agents, they'll start to feel these challenges. Globant itself is building its own internal dashboard for agentic AI, for example.

And at Brownstein Hyatt Farber Schreck, a 50-year-old law firm with about 700 employees and clients across the US, AI is being deployed in several areas, including a proposal-generation system.

Normally, several people can spend days reviewing a client's request for proposal, going through handwritten notes or meeting transcripts, and gathering other relevant materials, says Andrew Johnson, the firm's CIO. "We can feed all that information into a computer and pull out the key criteria to produce a quality first draft in minutes," he says.

Multiple agents are required for different parts of the process: one to extract success criteria or staffing requirements, another to look up precedents and lessons learned, and others for pricing and brand standards. "Each of those agents is autonomous and has to coordinate so each one's outputs feed into the next step," Johnson explains. For the most part, that means a RAG system, since most of the legacy platforms the firm uses haven't yet added an MCP layer.
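The coordination pattern Johnson describes, where each agent's output feeds the next step, is essentially a sequential pipeline over a shared context. The sketch below uses plain functions as stand-ins for the RAG-backed agents; the field names and the toy pricing rule are invented, not the firm's actual system.

```python
def extract_criteria(rfp):
    # Agent 1: pull the key criteria out of the RFP materials.
    return {"criteria": rfp["requirements"], **rfp}

def find_precedents(ctx):
    # Agent 2: look up precedents for each extracted criterion.
    return {**ctx, "precedents": [f"matter-for-{c}" for c in ctx["criteria"]]}

def apply_pricing(ctx):
    # Agent 3: attach pricing (a toy rule: flat rate per criterion).
    return {**ctx, "price": 1000 * len(ctx["criteria"])}

def run_pipeline(rfp, steps):
    ctx = rfp
    for step in steps:
        ctx = step(ctx)   # each step's output is the next step's input
    return ctx
```

A real orchestrator would add retries, validation between steps, and routing, but the core contract is the same: agents stay autonomous while the pipeline owns the hand-offs.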

Depending on the task, individual agents may run on different models, another layer of coordination to manage. Then there's cost control. If an AI agent or a group of agents gets into an infinite feedback loop, inference costs can spiral quickly. "We're aware of the concern, although we haven't seen it materialize yet," Johnson says. "That's why we have a monitoring system in place. If we exceed the thresholds, we react."
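A minimal version of the threshold monitoring Johnson alludes to is a cost guard that meters inference spend per run and halts the agent loop once a budget is crossed. The budget and per-call cost figures here are made up for illustration.

```python
class CostGuard:
    """Tracks inference spend and halts work past a budget."""

    def __init__(self, budget_usd):
        self.budget_usd = budget_usd
        self.spent = 0.0

    def charge(self, cost_usd):
        self.spent += cost_usd
        if self.spent > self.budget_usd:
            raise RuntimeError("inference budget exceeded; halting agents")

def agent_loop(guard, cost_per_call, max_calls=1000):
    # A runaway loop stand-in: every iteration is a metered model call.
    calls = 0
    try:
        for _ in range(max_calls):
            guard.charge(cost_per_call)
            calls += 1
    except RuntimeError:
        pass  # the guard stopped the loop before costs spiraled
    return calls
```

With a $1.00 budget and $0.25 calls, the loop is cut off after four calls instead of running to its thousand-call limit.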

Whatever the strategies or measures for absorbing the setbacks, everything about AI is changing faster than anything companies have seen. "I've been in tech for 25 years and I've never seen anything like it," says EY's Malhotra. "The fastest-growing companies in history have all been created in the last three or four years. The growth in adoption is unprecedented. And I constantly talk to clients who are deploying technologies that were highly relevant nine or ten months ago, and everyone has moved on."


Ways CIOs can prove to boards that AI projects will deliver

April 22, 2026, 07:00

There’s been a wake-up call for CIOs. All the talk about perceived productivity boosts that have previously dominated conversations about AI has been replaced with a demand for measurable value from investments in emerging tech.

With MIT reporting project failure rates as high as 95%, executive boards are starting to question when AI will pay dividends. PwC's Global CEO Survey shows that more than half of companies have seen neither higher revenues nor lower costs from AI, and only one in eight have achieved positive outcomes.

While Gartner predicts significant growth in AI spending this year, John-David Lovelock, distinguished VP analyst at the research firm, says the lack of tangible returns means digital leaders are changing tack. Rather than hoping their AI explorations will produce returns, CIOs are switching to more targeted initiatives.

“The projects growing quickly are the ones doing business, and those initiatives include AI,” he says. “CIOs are starting to de-emphasize AI and re-emphasize business. These projects are about AI enhancing existing work and moving away from moonshot transformational projects.”

Lenovo’s CIO Playbook for 2026, produced with tech analyst IDC, also suggests enterprises will get serious about AI deployments this year, with explorations replaced by production-level services that drive business transformation. With boards exerting pressure for measurable returns, Ewa Zborowska, research director at IDC, says more digital leaders want to use AI to enhance, innovate, and reinvent their organizations.

“CIOs aren’t just considering AI out of curiosity, they want to see what they can get out of it to grow the business,” she says. “AI adoption is much more about doing new things or taking a fresh approach to creating value rather than becoming more efficient at cost-cutting.”

Such is the clamor for value that Richard Corbridge, CIO at property specialist Segro, suggests that returns from AI are a main digital leadership priority: “If you discover, for example, that everyone in the organization used Copilot 10 times today, that might mean they’ve been more efficient,” he says. “But what have they actually done with the time they saved? How has saving time created value?”

CIOs will grapple with these questions during the next 12 months. With CEOs and boards becoming impatient for returns, digital leaders are working more with their bosses to define value. Successful CIOs fine-tune their arguments to ensure their projects are backed, and then demonstrate the value of their AI initiatives to the board.

Defining a valuable AI project

What’s clear is CIOs can’t deliver outputs from AI projects without input from their enterprise peers. IDC’s Zborowska says tighter cooperation across project ownership and KPIs ensures emerging technology investments are targeted at the right places.

This increased interaction between digital and business leaders also changes project aims. As stakeholders work closely together to generate value from AI, Zborowska expects executives to seek KPIs that stretch across operational concerns.

“I’d bet we see more non-financial aims over the next few years,” she says. “Executives will consider things such as are employees more engaged, has their work improved in any way, are AI implementations impacting customer experiences, and are internal decisions being made more efficiently.”

Martin Hardy, cyber portfolio and architecture director at the UK’s Royal Mail, agrees that defining valuable AI projects is all about finding the right focus. Effective deployments target processes in distinct areas, and business stakeholders must be part of the value-defining process.

“If we’re making decisions about legal documentation, AI is probably not there yet,” he says. “But if we can use AI to approve holidays, for instance, that might be something because if you have rules that say no more than two people off at a time, you could use AI to check about booking holidays without having to ask everyone in the office.”
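Hardy's holiday example is a good illustration of an AI-suitable rule because it reduces to a simple, checkable constraint. A toy version of that check, with an invented data model (a map from day to the set of employees already off), might look like this:

```python
def can_approve(requested_days, booked, max_off=2):
    """Approve a holiday request only if, on every requested day,
    fewer than max_off people are already booked off."""
    return all(len(booked.get(day, set())) < max_off
               for day in requested_days)
```

In practice the AI's job would be the messy part around this rule, reading the request out of an email or chat message, while the approval itself stays deterministic and auditable.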

For CIOs seeking value-generating use cases, Gartner’s Lovelock suggests AI can deliver results in key business areas such as boosting revenue, supporting decision-making, engaging staff, and improving experiences. He says the right path to AI exploitation correlates with Gartner’s enterprise technology adoption profiles, which group companies into a range of categories.

“The folks who are furthest forward, what we call the agile leaders in technology, are much more likely to drive AI to change the business,” he says. “The laggards on the other side are more likely to take on the technology that’s given to them by incumbent software providers, and use it in a prescriptive manner.”

Fine-tuning the use case

The challenge now is for digital leaders to work with their business peers to determine a more refined approach to AI deployment. For some CIOs, the value of AI is clear but the potential risks must be considered.

Take Dan Keyworth, executive director of performance technology and systems at McLaren Racing, whose focus is operational stability and race-day reliability. While he says being aware of developments in generative and agentic AI is crucial, the priority is tried-and-tested technologies rather than innovations that put performance at risk.

“Formula One is grounded in traditional machine learning and simulation,” he says. “Developing models has been a big part of our performance journey, and since the engine already existed, gen AI is the turbo that’s bolted on with more investment in AI.”

For other digital leaders, like Barry Panayi, group chief data officer at insurance firm Howden, success depends on keeping the human in the loop. Yes, automation can improve customer service, but rather than replacing staff, he wants to use AI to ensure Howden’s professionals have the right insight when they interact with clients.

“There’s absolutely no desire to use data to drive productivity by automating what we do with our customers,” he says. “This is a business where people speak to people. Our brokers need information that can give them an edge, and prove to their clients they understand the risks and can give them the best deals.”

Nick Pearson, CIO at technology specialist Ricoh Europe, adds that the use case for AI at his firm is two-fold: boosting operational productivity and improving customer processes. So he’s established a tri-party AI council with the head of service operations and the commercial manager in Spain. This council explores opportunities to buy, build, and reuse emerging tech.

“We’ve got a strategy that looks at where AI matters, which means exploring the technology we already have to boost internal productivity,” he says. “We’ve got a lot of people who know how to code and build things in Copilot Studio and other platforms, so let’s use that to increase productivity.”

Showing returns to the board

For Gartner’s Lovelock, the key lesson for CIOs eager to generate value from AI is to work with their peers and set desired outcomes before investing. “Most people start with the idea that more is more, and if you do that, you won’t get to the idea of quality,” he says.

That sentiment resonates with Segro’s Corbridge, who encourages digital leaders to start conversations with other professionals by focusing on value. Ask people how investing in an AI implementation will create value for them personally, for the wider business, and the customers the organization serves.

He says CIOs shouldn’t try to prove that AI works, but rather concentrate on how emerging tech adds value. That definition is so critical to Segro’s way of working that the organization uses the phrase proof of value rather than proof of concept.

“Most things work, but they might be more expensive,” he says. “For example, you might be able to use AI to transform how the organization uses spreadsheets, but that project might cost you $300,000. And if you’re currently paying someone $40,000 to do that work, and they’re happy doing it, then you have to question the value.”

Lessons are being learned, says IDC’s Zborowska, whose firm’s research suggests that half of AI POCs now transition into production. While some people might think this success rate isn’t impressive, the figure a year ago was 10%. After several years of AI exploration, it appears CIOs and their businesses are now firmly focused on real returns.

“These numbers speak to the fact that companies are being more mature and mindful in how they allocate budgets,” she says. “They also support the main theme that we’re on a journey to transformation and a maturing market for AI adoption.”

Beyond the ‘25 reasons projects fail’: Why algorithmic, continuous scenario planning addresses the root causes

April 21, 2026, 06:00

A widely shared Template22 graphic on why projects fail prompted this article. I am using that chart as a prompt, not as evidence. The more useful question is not whether the familiar causes of failure are real. They are. The more useful question is why they keep repeating across programs, portfolios and enterprise transformations, even after years of investment in methods, PMOs, digital tools and AI.

The answer, in many cases, is not a lack of effort. It is a lack of decision logic. Enterprises still launch, govern and defend large initiatives without a planning discipline capable of calculating trade-offs, exposing constraints, modeling dependencies and recalculating the impact of change quickly enough to support real governance.

The pattern under the pattern

Most discussions of project failure start with visible symptoms: unclear scope, weak requirements, scope creep, poor communication, resource shortages, unrealistic deadlines, weak sponsorship, and poor change control. Those symptoms matter, but when they recur at scale, they usually point to a deeper problem in the planning system itself. In PMI’s 2025 research on the strategy execution gap, PMI President and CEO Pierre Le Manh argued that AI will create value only when organizations can translate bold ideas into executed initiatives. In most enterprises, the gap is not ambition. The gap is conversion. Strategy is declared, portfolios are funded, work begins, yet leaders still cannot calculate trade-offs, expose constraints, model dependencies or replan fast enough when conditions change.

The scale of the issue is hard to dismiss. BCG’s 2024 study of large-scale technology programs found that more than two-thirds are not expected to be delivered on time, within budget and within scope, and that only 30% fully meet expectations on those three dimensions. Gartner’s 2024 survey found that only 48% of digital initiatives across the enterprise meet or exceed their business outcome targets. Those are not isolated execution misses. They are signs of systemic underperformance in how organizations prioritize, fund, sequence and govern change.

Other firms sharpen the diagnosis from different directions. McKinsey’s work on successful transformations found that among companies whose transformations failed to engage line managers and frontline employees, only 3% reported success. Bain’s David Michels argues that “red is good,” meaning organizations perform better when risk is surfaced early rather than hidden behind reassuring dashboards. Deloitte’s research on digital acceleration and strategy makes the strategic requirement explicit: Digital possibilities must shape strategy, and strategy must shape digital priorities. Put together, those findings point to one conclusion. Large programs rarely fail because a single team misses a task. They fail because the enterprise cannot see the interaction of priorities, constraints, dependencies and consequences early enough to respond intelligently.

Why this is a planning problem, not just a delivery problem

At the portfolio level, failure begins when organizations select too much work, fund the wrong work or fund the right work without a realistic view of capacity, technical debt and delivery interdependencies. BCG ties poor outcomes directly to inaccurate timeline and resource planning, weak end-to-end roadmaps and ineffective management of interdependencies. That is not simply a delivery problem. It is a portfolio design problem. Forrester’s 2025 work on operating model change adds a related warning: Fewer than half of IT leaders say their organizations prioritize operating model adaptation, leaving strategy to collide with structures that are not built to absorb change.

At the governance level, failure shows up as a value problem. Traditional oversight mechanisms can collect status, enforce templates and schedule reviews, yet still fail to answer the executive question that matters most: What happens if a key dependency slips, a budget is reduced or a shared team becomes overcommitted? Bain’s “red is good” matters here because watermelon reporting, green on the outside and red underneath, is usually a sign that governance is reporting milestones instead of modeling consequences. Gartner’s survey of Digital Vanguard organizations reinforces the point. The highest performing digital organizations do better when business and technology leaders are more aligned on execution and outcome ownership.

At the execution level, the familiar problems remain, but they look different when viewed through a planning lens. PMI’s communications research found that one out of five projects is unsuccessful due to ineffective communication, and PMI’s later analysis of communication failures linked poor communication to more than half of the projects that fail to meet business goals. The important nuance is that communication is not merely a soft skill problem. It is often a failure to express the implications of planning decisions in a form that the business can act on. An unclear scope can be a weak scenario definition. Poor requirements can reflect commitments made before constraints were visible. Scope creep is often an unmanaged consequence. Weak sponsorship often reflects weak evidence. Poor change control often means the organization can log a change but cannot calculate its ripple effects.

Why algorithmic planning is now a governance requirement

This is where the conversation needs to become more precise. Continuous scenario planning is valuable, but it only becomes decision-grade when it is supported by algorithmic planning. In large programs and portfolios, governance cannot rely on static reporting, intuition or periodic review alone. It must be able to calculate the impact of change quickly, expose hard constraints clearly and place dependencies, capacity limits, sequencing conflicts and trade-off consequences where they belong, at the center of decision-making. Without that discipline, governance is mostly a matter of interpretation. With it, governance becomes evidence-based control. That conclusion follows directly from the documented failure patterns of PMI, BCG, McKinsey, Bain, Deloitte and Gartner.
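
What "calculating the impact of change" means can be made concrete with a deliberately minimal sketch: a portfolio held as a dependency graph, where a slip in one initiative is propagated to everything downstream. All initiative names and figures below are invented for illustration, not drawn from any of the cited research.

```python
from collections import defaultdict

# Hypothetical portfolio: each initiative lists what it depends on,
# plus its planned finish in weeks from now. All values are invented.
DEPENDS_ON = {
    "data-platform": [],
    "customer-api": ["data-platform"],
    "mobile-app": ["customer-api"],
    "reporting": ["data-platform"],
}
PLANNED_FINISH = {"data-platform": 10, "customer-api": 16, "mobile-app": 24, "reporting": 14}

def ripple(slipped, weeks):
    """Propagate a slip through every downstream initiative (worst case)."""
    # Invert the dependency map so we can walk downstream.
    downstream = defaultdict(list)
    for proj, deps in DEPENDS_ON.items():
        for dep in deps:
            downstream[dep].append(proj)
    impact = {slipped: weeks}
    queue = [slipped]
    while queue:
        current = queue.pop()
        for child in downstream[current]:
            # Worst case: the full delay cascades to each dependent.
            if impact[current] > impact.get(child, 0):
                impact[child] = impact[current]
                queue.append(child)
    # Return the revised finish week for every affected initiative.
    return {p: PLANNED_FINISH[p] + d for p, d in impact.items()}

# What happens if the shared data platform slips by six weeks?
print(ripple("data-platform", 6))
```

Even a toy model like this answers the governance question directly: in this invented portfolio, a six-week slip in the shared platform pushes the mobile launch from week 24 to week 30. The point is not the code but the discipline: consequences are computed, not asserted.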

AI makes this requirement even more important. Used well, AI can be a powerful interface for senior leaders, helping them interrogate scenarios, surface anomalies, summarize risks and engage more directly with the planning environment. Used badly, it can do the opposite. If AI is not tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations, it can turn supposition into false confidence. That is dangerous in portfolio and program governance, where plausible-sounding answers are not the same as decision-grade answers. The sequence matters. First, the organization needs a locked-down, calculation-based planning model with clear borders. Then AI can sit on top of that model as an accelerator, interpreter and executive interface. Without those boundaries, AI can easily magnify weak assumptions rather than expose them. This caution is consistent with PMI’s strategy execution framing and with EY’s 2026 CEO Outlook and Accenture’s AI reinvention thesis, both of which insist that AI must be scaled with discipline and strong foundations.

Strategic intent is inherently directional. Governance must be exacting. The bridge between the two is algorithmic planning. It is the mechanism that translates ambition into modeled consequences by testing scenarios, exposing constraints, mapping dependencies and recalculating trade-offs as conditions change. Without that bridge, governance becomes subjective. With it, leadership can distinguish between what is desirable, what is feasible and what is now at risk. That is why constraints, dependencies and capacity should not be treated as soft considerations. They are the black-and-white rules of execution.

AI is most valuable when it explains a sound planning model, not when it improvises one.

Why continuous scenario planning matters

Continuous scenario planning becomes strategically important when it gives leaders a way to compare options side by side, test trade-offs before they commit, expose bottlenecks early, map dependency cascades and continuously recalculate what changes when budgets, priorities or constraints shift. That directly addresses many of the structural drivers identified above. It does not solve every reason projects fail. It does attack a large share of the root causes beneath them.
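
As a minimal, hypothetical sketch (all demands and value scores are invented), comparing options side by side means scoring each candidate scenario against the same hard constraints:

```python
# Toy scenario comparison; every figure here is invented for illustration.
CAPACITY = 100  # shared delivery capacity in person-weeks per quarter

INITIATIVES = {  # name: (demand in person-weeks, expected value score)
    "core-upgrade": (40, 8),
    "new-channel": (35, 9),
    "compliance": (25, 6),
    "ai-pilot": (30, 7),
}

def evaluate(scenario):
    """Score one candidate scenario against the shared capacity constraint."""
    demand = sum(INITIATIVES[name][0] for name in scenario)
    value = sum(INITIATIVES[name][1] for name in scenario)
    # Capacity is a hard constraint, not a soft consideration.
    return {"demand": demand, "value": value, "feasible": demand <= CAPACITY}

scenarios = {
    "A: fund everything": list(INITIATIVES),
    "B: defer the AI pilot": ["core-upgrade", "new-channel", "compliance"],
}
for name, picks in scenarios.items():
    print(name, evaluate(picks))
```

In this invented example, scenario A scores higher on paper but breaches the capacity constraint, while scenario B is feasible. Surfacing that contrast before leaders commit is the whole value of the exercise.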

Seen this way, many of the familiar 25 reasons collapse into a smaller set of systemic failures. An unclear scope is often the result of a weak scenario definition. Poor requirements are often commitments made before constraints and dependencies were visible. Scope creep is often an unmanaged consequence. Poor communication often reflects fragmented planning logic, with business, finance and delivery working from different maps. Resource shortages are often hidden by overcommitment. Weak sponsorship often reflects weak evidence. Poor change control usually means the organization can record changes but cannot model impact. At the project level, teams can sometimes survive these problems through heroic effort. At the portfolio level, heroics stop working. Constraints win. Bottlenecks win. The question is whether leadership can see them early enough to respond intelligently.

PMI’s newer M.O.R.E. framework supports this shift. PMI argues that project outcomes improve materially when organizations manage perceptions, own success, relentlessly reassess and expand perspective. Two of those ideas matter especially here. Relentlessly reassess describes a discipline of continuous adjustment as conditions shift. Managing perceptions requires communicating value and risk in ways stakeholders can act on. That is remarkably close to what mature continuous scenario planning should do at scale.

Why the urgency is rising

The pressure on CIOs is increasing, not falling. EY’s 2026 CEO Outlook says leaders are pursuing growth and adaptability through bold AI transformation, with 2026 becoming a turning point as organizations move from pilots to scaled enterprise use. Accenture makes a similar point from a different angle, arguing that organizations that build strong AI foundations will be better positioned to reinvent, compete and achieve new levels of performance. Those are reasonable claims, but they do not reduce the need for disciplined planning. Faster change increases the premium on a planning system that can calculate consequences quickly and credibly. AI can accelerate analysis, summarize scenarios and improve executive access to planning insight. It cannot replace the need to govern trade-offs across budgets, capacity, architecture, timing and risk. In fact, AI is only trustworthy in this context when it is tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations. Otherwise, it risks producing plausible but unsupported answers.

What CIOs should demand

For CIOs, this leads to a more useful conclusion than simply restating the 25 reasons projects fail. Large programs usually fail because the enterprise cannot see and govern the interaction of those reasons in time. A modern control system for change, therefore, needs at least six capabilities: a unified planning model across priorities, budgets and capacity; side-by-side scenario comparison; interdependency mapping; early visibility into bottlenecks; continuous recalculation as conditions shift; and executive-facing summaries that turn data into decisions. Those are the capabilities that make continuous scenario planning strategically important. The question is no longer whether planning happens. It already does. The real question is whether planning remains static, fragmented and largely narrative, or whether it becomes dynamic, scenario-based and decision-grade.
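
The fifth capability, continuous recalculation, can likewise be sketched in miniature: when a budget shifts, the same model re-derives the highest-value feasible portfolio rather than leaving leaders to renegotiate from intuition. The initiatives and figures below are hypothetical, and a real planning engine would handle far richer constraints than this brute-force toy.

```python
from itertools import combinations

# Hypothetical initiatives: name -> (cost, expected value). All numbers invented.
ITEMS = {"modernize": (50, 9), "expand": (40, 7), "automate": (30, 6), "pilot": (20, 4)}

def replan(budget):
    """Brute-force re-selection of the highest-value portfolio that fits the budget."""
    best, best_value = [], 0
    names = list(ITEMS)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(ITEMS[n][0] for n in combo)
            value = sum(ITEMS[n][1] for n in combo)
            if cost <= budget and value > best_value:
                best, best_value = list(combo), value
    return best, best_value

# The same model answers "what changes if the budget is cut from 120 to 90?"
print(replan(120))
print(replan(90))
```

Note that in this toy example the cut does not simply drop the cheapest item: the feasible mix is recomposed, which is exactly the kind of non-obvious consequence static reporting hides.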

That is the real fix hidden beneath the 25 symptoms.

This article is published as part of the Foundry Expert Contributor Network.

The increasing need to expand a tech knowledge base

April 10, 2026, 07:00

Technological sovereignty is often debated in terms of jurisdiction, compliance, or vendor origin. All of that matters, but it leaves out the issue of retaining critical knowledge, which directly affects the CIO.

Case in point: British bank TSB undertook a critical platform migration in 2018. The operation relied on a structure that, on paper, offered every guarantee: a validated provider, testing, and formal program governance.

Once the migration was complete, the new platform began experiencing technical difficulties, resulting in a significant disruption to branch, telephone, online, and mobile banking services that affected a large portion of its 5.2 million customers. The situation was so complex that key problems weren’t resolved until the end of the year.

The crisis also had a significant economic impact. Banco Sabadell, which acquired TSB in 2015, had to absorb losses exceeding €200 million, and four years later, in December 2022, British regulators imposed a combined fine of nearly £49 million on the bank for failures in operational risk management, governance, and outsourcing supervision related to the migration. Then, in April 2023, the Prudential Regulation Authority, the Bank of England’s prudential supervisor, personally fined the then CIO £81,620 for failing to take reasonable steps to ensure adequate supervision.

The lesson from this case isn’t that a large migration can go wrong. Every CIO knows that can happen. It’s that TSB didn’t have the capacity to govern and question a critical vendor dependency.

The constant of knowledge

When we talk about technological dependence, we usually think of market concentration, long-term contracts, proprietary formats, migration difficulties, or negotiating power with the vendor. All of that exists and will continue to be important. But knowledge dependence is another form that rarely comes up in the conversation, even though it has a greater impact on the CIO’s day-to-day work.

This occurs when the organization doesn’t retain enough internal knowledge to discuss the technology, or subject it to serious scrutiny.

The TSB case was a clear example. Oversight of a critical dependency relied too heavily on unquestioned supplier guarantees. In other words, there was insufficient internal capacity to rigorously govern the outsourcing relationship.

With this example, the meaning of lock-in changes. It no longer manifests only when migration becomes prohibitive or when an architecture becomes unchangeable. It begins earlier, when the company is still operating its technology but can no longer reliably evaluate it.

In fact, this dependency isn’t easy to perceive because it coexists with a sense of reasonable operation. The services are available and the providers respond, and yet risks are being taken.

On the other hand, it forces a broader definition of sovereignty. The issue goes beyond where the data resides, under what jurisdiction a provider operates, and what degree of regulatory exposure a platform introduces.

Another question is how much critical knowledge the company retains about what underpins its operations. From this perspective, maintaining sovereignty doesn’t mean taking back ownership of the technology or internalizing its implementation, which keeps the conversation from being reduced to a legal or geopolitical debate.

Hidden knowledge dependencies

The common mistake when discussing tech dependence is to focus solely on the noisiest areas like cloud computing, AI, large platforms, and data storage. When discussing knowledge dependence, it’s essential, but not always easy, to look inward.

One area to consider is the architecture. Even if systems are functioning, it may become increasingly difficult to answer basic questions, like why the environment is designed this way, which parts are replaceable, or what changing a critical layer would entail. If this is the case, it’s a sign of dependency.

Another aspect is the operation. Outsourcing execution can make perfect sense, but problems arise when understanding is also outsourced. That is, when the internal team needs to go externally to make decisions or solve problems.

Dependency can also be hidden within the complexity of technological layers. In other words, it doesn’t necessarily have to be directly linked to a large platform, but to the set of integrations and connectors surrounding it, or a partner ecosystem that’s become a tangled mess. If no one understands the complete picture, dependency exists.

The knowledge CIOs can’t afford to lose

All of this shifts the focus to the specific responsibility of knowledge. Not all capabilities carry the same weight or have the same strategic value. But there’s a decisive threshold: the moment when the organization no longer understands a dependency well enough to manage it. From that point on, the risk extends beyond the operation itself. The quality of decisions deteriorates, the CIO’s ability to discuss risks or costs diminishes, and many aspects end up being accepted without clear rationale.

If it isn’t detected in time, there’s a risk of reaching a point of no return, where control of the technological roadmap is lost.

The debate for the CIO

The solution isn’t necessarily to distrust suppliers or outsourcing on principle. There’s a more subtle and demanding issue for the CIO: deciding clearly what knowledge can and can’t be shared externally. So the debate on sovereignty needs to become more pragmatic, more linked to the company’s actual capacity to understand what it depends on, and to change course when necessary.

In an environment of complex platforms, encapsulated services, and outsourced intelligence, preserving decision-making capacity will be an indisputable condition for technological autonomy.
