Security | CIO

How to develop an effective business continuity plan

May 8, 2026, 08:19

Organizations face an increasingly threatening and volatile operating environment. Executives report rising risks across multiple areas, including cyber fraud, phishing, and supply chain disruptions, according to the World Economic Forum's ‘Global Cybersecurity Outlook 2026’ report.

At the same time, executives are increasingly concerned about how artificial intelligence, digital interdependencies, geopolitics, and today's complex operating environment raise the risk involved in protecting their organization's technology and ensuring business continuity. Two-thirds (66%) of organizations have increased financial or resource support for business continuity and resilience in response, according to the Business Continuity Institute's ‘State of Continuity and Resilience 2025’ report.

Even so, business leaders are bracing for increasingly frequent high-impact incidents, which makes a solid business continuity plan more critical than ever. “Every company must operate with the mindset that it will face a disaster, and every one needs a plan to address the different potential scenarios,” says Goh Ser Yoong, CISO of Ryt Bank and a member of ISACA's Emerging Trends Working Group.

A business continuity plan gives organizations the best chance of weathering a disaster by providing predefined instructions on who should perform which tasks, and in what order, to keep the business viable. Without such a plan, the organization will take longer than necessary to recover from an event or incident, if it recovers at all.

What is a business continuity plan?

A business continuity plan (BCP) is a strategic playbook created to help an organization maintain or quickly resume its business functions in the face of a disruption, whether caused by a natural disaster, civil unrest, a cyberattack, or any other threat to business operations.

“Continuity is about knowing the minimum downtime or loss an organization can absorb and still remain viable and operating. It's about how quickly you can recover before the situation escalates for your customers or your business, and which systems and processes you must restore, and in what order,” says Matt Chevraux, managing director at FTI Consulting.

As such, a business continuity plan describes the procedures the organization must follow to minimize downtime, covering business processes, assets, human resources, business partners, and more.

A business continuity plan is not the same as a disaster recovery plan, which focuses on restoring IT infrastructure and operations after a crisis. However, a disaster recovery plan is part of the overall strategy for ensuring business continuity, and the business continuity plan should inform the measures detailed in an organization's disaster recovery plan. The two are closely related, which is why they are often grouped under the term BCDR.

Business continuity also differs from resilience, although the two are interrelated. Business continuity focuses on restoring operations in the event of a disruption, while business resilience refers to an organization's strategy for responding to all kinds of internal and external forces in order to ensure its long-term survival and success.

Elements of business continuity planning today

Disruptive events are inevitable, according to researchers, risk officers, and executive advisors. “Gone are the days when organizations used business continuity or resilience programs as a kind of insurance in case something went wrong. Now, organizations must face reality; it is only a matter of time before a catastrophic incident affects customers,” Forrester Research writes in its ‘Business Continuity Management Software Landscape, Q1 2026’ report.

Executives are not only operating in an environment where a catastrophic incident is a matter of ‘when,’ not ‘if’; they are also working in a world where the complexity of business operations has increased dramatically.

Organizations must now factor a growing volume of AI use cases, third-party vendors, and digital connections into their continuity plans, says Ross Tisnovsky, partner at Everest Group and head of the firm's CIO research and advisory practice.

For example, today's plans must address AI availability, as well as its accuracy and its cyber risks, such as the threat of prompt injection attacks, he explains, noting that current continuity plans must account for newer concerns. “The concern with infrastructure and applications used to be availability, but what if the AI gives you garbage? That degradation in output quality is a continuity concern.”

Likewise, organizations must assess and address their growing operational dependence on third parties, whether hyperscalers or LLM providers, a factor that also adds more complexity to business continuity plans, says Tisnovsky.

“We now have all these vendors and, on top of that, we depend far more on APIs and the service mesh. We depend on potential connections we aren't even aware of,” he explains. “That can create exposure you can't control.”

All of these considerations come on top of the myriad conventional risks a business continuity plan has always had to address, Tisnovsky adds.

Creating (and updating) a business continuity plan

Whether creating the organization's first business continuity plan or updating an existing one, the process involves several essential steps.

Assess business processes for criticality and vulnerability: Business continuity planning begins with understanding what matters most to the business. Assess business processes to determine which are the most critical; which are the most vulnerable, and to what types of incidents; and what the potential losses are if those processes are disrupted for a day, a few days, a week, or longer.

“Start with a business impact analysis: What are the critical elements that keep the business running?” recommends Lawrence Bilker, CIO of Lift Solutions Holdings. “Identify the business processes and systems that keep the company running.”

This assessment is more demanding than ever due to the complexity of today's hybrid workplace, the modern IT environment, and the reliance on business partners and third-party providers to run or support critical processes.

As a result, the assessment requires an inventory not only of key processes but also of their supporting components (including IT systems, networks, personnel, and third-party providers), as well as the risks those components face, says Goh.

Determine your organization's RTO and RPO: The next step is to determine the organization's recovery time objective (RTO), which is the target time between the point of failure and the resumption of operations, and its recovery point objective (RPO), which is the maximum amount of data loss the organization can withstand.

Each organization has its own RTO and RPO based on its business, industry, regulatory requirements, and other operational factors. Moreover, different parts of a company may have different RTOs and RPOs, which executives must set.
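As a minimal illustration of how these objectives can be made checkable, the sketch below records per-process RTO/RPO values and flags any process whose backup cadence cannot meet its stated RPO. The process names and figures are hypothetical, chosen only to show the shape of the check:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ProcessObjective:
    name: str
    rto: timedelta              # target time to resume operations
    rpo: timedelta              # maximum tolerable data loss window
    backup_interval: timedelta  # how often backups actually run

def rpo_violations(processes):
    """Return processes whose backup cadence cannot meet the stated RPO.

    Worst case, a failure just before the next backup loses one full
    backup interval of data, so the interval must not exceed the RPO.
    """
    return [p.name for p in processes if p.backup_interval > p.rpo]

# Hypothetical per-process objectives for illustration
processes = [
    ProcessObjective("order-processing",
                     rto=timedelta(hours=4),
                     rpo=timedelta(minutes=15),
                     backup_interval=timedelta(minutes=5)),
    ProcessObjective("hr-reporting",
                     rto=timedelta(days=1),
                     rpo=timedelta(hours=4),
                     backup_interval=timedelta(hours=24)),
]

print(rpo_violations(processes))  # ['hr-reporting']
```

The same per-process structure can carry the different RTOs and RPOs that different parts of the company set, rather than a single organization-wide pair.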

Some companies “need to be operational at all times without failures, so they need high availability, which means one or two backups,” says Bilker.

Detail the steps, roles, and responsibilities for continuity: Business leaders must then use the RTO and RPO, along with their business impact analysis, to determine the specific tasks to be performed, who should carry them out, and in what order, to ensure business continuity.

A common tool for business continuity planning is a checklist covering supplies and equipment, the location of data backups and backup sites, where the plan is available and who should have it, and contact information for emergency services, key personnel, and backup-site providers.
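A checklist like the one described can be encoded so that gaps are caught mechanically rather than by eye. The sketch below is a minimal Python illustration; the section names are assumptions drawn from the paragraph above, not any standard:

```python
# Hypothetical checklist sections based on the items listed above
REQUIRED_SECTIONS = {
    "supplies_and_equipment",
    "backup_data_locations",
    "backup_sites",
    "plan_distribution",   # where the plan lives and who holds a copy
    "emergency_contacts",  # emergency services, key staff, backup-site providers
}

def missing_sections(checklist: dict) -> set:
    """Return required sections that are absent or left empty."""
    return {s for s in REQUIRED_SECTIONS if not checklist.get(s)}

# A draft checklist with two sections still unfilled
draft = {
    "supplies_and_equipment": ["satellite phones", "spare laptops"],
    "backup_data_locations": ["offsite vault, region B"],
    "emergency_contacts": ["fire: 911", "DR coordinator: J. Doe"],
}

print(sorted(missing_sections(draft)))  # ['backup_sites', 'plan_distribution']
```

Running a check like this before each review cycle gives the recovery coordinator a concrete list of what the plan still lacks.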

You do not need to identify every possible risk to the organization when building or updating a business continuity plan, says Kayne McGladrey, a senior member of the nonprofit professional association IEEE.

The list of potential impact scenarios is long. Rather than trying to identify them all, McGladrey advises identifying the most likely and representative types of incidents, then focusing on how those incidents could affect the business. From there, leaders should determine which impacts would be intolerable given the organization's risk tolerance. “Think about business risks: not technical risks or causes, but business impacts,” McGladrey says.

The goal, he stresses, is to create a business continuity plan capable of telling the organization how to recover from an unexpected event of any kind.

The importance of testing the business continuity plan

Testing and drills are other critical components of business continuity planning, because they show whether, and how well, a plan will work. They also help prepare stakeholders for a real incident, building the muscle memory needed to respond quickly and confidently during a crisis.

“Testing and staff training are essential so that everyone knows what to do in the event of a failure,” Bilker notes.

They also help identify gaps in the plan. For example, Bilker notes that testing and training might reveal missing backups or alternatives for critical systems, vendors, or people.

In addition, testing and training help identify where objectives may be misaligned. For example, executives may have deprioritized restoring certain IT systems, only to realize during a drill that those systems are essential to supporting critical processes.

Types and frequency of testing

Many organizations test a business continuity plan two to four times a year. Experts say the frequency of testing, as well as of reviews and updates, depends on the organization: its industry, its pace of innovation and transformation, turnover among key personnel, the number of business processes, and so on.

Common tests include tabletop exercises, structured walk-throughs, and simulations. Test teams typically consist of the recovery coordinator and members from each functional unit.

A tabletop exercise usually takes place in a conference room, where the team combs through the plan, looks for gaps, and makes sure all business units are represented.

In a structured walk-through, team members review their components of the plan in detail to identify weaknesses. The team often runs the test with a specific disaster in mind. Some organizations incorporate disaster drills and role-playing into the structured walk-through. Any weaknesses should be corrected, and an updated plan distributed to all relevant staff.

Some experts advise conducting a full emergency evacuation drill at least once a year.

Disaster simulation testing, which can be quite complex, should also be performed annually. For this test, create an environment that simulates an actual disaster, with all the equipment, supplies, and personnel (including business partners and vendors) that would be needed. The simulation helps determine whether the organization can carry out critical business functions during a real event.

During each phase of business continuity plan testing, include some new employees on the test team. A fresh set of eyes may catch gaps or missing information that experienced team members could overlook.

Reviewing and updating the business continuity plan should be an ongoing process. Otherwise, plans become outdated and are useless when they are needed. “How often it should be updated should be determined by the business,” says Tisnovsky.

Bring key personnel together at least once a year to review the plan and discuss areas that need changes. Before the review, solicit feedback from staff to incorporate into the plan. Ask all departments or business units to review the plan, including branch offices and other remote units.

In addition, a strong business continuity function requires reviewing the organization's response whenever a real incident occurs. This lets executives and their teams identify what the organization did well and where it needs to improve.

Additional best practices

According to management advisors and experienced executives, the following best practices can help organizations with their business continuity planning:

Use AI to help create and maintain the plan: Zach Rossmiller, associate vice president and CIO at the University of Montana, uses a custom generative AI tool to analyze the organization's processes, procedures, infrastructure, and architecture, as well as its business continuity plan, to identify potential gaps, such as the need to test the university's data center generators. Given the tool's performance, Rossmiller advises others to use AI for business continuity planning and testing. Chevraux says AI can also be used for data discovery, mapping, and conducting business impact assessments.

For his part, Bilker stresses the importance of including communication plans as part of the business continuity plan. “During an incident it's hard to remember who gets what information and when, and who distributes it, so the business continuity plan should spell that out,” he says.

Likewise, the plan should identify who holds which roles and responsibilities during and after an incident, to speed up the response and reduce confusion.

Bilker also advises organizations to review their continuity plans whenever a major business change occurs. Entering new markets or switching from one key cloud provider to another should trigger an update to the business continuity plan.

How to ensure buy-in and awareness for the business continuity plan

Every business continuity plan must have the backing of senior management. That means top leadership should be represented when the plan is created and updated; no one can delegate that responsibility to subordinates. Moreover, the plan is more likely to stay current and viable if senior management prioritizes it by dedicating time to proper review and testing.

Leadership is also key to promoting user awareness. If employees don't know about the plan, how will they be able to react appropriately when every minute counts?

Although distributing the plan and delivering training can be handled by business unit leaders or HR staff, someone from senior management should kick off the training and underscore its importance. This carries more weight with employees, giving the plan greater credibility and urgency.


Your CEO just got AI FOMO. Here are 6 tips on what to do next.

May 8, 2026, 07:00

Every CIO I know has had some version of this conversation: their CEO comes back from a golf trip with their buddy, or a conference with peers, and is told AI is about to automate everything at their company, from HR to marketing and finance. No humans in the loop, just AI. The CEO then calls an all-hands Monday morning, and the CIO is suddenly on the hook to make it all happen.

The instinct for CEOs to chase unsubstantiated claims is understandable since they’re responding to competitive pressure. But that leaves CIOs responsible to close the gap between ambition and reality. Making AI work in an organization with decades of accumulated process, permission frameworks, and cultural inertia is very different from deploying it in a demo.

The best response isn’t to push back on the ambition, but to redirect it. Translate the CEO’s vision into an honest map of what has to happen for the organization to get there, including the infrastructure, governance, and training. That helps convert the knee-jerk compulsion to move faster into a concrete plan that leadership can get behind.

Here’s what CIOs should actually be focused on to get where their CEOs want them to go, regardless of what’s discussed on the links.

1. Start where AI can build its own credibility

The hype machine wants you to climb Everest on day one. Instead, identify the repetitive tasks where AI can prove itself on familiar ground — the workflows your team already knows well, where results are easy to verify and the bar for trust is attainable.

The goal is the Eureka moment when a skeptic on your team sees a real result and becomes a believer. Those moments compound. When someone has seen AI make their work easier in a context they understand, they’re more likely to help you move things forward. You can’t force that change, but you can engineer the conditions for it.

2. Models will commoditize. Context will not.

Every few months, a new model claims to be smarter, faster, and cheaper than the last one. Don’t be distracted by that race. The lasting advantage in enterprise AI doesn’t come from which model you’re running; it comes from the quality, governance, and semantic clarity of the data feeding it. Enterprises that invest in consistent business definitions, well-structured data, and clear lineage will outperform those that don’t, regardless of which model is in fashion. Context is your competitive moat. Focus on building that.

3. Nail down the permissions

In a world of dashboards, you know exactly what data will appear on a given page, so you can set permissions in advance for who can access it. In an AI world, the system can generate outputs that were never pre-designed. So how do you determine who has the right to see a result that was never anticipated?

Before deploying any agent that acts on someone’s behalf, such as filing a request, surfacing payroll data, or populating a record, first determine whether your existing permissions and access control frameworks can handle outputs that were never planned for. Most can’t. This is a prerequisite of what your CEO is asking for: the unglamorous infrastructure work that determines whether your AI is trustworthy in production. It needs to happen before you scale, not after.
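One way to picture this prerequisite is a deny-by-default filter applied to agent output before it reaches the requester. The sketch below is illustrative only; the field names, roles, and ACL shape are hypothetical, not drawn from any particular product:

```python
# Hypothetical field-level ACL: which roles may see which fields.
FIELD_ACL = {
    "salary": {"hr", "finance"},
    "email":  {"hr", "it", "manager"},
    "name":   {"hr", "it", "manager", "employee"},
}

def redact_for(role: str, record: dict) -> dict:
    """Drop any field the requesting role is not entitled to see.

    Deny by default: a field the ACL has never heard of is withheld,
    which is the safer posture for outputs that were never pre-designed.
    """
    return {k: v for k, v in record.items()
            if role in FIELD_ACL.get(k, set())}

# An agent-generated record that was never designed as a dashboard page
agent_output = {"name": "J. Doe", "email": "jdoe@example.com", "salary": 95000}

print(redact_for("manager", agent_output))
# {'name': 'J. Doe', 'email': 'jdoe@example.com'}
print(redact_for("employee", agent_output))
# {'name': 'J. Doe'}
```

The design choice worth noting is the default: a dashboard-era ACL only lists what it anticipated, so anything unanticipated must fail closed rather than open.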

4. Build an editing culture, not a writing one

For decades, engineers, analysts, and operations teams have been trained to write code, build reports, and define new processes. AI upends that. The skill now is editing — auditing what the system produces, catching what it got wrong, and knowing where to push back.

The truth is most people aren’t naturally good at editing because they’ve never had to be. That’s a skills gap that needs to be closed early on. Invest in helping engineers, analysts, and managers develop the judgment to evaluate AI outputs, not just generate them. Editing must become a core enterprise competency.

5. Measure behavior change, not tool adoption

Login data is a vanity metric. If your engineers are accessing AI coding tools but aren’t changing how they build, you haven’t adopted anything. The metric that makes more sense is productivity output. In agile terms, a team that completes 20 story points per sprint should hit about 28 with AI, not because the tools are magic, but because the repetitive work gets faster. If you’re not seeing that, you’re measuring the wrong thing. Pay attention to output, not usage metrics.
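The velocity comparison above reduces to simple arithmetic. A small sketch, using the article’s own 20-to-28 story point example:

```python
def velocity_uplift(before: float, after: float) -> float:
    """Percent change in completed story points per sprint.

    Measures output (work completed), not usage (tool logins).
    """
    return (after - before) / before * 100

# The article's example: 20 points per sprint before AI, 28 after
print(velocity_uplift(20, 28))  # 40.0
```

Tracking this per team over several sprints, rather than counting logins, is what distinguishes behavior change from mere tool adoption.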

6. Reframe your organization’s relationship with failure

The instinct to de-risk everything made sense when software deployments were expensive and slow to reverse. AI works differently. The outputs are probabilistic, the iteration cycles are fast, and being overly cautious can cost valuable time. CIOs need to give teams permission to experiment in ways that feel uncomfortable by traditional enterprise standards, all while building the feedback loops that make fast failure safe. That culture shift has to be modeled from the top.

FOMO isn’t going away

CEOs will keep getting pulled into cycles of urgency and FOMO, and that pressure will keep landing on CIOs. The organizations that make real progress will be the ones that redirect that energy into infrastructure that makes AI trustworthy, measurement systems that show what’s working, and cultural changes that make adoption stick. That’s the agenda that’ll move your organization forward.


CISA Launches CI Fortify to Defend Critical Infrastructure From Nation-State Cyber Threats

CI Fortify

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has launched a new initiative called “CI Fortify” aimed at helping critical infrastructure operators prepare for disruptive cyberattacks linked to geopolitical conflicts. The initiative comes amid growing concerns over nation-state cyber threats targeting operational technology (OT) systems that support essential services across the United States.

The CI Fortify initiative focuses on improving critical infrastructure resilience through two key objectives: isolation and recovery. CISA said the effort is designed to help operators maintain essential operations even if adversaries compromise telecommunications networks, internet services, or industrial control systems.

According to the agency, nation-state actors are no longer limiting their activities to espionage. Instead, threat groups have increasingly been pre-positioning themselves inside critical infrastructure environments to potentially disrupt or destroy systems during future geopolitical conflicts.

CI Fortify Initiative Focuses on Isolation and Recovery

Under the CI Fortify initiative, CISA is urging critical infrastructure organizations to assume that third-party communications and service providers may become unreliable during a crisis. Operators are also being asked to plan under the assumption that threat actors may already have some level of access to OT networks.

Nick Andersen, Acting Director at CISA, emphasized the need for organizations to prepare for worst-case operational scenarios. “In a geopolitical crisis, the critical infrastructure organizations Americans rely on must be able to continue delivering, at a minimum, crucial services,” Andersen said. “They must be able to isolate vital systems from harm, continue operating in that isolated state, and quickly recover any systems that an adversary may successfully compromise.”

The isolation strategy outlined under CI Fortify involves proactively disconnecting operational technology systems from external business networks and third-party connections. CISA said this approach is intended to prevent cyber impacts from spreading into OT environments while allowing organizations to continue delivering essential services in a degraded communications environment.

The agency advised operators to identify critical customers, including military infrastructure and other lifeline services, and determine the minimum operational capabilities needed to support them during emergencies. CISA also recommended updating engineering processes and business continuity plans to support safe operations for extended periods while systems remain isolated.

Recovery Planning Central to Critical Infrastructure Resilience

Alongside isolation, the CI Fortify initiative places strong emphasis on recovery planning. CISA urged operators to maintain updated system documentation, create secure backups of critical files, and regularly practice system replacement or manual operational transitions.

The agency noted that organizations should also identify communications dependencies that could complicate recovery efforts, such as licensing servers, remote vendor access, or upstream network connections. CISA encouraged operators to work closely with managed service providers, system integrators, and vendors to understand potential failure points and establish alternative recovery pathways.

The initiative also highlights broader benefits of emergency planning beyond cybersecurity incidents. According to CISA, the same planning processes can help organizations maintain operations during weather-related disruptions, equipment failures, and safety emergencies. The agency said isolation planning can help cut off command-and-control access to compromised systems, while strong recovery preparation can reduce incident response costs and shorten recovery timelines.

Security Vendors and Service Providers Asked to Support CI Fortify

The CI Fortify initiative extends beyond infrastructure operators and calls on cybersecurity vendors, industrial automation suppliers, and managed service providers to support resilience planning efforts. Industrial control system vendors are being encouraged to identify barriers that could interfere with isolation and recovery procedures, including licensing restrictions and server dependency issues.

Managed service providers and integrators are expected to assist organizations in engineering updates, local backup collection, and recovery documentation planning. Meanwhile, security vendors are being asked to support threat monitoring and provide intelligence if nation-state actors shift from espionage-focused activity to destructive cyber operations. CISA also requested vendors share information related to tactics that could undermine recovery or bypass isolation protections, including malicious firmware updates and vulnerabilities affecting software-based data diodes.

Volt Typhoon Cyberattacks Continue to Shape U.S. Cybersecurity Strategy

The launch of CI Fortify is closely tied to ongoing concerns surrounding the Volt Typhoon cyberattacks, which U.S. officials have linked to Chinese state-sponsored threat actors. CISA’s initiative specifically references the Volt Typhoon campaign as an example of how adversaries have attempted to establish long-term access inside U.S. critical infrastructure systems to potentially support disruptive actions during military conflicts.

The Volt Typhoon operation first became public in 2023, when U.S. authorities revealed that Chinese hackers had infiltrated multiple sectors of American critical infrastructure. Former CISA Director Jen Easterly stated in 2024 that the agency had identified and removed Volt Typhoon intrusions across several sectors. She later reiterated in 2025 that efforts continued to focus on identifying and evicting Chinese cyber actors from critical infrastructure environments.

Despite these operations, cybersecurity researchers and some government officials have warned that Chinese threat actors may still retain access to portions of critical infrastructure networks. Several experts have argued that nation-state groups remain deeply embedded in certain environments despite years of remediation efforts. With the CI Fortify initiative, CISA appears to be shifting focus toward operational resilience, recognizing that prevention alone may not be sufficient against sophisticated nation-state cyber threats targeting U.S. critical infrastructure.

AI FOMO: When AI is the wrong answer to the right problem

May 6, 2026, 08:00

Most AI project failures I have seen do not announce themselves cleanly. There is rarely a moment where someone stands up and admits to making the wrong call. Instead, the project quietly underdelivers. The team makes constant adjustments; leadership loses confidence and eventually the whole thing is filed away under “we tried AI and it did not work out.” This happens without anyone doing a real accounting of what the decision actually cost.

I was close to one of those situations not long ago. An organization had a system built around county-level values that drove a core business process. Over time, those values had drifted and the outputs were degrading in ways that affected the bottom line. The path forward was not complicated: A targeted update to the underlying values and some lightweight tooling to detect drift going forward. It would have been a few weeks of focused work at a modest cost with high confidence in the outcome.

What happened instead was that the organization decided to rebuild the system entirely using a non-deterministic AI model. This is worth pausing on because the original problem was deterministic by nature. It had known inputs, predictable logic and a correct answer that did not change based on inference or probability. Reaching for a non-deterministic solution in that context was not a technology decision; it was a category error. I understand why it was made. AI was consuming every boardroom conversation at the time and there was real pressure to be seen doing something proportionate to the moment.

The new system appeared to correct the original problem for a while, and it looked like the right call. Then the drift returned, worse than before, and the expense they had been trying to eliminate returned at a scale that dwarfed the original issue. The organization had applied the wrong class of solution to a well-defined problem, and nobody in the room had stopped to ask whether that mattered.

The capital allocation problem

This is not an isolated story. Harrison Allen Lewis, a three-time CIO, recently published a piece that puts a number on the broader pattern. He argues that in most enterprises, somewhere between 15–25 percent of technology spend is tied up in redundant systems that deliver no material business value. This trend is mirrored in recent Deloitte research on the “AI ROI paradox”: While 85 percent of organizations increased their AI spend in 2025, the average payback period for these investments has stretched to nearly four years. This is a significant departure from the traditional seven- to 12-month window for enterprise technology. These are not technology failures; they are capital allocation problems.

What sits underneath that number is AI FOMO. The fear of being the organization that did not move fast enough is real and sometimes legitimate. But FOMO is a particularly dangerous input to a capital allocation decision because it optimizes for the appearance of action rather than the quality of the outcome. It pushes organizations toward the sophisticated answer when the precise one would have been faster, cheaper and more durable.

The result is spend that accumulates without a clear line back to value. Boston Consulting Group recently found that while 88 percent of organizations have begun AI pilots, only 5 percent have managed to reap substantial financial gains, and roughly 60 percent are failing to achieve any material value at all despite substantial investment. The antidote is discipline around how AI investments are evaluated, governed and killed when the evidence stops supporting them. That discipline has to start before the build decision, not after the drift sets in.

The pre-build diagnostic

Before an organization reaches for a governance framework, there is a more fundamental question that rarely gets the attention it deserves: Is this actually a problem AI is suited to solve, and does this organization have what it takes to support the solution over time? I have watched that question get skipped more times than I can count. The investment thesis gets built around what the model can do in a demo environment, and by the time the fit between the model and the actual problem becomes clear, the budget is already committed and the team is already building.

There are three things worth examining honestly before that happens. The first is whether the model can genuinely do the job at the scale and accuracy the business actually requires. Accuracy thresholds sound like a technical detail but they carry real financial weight. If the business needs 98 percent accuracy and the model reliably delivers 85, the human review layer required to catch and correct the gap will often cost more than the manual process the AI was supposed to replace.

Inference cost compounds that further. The true cost of an AI output includes not just tokens and compute but the ongoing engineering attention the system requires to stay functional. That number has to be meaningfully lower than human labor at production volume, not just at pilot scale. The scalability question is the one most sandboxes never answer honestly. A model that performs well on clean, bounded data in a controlled environment will frequently encounter the edge cases of real-world production and behave very differently.
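The accuracy-gap and inference-cost arithmetic above is easy to make concrete. A sketch with purely illustrative numbers (none come from the article), under a deliberately simplified review model:

```python
def ai_unit_cost(inference_cost, model_accuracy, required_accuracy, review_cost):
    """Effective per-item cost of the AI path once humans must close the
    accuracy gap. Simplified: assumes only the gap fraction needs paid
    correction; real review layers often inspect far more than that."""
    gap = max(required_accuracy - model_accuracy, 0.0)
    return inference_cost + gap * review_cost

# The business needs 98% accuracy, the model delivers 85%, and a human
# correction costs $2.00 per item. Inference itself is $0.05 per item.
ai = ai_unit_cost(inference_cost=0.05, model_accuracy=0.85,
                  required_accuracy=0.98, review_cost=2.00)
manual = 0.30  # per-item cost of the fully manual process
print(f"AI path: ${ai:.2f}/item, manual: ${manual:.2f}/item")
```

With these hypothetical numbers the AI path already costs more per item than the manual process it was meant to replace, before a cent of ongoing engineering attention is counted.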

Whether the organization can actually support what it is proposing to build is the second and often the most uncomfortable set of questions. Data ownership sits at the center of it. A project that depends on a third-party data stream the organization does not control, or on data that lacks the cleanliness the model requires to perform, is carrying a foundational risk that no amount of engineering will resolve.

Integration complexity belongs in the same conversation. A high-performing model that cannot connect to existing systems without a custom middleware project that costs more than the value being generated is not a solution; it is a different problem. And the internal talent required to keep the system from drifting over time is the dimension that gets the least scrutiny during approval and the most attention eighteen months later when something starts to go wrong and nobody knows how to respond.

The third area is whether the business will actually accept and sustain the outcome, which is a different question from whether the technology works. In regulated industries, any model that cannot produce a clear audit trail for its decisions should not survive an early review, regardless of its performance metrics. Time to measurable signal matters because a project that cannot demonstrate proof of value within ninety days is asking for extended runway without evidence. That is how pilots quietly become permanent operational commitments.

Whether the capability is genuinely defensible is worth asking early. Spending significant capital to build something a competitor can replicate with the same off-the-shelf API and a week of engineering time is not innovation; it is an expensive way to achieve parity. And the people who are supposed to use the output have to actually trust it. A model that performs well technically but that underwriters, analysts or customers refuse to rely on has failed regardless of what the benchmark numbers say.

Working through these questions before the build decision gets made does not eliminate risk. But it shifts the conversation from what we could build to whether we are actually set up to build it well and sustain it honestly.

Governance proportional to risk

Assuming the diagnostic holds up and the case for building is genuine, the next question is what kind of governance the investment actually needs. Most organizations default to a single approach regardless of what they are building. That default is its own category of mistake. A speculative revenue experiment and a core operational system are not the same kind of bet. Treating them with the same oversight model will either strangle the experiment with bureaucracy or expose the core system to risk it was never designed to absorb.

The situation should determine the framework, not the other way around.

When an organization is exploring genuinely new territory, such as testing an AI-driven revenue stream or a product capability that has no internal precedent, the governance model needs to be tight at the front and earn its way to freedom. Room without gates is how speculative projects consume eighteen months of runway without producing anything the business can point to. What works better is a short initial window to prove the basic math, a defined accuracy threshold that has to be cleared before real-world data enters the picture, and a clear escalation path from shadow environment to full integration. Each stage gets more autonomy because each stage has earned it.

When the goal is modernizing internal operations, the governance question shifts. The risk profile is different because the organization is not exploring unknown territory; it is trying to do something it already does, but more efficiently. In these situations, the burden of proof moves away from accuracy and toward data. A model being trained on proprietary internal data to automate a known workflow is only as good as the data it runs on. Tight monitoring on error rates early, a clear standard for data sovereignty before any custom model work begins, and meaningful gates around the removal of manual steps are essential. The leeway expands as the evidence of process improvement accumulates, not before.

When the primary concern is margin protection on high-volume transactions, the economics have to be the governing logic from the start. The question is not whether AI can perform the task but whether the cost of AI performing the task stays below the cost of human labor at the volume the business actually runs. That calculation needs to be established as a baseline before build begins and monitored continuously afterward. Inference costs do not always scale linearly. A model that is economically viable at pilot volume can become a hidden tax on every transaction at production volume. The governance here is financial rather than technical. If the margin math stops working, the project stops regardless of how technically impressive the solution is.
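That margin math can be expressed as a break-even check that runs continuously rather than once at approval. The tiered pricing function and every figure below are hypothetical, purely to show how viability at pilot volume can invert at production volume:

```python
def ai_viable(volume, ai_cost_per_txn, human_cost_per_txn, eng_overhead):
    """True if the AI path is cheaper than human labor at this volume.

    ai_cost_per_txn is a function of volume, because inference pricing
    does not always scale linearly. eng_overhead is the ongoing
    engineering attention, modeled here as a fixed monthly cost.
    """
    ai_total = eng_overhead + ai_cost_per_txn(volume) * volume
    human_total = human_cost_per_txn * volume
    return ai_total < human_total

def inference_cost(volume):
    # Hypothetical tiered pricing: cheap at pilot scale, far more
    # expensive at production scale (larger contexts, retries, rate tiers).
    return 0.02 if volume <= 10_000 else 0.12

print(ai_viable(5_000, inference_cost, 0.10, 300))    # True: viable at pilot volume
print(ai_viable(500_000, inference_cost, 0.10, 300))  # False: a hidden tax per transaction
```

If a check like this flips to False in production, the governing logic described above applies: the project stops, however technically impressive the solution.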

The most complex governance situation is the one where an organization needs to manage immediate operational pressure and longer-term strategic bets at the same time. The temptation is to treat everything with the same urgency, which often means that immediate fixes consume the bandwidth that strategic work requires. Separating these explicitly, with different oversight cadences, different capital thresholds and different definitions of success for each horizon, is what allows an organization to fix what is broken today without sacrificing the position it is trying to build for the future.

Final perspective

There is a version of this conversation that treats AI governance as a compliance exercise: A set of controls designed to slow things down and protect the organization from its own enthusiasm. That framing misses the point. These frameworks are not brakes. They are the difference between capital that compounds and capital that quietly drains away while everyone is focused on the technology.

The organizations that navigate this well share a few things in common that have nothing to do with the sophistication of their models or the size of their AI budgets. They have technology leaders who are willing to kill a project when the evidence stops supporting it. This sounds obvious but is genuinely rare when a team has been building for six months and the sunk cost is visible. They have CFOs and boards who understand that a well-governed AI portfolio will have failures in it, and that those failures are not evidence of a broken process but evidence that the process is working.

The organization I described at the beginning of this piece did not fail because they chose the wrong AI approach. They failed because they chose AI for a problem that did not require it. That was a governance error that happened before a single line of code was written. Getting the category right matters more than getting the model right.

Knowing which kind of problem you have before deciding which kind of solution to reach for, and then governing the investment in proportion to what you actually know, is what separates organizations building an advantage that holds from the ones already filing an AI post-mortem under things that did not work out.

This article is published as part of the Foundry Expert Contributor Network.


How UKG puts AI to work for frontline employees

6 May 2026, 07:00

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves around AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets and our long history with the frontline workforce position us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user would own the business context, and the developer, who owns the technology, would bring that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; however, they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.


Countdown to submit nominations in Spain for the CIO 50 Awards

5 May 2026, 12:31

Once again, the benchmark awards return to distinguish the best information systems executives (CIOs) in Spain and the most innovative IT projects carried out in the country. The initiative, known as the ‘Oscars of the IT industry’, forms part of the global CIO Awards project through which the international publication CIO, from the Foundry publishing group, recognizes the work of top-level executives capable of driving valuable business results through digital leadership, strategic vision and technological innovation.

This time the awards arrive in Spain under the name CIO 50 Awards. The nomination period for the 2026 edition is open until May 29, and the awards ceremony will take place on October 8 in Madrid, as part of a major conference held in parallel and centered on the theme “Responsible technology leadership, resilience and digital governance in the Spanish context”. During the event, winners from previous editions and this year’s candidates will be able to share their success stories with other IT leaders, creating an invaluable peer-learning experience.

Who can participate

The CIO 50 Awards are open to CIOs and other technology executives/managers from companies, public administrations and non-profit organizations (NGOs).

Executives who enter the competition must operate at the highest level of technology and transformation strategy and execution, as the CIO 50 awards recognize leaders who set the direction of the organization, contribute to board-level decisions and influence large-scale technology investments. One requirement for nomination is that CIOs must have been with their current organization for at least one year.

Consultants, IT, software or hardware vendors, and market research or information services firms are not eligible for the CIO 50.

How the winners are chosen

As in previous editions, nominations will be evaluated by an independent jury that will assess aspects such as the challenges faced in the projects and the solutions implemented; the benefits and improvements achieved; the business impact (cost optimization, margin improvement, revenue growth); and the productivity gains and transformation of business processes enabled by IT.

The jury is made up of Fernando Muñoz, director of CIO Executive by Foundry; Esther Macías, editorial director of CIO and COMPUTERWORLD in Spain; the now-retired veteran CIOs José María Tavera, who led IT strategy at giants such as Telefónica and Acciona, and José María Fuster, who headed IT at Banco Santander and is now a trustee of the Fundación Real Academia de Ciencias de España; Dimitris Bountolos, CIIO of Ferrovial and winner of the CIO of the Year category at the 2025 edition of the CIO 100 Awards Spain; Gracia Sánchez-Vizcaíno, CIO for Iberia & Latin America at Securitas Group; Mar Hurtado de Mendoza, global vice president of recruitment at IE University and adjunct professor at the business school; and Patricia Arboleda, president of Women in Tech – Spain.

A local distinction with a global soul

The history of the CIO 100 and CIO 50 awards for excellence in enterprise IT goes back more than three decades, when they were first presented to executives in the United States, later expanding to other markets such as Germany, the United Kingdom, Spain, Singapore, Australia, South Korea and India.

It is a key initiative for recognizing achievements, sharing knowledge and connecting an influential community of IT decision-makers.

The publication CIO, from the Foundry group, currently has nomination processes open for the CIO 100 and CIO 50 awards in the following countries/regions:

  • CIO 100 USA (August 2026) – Application phase closed; conference registration is open here. More information
  • CIO of the Year Germany (October 2026) – Nomination deadline: May 15, 2026. More information
  • CIO 100 UK (September 2026) – Nomination deadline: May 21, 2026. More information
  • CIO 50 Spain (October 2026) – Nomination deadline: May 29, 2026. More information
  • CIO 100 India (September 2026) – Nomination deadline: June 5, 2026. More information
  • CIO 100 Australia (September 2026) – Nomination deadline: June 19, 2026. More information
  • CIO 100 ASEAN (November 2026) – Nomination deadline: July 27, 2026. More information
  • CIO 50 Japan (December 2026) – Nomination deadline: mid-August 2026.

Australia Forms Cyber Incident Review Board to Strengthen Defences After Major Breaches


Australia has announced the creation of a Cyber Incident Review Board, a move aimed at strengthening the country’s ability to respond to and learn from major cyberattacks. The initiative places Australia among a small group of jurisdictions globally that have formalised independent review mechanisms to assess significant cyber incidents and improve long-term resilience.

The Cyber Incident Review Board will conduct no-fault, post-incident reviews of major cybersecurity events affecting both government and private sector organisations. Rather than assigning blame, the board’s mandate is to identify systemic gaps and generate actionable recommendations to improve how Australia prevents, detects and responds to cyber threats.

Established under the Cyber Security Act 2024, the board is a central element of the government’s 2023-2030 Australian Cyber Security Strategy. The broader goal is to position Australia as one of the most cyber secure nations by the end of the decade, supported by resilient infrastructure, prepared communities and stronger industry practices. Officials said the Cyber Incident Review Board will focus on extracting lessons from incidents and translating them into practical steps that can reduce the likelihood and impact of future attacks.

Cyber Incident Review Board Brings Together Leaders From Across Sectors

The government has appointed a panel of senior cybersecurity and industry leaders to the Cyber Incident Review Board. The board will be chaired by Narelle Devine, Global Chief Information Security Officer at Telstra. Other members include Debi Ashenden of the University of New South Wales, Valeska Bloch from Allens, Jessica Burleigh of Boeing Australia, Darren Kane from NBN Co, Berin Lautenbach of Toll Group and Nathan Morelli from SA Power Networks. The group brings experience across cybersecurity operations, legal frameworks, governance, national security and critical infrastructure. Authorities said this mix is designed to ensure independent, credible advice that reflects both technical and policy realities.

Government Emphasises Learning Over Blame

Australia’s Minister for Cyber Security Tony Burke said the Cyber Incident Review Board will play a key role in ensuring continuous improvement in national cyber defence. “We know that cyber attacks are constant. This guarantees we learn from every attack and keep increasing our resilience,” Burke said in a statement. He added that the board will examine major cybersecurity incidents, develop findings and provide recommendations that can be applied across sectors. The no-fault model is intended to encourage cooperation from affected organisations, while still producing insights that can benefit the wider ecosystem.

Response Shaped by Recent High-Profile Cyberattacks

The creation of the Cyber Incident Review Board follows a series of major cyber incidents in Australia, including breaches involving health insurer Medibank and telecom provider Optus. These events exposed sensitive customer data and triggered widespread public concern, increasing pressure on the government to strengthen cybersecurity oversight. By introducing structured post-incident reviews, authorities aim to ensure that lessons from such breaches are not lost and can inform future preparedness efforts.

How Australia’s Approach Compares Globally

Australia’s Cyber Incident Review Board aligns with similar efforts internationally but includes some distinct features. The European Union has established a comparable mechanism under its Cyber Solidarity Act, tasking the EU Agency for Cybersecurity with reviewing significant cross-border incidents. However, that framework has yet to be tested in practice. In the United States, a cyber safety review board has already examined several incidents, including a high-profile breach involving Microsoft. That report pointed to avoidable security failures and called for cultural and leadership changes within the company, prompting CEO Satya Nadella to prioritise security across operations. However, earlier U.S. reviews, such as those into the Log4j vulnerability and the Lapsus$ group, were criticised for lacking focus and impact. Analysts noted that broader, less targeted reviews made it harder to drive accountability or meaningful change.

Stronger Powers to Ensure Participation

One notable difference in Australia’s model is its ability to compel organisations to provide information if they decline to participate voluntarily. This marks a shift from the U.S. approach, which relied on cooperation from affected entities. Experts have argued that such powers could improve the depth and accuracy of findings, ensuring that the Cyber Incident Review Board has access to critical data when analysing incidents. At the same time, the framework stops short of allowing flexible expansion of board membership for specialised cases, an idea that has been suggested in international policy discussions.

Focus on Long-Term Cyber Preparedness

The Cyber Incident Review Board is expected to become a key mechanism in shaping Australia’s cybersecurity posture over the coming years. By systematically reviewing incidents and sharing lessons across sectors, the government hopes to build a more coordinated and resilient defence against evolving cyber threats. With cyberattacks continuing to target critical infrastructure, businesses and public services, the success of the Cyber Incident Review Board will likely depend on its ability to translate insights into measurable improvements across the national ecosystem.

The triple squeeze: Why the SaaSpocalypse story you’re hearing is missing the most dangerous part

5 May 2026, 07:00

In early February 2026, nearly $285 billion in market value evaporated from software and related sectors in 48 hours. Atlassian dropped 36% for the month. The iShares Software ETF fell more than 30% from its September 2025 highs. Traders called it the “SaaSpocalypse.”

The popular narrative goes like this. AI coding tools have gotten so good that customers can build their own software, so why pay for a SaaS subscription when an engineer can vibe-code a replacement over a weekend?

That’s the least interesting version of what’s happening. The real story involves three forces converging on SaaS simultaneously, creating a structural trap that puts hundreds of thousands of white-collar jobs at risk. The force that will decide their fate isn’t AI. It’s a spreadsheet in a private equity office.

Force #1: AI isn’t replacing your product. It’s replacing the problem your product solves

Most enterprises won’t rebuild their tech stack with vibe coding, because that’s not how large organizations work. The bigger threat is that AI agents are making entire workflow categories obsolete. Take a SaaS ticketing product. The threat isn’t a competing ticketing system built in-house, it’s that customers are deploying AI agents to handle support directly, rethinking the pipeline from scratch. The old system isn’t replaced by a better one. It’s replaced by a fundamentally different approach to the job.

Satya Nadella telegraphed this on the BG2 podcast in December 2024, saying business applications would “probably collapse” in the agent era because they’re “CRUD databases with a bunch of business logic,” adding that “all the logic will be in the AI tier.”

The data backs him up. Gartner forecasts worldwide AI spending will hit $2.5T in 2026, up 44% YoY, while overall IT budgets grew ~10%. That money is coming from other budgets. Average SaaS apps per company dropped 18% between 2022 and 2024 (BetterCloud). Among large enterprises, 82% are actively reducing vendor count (NPI Financial). Even companies not directly losing customers face fewer new purchases, slower expansions and harder renewals, because buyers are looking somewhere else.

Force #2: The $440 billion leverage trap

Between 2015 and 2025, private equity acquired more than 1,900 software companies in deals worth over $440 billion. The thesis was elegant. Sticky recurring revenue, high margins, predictable cash flows and high switching costs, all perfect for leveraged buyouts. It worked brilliantly for a decade. Then it stopped.

  • The setup (2020-2022). Public SaaS traded at a median 18x revenue in 2021 (Asana touched 89x). PE paid premium multiples with enormous debt. Anaplan went to Thoma Bravo for $10.4B. Coupa sold for $8B with $4.5-5B in leverage. Zendesk went private for $10.2B backed by ~$5B in private credit.
  • The collapse. By late 2025, the median public SaaS revenue multiple had fallen to 5.1x, over 70% below peak. Private software M&A multiples dropped below 3x in 2024.

Here’s the math. A PE firm buys a $100M-revenue SaaS company in 2021 at 8x ($800M), financing 40% with floating-rate debt, a $320M loan at SOFR plus 500 bps. The initial rate runs 5-6%. After Fed hikes, about 10%, or $32M annual interest. Then the multiple collapses. Even if revenue grows to $120M, at 2-3x the business is worth $240-360M. The loan is $320M. Equity sits somewhere between negative and barely positive.
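The arithmetic above can be checked in a few lines. These are the article’s illustrative figures, not data from any specific deal:

```python
# Illustrative leverage math from the text: a 2021 buyout at 8x revenue,
# 40% floating-rate debt, then a multiple collapse to 2-3x.
revenue_at_buyout = 100.0                       # $M annual revenue in 2021
entry_multiple = 8.0                            # 8x revenue
purchase_price = revenue_at_buyout * entry_multiple   # $800M deal
debt = 0.40 * purchase_price                          # $320M floating-rate loan

interest_rate = 0.10                            # after Fed hikes: SOFR + 500 bps ~ 10%
annual_interest = debt * interest_rate          # $32M/year in interest

revenue_today = 120.0                           # revenue grew to $120M anyway
for exit_multiple in (2.0, 3.0):
    enterprise_value = revenue_today * exit_multiple
    equity = enterprise_value - debt            # what the PE firm's stake is worth
    print(f"{exit_multiple:.0f}x: value ${enterprise_value:.0f}M, equity ${equity:.0f}M")
```

At 2x the equity is $80M underwater; at 3x it is barely positive, which is the trap the article describes.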

This isn’t hypothetical. Wells Fargo now uses “keys handover” for cases where PE hands underwater portfolio companies to lenders. A record $25B of software leveraged loans trade below 80 cents on the dollar. Total tech distressed debt sits near $46.9B. Apollo cut its software exposure nearly in half during 2025.

When equity is underwater, PE has two choices. Walk away or shift into margin-maximization mode by cutting headcount, consolidating and extracting cash.

Force #3: AI is the cost-cutting weapon PE has been waiting for

Here’s the cruel irony. AI is killing revenue, the debt still needs servicing and AI is also the most powerful cost-cutting tool ever handed to a PE operating partner.

Most SaaS employees are white-collar knowledge workers, including engineers, PMs, marketers, customer success, sales, support and analysts: precisely where AI is making its fastest inroads. Anthropic’s research found AI-exposed workers earn 47% more on average and are nearly 4x as likely to hold a graduate degree. Stanford Digital Economy Lab and Dallas Fed research shows employment among 22-25-year-olds in AI-exposed roles fell 13-16% between late 2022 and mid-2025, and nearly 20% among young software developers.

Wall Street has picked its side. When Atlassian announced 1,600 layoffs (10% of workforce) to fund AI investment, the stock rose. When Block cut 4,000 jobs and Jack Dorsey said, “a significantly smaller team, using the tools we’re building, can do more and do it better,” the stock surged over 20%.

PE is moving too. Anthropic is reportedly in talks with Blackstone, Hellman & Friedman and Permira on a JV to embed Claude across portfolio companies. OpenAI is in parallel talks with Advent, Bain, Brookfield and TPG. Blackstone alone manages $1.3T+ across manufacturing, healthcare, real estate and financial services. Many licenses those companies cancel will belong to SaaS firms in other PE portfolios. As CNBC put it, “Private equity built the SaaS installed base. It may also be the one that rips it out.”

The loop closes. AI slows revenue, valuation collapses, debt becomes unsustainable and PE uses AI to cut headcount to service it. That’s the Triple Squeeze.

So, what can you actually do?

  • Assess exposure across three dimensions. First, your company. Is it PE-owned, and what vintage? Deals done at peak 2021-2022 valuations with heavy leverage are most precarious, and PitchBook or Crunchbase will tell you. Second, your role. Cost center or revenue engine? When growth stalls, PE defaults to margin maximization, and G&A, parts of marketing, internal tools and legacy product teams are vulnerable. Third, AI itself. How automatable is your day-to-day? If your core workflow is routing information, synthesizing documents or managing processes, the timeline is shorter than you think.
  • Supersize your T-shape. AI’s Achilles’ heel is scarce context. It doesn’t know your customers, your industry or why that one integration keeps breaking. Widen across adjacent roles while deepening your core with AI. Engineers can learn PM, UX and AI-assisted QA. Marketers can automate operational work with agents and build AI creative pipelines. Become an AI multiplier, someone who directs these tools with cross-functional judgment they can’t generate alone. If your employer isn’t giving you enough exposure, don’t wait. Vibe-code a side project. Pressure-test a financial model against your usual approach.
  • Build reputation while you still have a platform. Write publicly, contribute to communities, ship open source. Individual brand is a hedge against rising company-level risk, and far easier to build while employed than while competing with thousands of displaced workers.
  • If exposure is real, move early and deliberately. A wave of PE-backed SaaS layoffs would flood the market with experienced workers chasing a shrinking pool of roles. Those who fare best move while they can still be selective. But “move” doesn’t mean jumping to the first company with AI in its pitch deck. Apply the same structural thinking. Look for durable revenue, a real plan for AI-native competition, and profitability or a credible path.

The bottom line

The SaaSpocalypse narrative everyone’s debating, whether AI coding will kill SaaS, is a sideshow. The real story is financial, structural and already in motion.

Private equity spent a decade and $440 billion buying up software on a thesis that just broke. The debt doesn’t care about AI timelines or market sentiment. It comes due regardless. The only variable PE can control now is cost, and AI just made that variable dramatically easier to cut.

If you work in this industry, especially at a PE-backed company, it’s time for clear-eyed assessment of your exposure before the math makes the decision for you.

This article is published as part of the Foundry Expert Contributor Network.


The risk of AI agents for the CIO: the loss of perspective

May 5, 2026, 04:31

The scenario is not hypothetical: some of the companies that went furthest in replacing people with AI have had to reverse part of the journey. For the CIO, that mismatch is especially relevant, because agents are redesigning how IT detects problems, decides, and responds. And because that complete perspective is exactly what senior management and other areas need someone to put on the table.

1. The Klarna case: when the AI opportunity obscured reality

In 2024, Klarna became a European reference for what AI could do for a company. Its AI assistant came to handle two-thirds of customer service chats in its first month, doing work equivalent to that of 700 full-time agents. As a result, the company froze hiring, and headcount fell from about 5,000 to 3,800 employees. Barely a year later, the CEO himself admitted that the company had gone too far in replacing people with agents, to the detriment of service and product. In fact, the company reversed course: it rehired human agents to ensure customers could always speak with a person.

The interesting point is not to read this as AI having failed. The problem was different: understanding the customer service function purely in terms of productivity and costs, without seeing the whole. Measured by response times and FTE equivalents, the automation was optimal. Measured by satisfaction, perceived quality, and the ability to resolve complex cases, the result was different, and it ultimately forced a retreat.

2. The lesson that matters: AI redesigns how a function is delivered

It is tempting to read Klarna as a customer service story. But the pattern affects any business function. Introducing AI agents is not adding just another tool: it reorders decision-making, day-to-day learning, and, ultimately, how the service is delivered.

If you think only in terms of productivity (that is, what gets automated, how much is saved, how many FTE equivalents are freed), it is easy to lose sight of the deeper implications. It is easy to discover too late that what is being delivered is no longer the same, even if on paper more is being produced.

This is hard to see at first. A function can be performed worse and still show better operational metrics for months. The consequences surface in other areas, far from the function that was automated: in reputation, in lost customers, or in poorly made decisions.

3. In IT, with agents, the effect is more intense

In the CIO’s territory, this pattern appears earlier and more forcefully. When an agent stops being an assistant that helps and starts to intervene, the changes arrive. For example, it conditions which alerts reach the team, which code changes are proposed, or which incidents are prioritized. This goes beyond speeding up work: it determines what the team sees or stops seeing, and it shifts the space where decisions are made.

In other words, agents do not merely execute. They change how problems are detected, how the organization responds, and even how it learns. If this phenomenon is evaluated solely with performance metrics, you run exactly Klarna’s risk in-house: gaining speed and losing perspective.

4. The paradox: more capacity to act, less direct visibility

Hence the paradox many IT leaders are beginning to notice. The organization can act faster, deliver more volume, and automate more decisions, while at the same time losing touch with the complexity of reality.

Before, a support team learned not only by resolving incidents, but by seeing where integrations failed, or which user behaviors revealed a deeper problem. If that work passes through automated systems, the organization may keep resolving, but employees lose that learning path.

The risk the team runs is that AI works well enough to push out of the foreground the knowledge and capabilities about how a business unit should operate.

5. The CIO, carrier of the complete perspective inside and outside IT

This is where the CIO’s role truly changes. It is not just about being responsible for adopting agents and automating processes intelligently. The CIO becomes the person who provides, inside and outside their area, the complete reading of what AI does to a business function. That means going beyond productivity gains to contribute the aspects that go unseen, such as experience, the business perspective, and the changes in how a service is delivered, whether to the employee or to the customer.

That perspective is highly valuable both to general management and to other areas such as operations, customer service and, of course, Human Resources. In the current context, with continual announcements of workforce reductions, the conversation tends to stop at cost and time savings. The CIO is well positioned to contribute the other part: where solid oversight should be maintained, what can be delegated to AI, and where it is necessary to plan for the possibility of reversing an automation that, on paper, would work.

That ability to reverse is, in fact, one the organization cannot afford to lose. Not every organization can recover capabilities as quickly as they are lost.

6. The mission: understanding and communicating delegation to AI agents

The mission, therefore, is not to slow down AI or to distrust agents on principle. It is to bring clarity about what can be delegated and what should not be ceded without losing the capacity to intervene. In some cases the answer is clear: repetitive tasks, initial triage, draft generation, or technical search. In others the boundary is more delicate: prioritizing risks, deciding exceptions, changing legacy systems, or acting on processes without sufficient oversight.

That will be one of the most relevant services in the CIO role over the coming years. Beyond advancing agent adoption, the CIO will have to provide, inside and outside IT, that necessary reading of the impact of agents on a business function. And, finally, preserve the ability to reverse course when what should be delivered is not, however good the metrics look.

All of this points in one direction: the CIO role is widening beyond the purely technological. Governing AI well demands new capabilities, ones we are only beginning to name and that will make much of the difference in the coming years. We will devote the next columns to them.


CIOs rethink IT’s operating model to deliver better business outcomes

May 4, 2026, 07:01

The IT department at Unum Group had a product management structure and worked in an agile delivery model.

This operating model gave IT teams and the company wins by rapidly delivering what the company calls “investment capabilities” aligned to the business.

But Shelia Anderson, who became executive vice president and chief information and digital officer in May 2025, saw room for improvement. She wanted to fine-tune her department’s operating structure to ensure investments deliver returns.

“There wasn’t a great correlation between those investments and that value recognition. So the part of it that needed some work was achieving value recognition in the business and making sure there was accountability for that,” Anderson says.

She also wanted accountability for improving time-to-value for investments, whether that value stemmed from productivity gains, improving customer experiences, or some other objective.

To do that, Anderson adopted a value stream model to analyze and optimize the end-to-end experience across each value chain within Unum Group, a provider of workplace benefits and services, including disability insurance, life insurance, and supplemental health products.

With that model shift, the company’s product-based approach became a business-owned one. Each value stream is now owned by a business leader, with the product management team associated with that value stream now reporting to that owner.

The core IT team assigned to each value stream includes a customer experience professional, a data lead, and an architecture lead. The team uses agile practices to deliver products along with product improvements, as products are part of the value stream.

“In insurance we have a lot of processes; products are within the larger processes,” Anderson explains, “and you have multiple processes that sit within a journey, so most of our products are within a value stream.”

Furthermore, the value stream model facilitates IT and business function collaboration to “make decisions around what we’re solving for and what are we going to deliver in this round of iteration,” she says. It also enables Unum Group to deliver end-to-end process improvements and change management as part of the deliverables, and measure the value delivered, she adds.

“The value stream concept is truly wrapping those all together,” Anderson says.

Rethinking the IT operating model

Anderson’s overhaul of the IT strategy at Unum Group showcases how CIOs are rethinking the IT operating model.

Many IT leaders are moving from traditional silos — security, development, support, etc. — to groupings designed around products, value streams, journeys, and customer lifecycles.

And they are doing so for several reasons, says Amar Aswatha, senior vice president for global business engineering at consulting and services firm CGI. To start, they have found that IT isn’t getting the business what it needs when it needs it. They have also found that IT’s work costs too much — “efficiency is slow as is productivity,” Aswatha says. And they are coming to realize that IT as traditionally configured can’t keep up with the pace of technology change and innovation.

As a result, CIOs are finding that a traditional structure focused on outputs (the delivery of a project, for example) instead of outcomes (for example, a specific, measurable improvement in business productivity) won’t get the business where it wants to go.

“So today CIOs are thinking about how to build a level of adaptability and agility in their operations model, and they’re thinking about how to build an organization that is continuously learning what’s working, where are the bottlenecks and points of frictions, and how they can get earlier signals on what’s not working so they can make adaptive changes,” says Fiona Mark, principal analyst at Forrester Research.

Ken Spangler, an instructor at Carnegie Mellon University’s CIDO Program, sees a trend toward organizing IT operations in a hybrid centralized-federated structure organized around products, domains, or capabilities. Here, there are centralized platforms, but enablement of those platforms is federated. For example, IT creates and maintains AI as a platform (the centralized component) but has IT working in business-facing teams to enable the various uses of the AI platform to deliver business value (the federated part).

The centralized IT work includes engineering and security teams, Spangler says. Product teams, which include product owners and business roles, support the federated work. CIOs ensure there’s a governance structure to manage both sides.

“In the AI era, it’s about product, platform, and governance: product for speed, platform for scale, and governance for control and risk management,” adds Spangler, who formerly served as executive vice president and CIO of FedEx Global Operations Technology.

Making the shift

For Anderson, Unum’s shift to a value stream model has required “a reimagining for roles” within the IT department as well as the expectations for those roles, she says. It also has IT teams thinking about the next evolution of agile and how they’ll use it to improve the work they’re aiming to do.

The possibility of sharing components also gives Anderson the opportunity to leverage that centralized-federated approach within the value stream model. “There are some components of a value stream that could be shared,” she notes, pointing to the company’s data layer and integration layers that also exist to enable and support the products that are part of the value stream.

At Unum, teams are no longer static, with some IT workers assigned to multiple value stream core teams, Anderson says. To support IT professionals working in this new value stream model, Anderson will likely also adopt a chapter model, where IT employees are organized by discipline. That way chapter members who work on different value stream teams can come together to develop skills, foster expertise, define standards, and advance their careers.

So far, Unum’s shift to a value stream model has been incremental, with the first iteration having been completed in the first quarter of 2026. Still, Anderson is confident that the move will yield benefits.

For example, having a single value stream owner creates a higher level of accountability for ensuring investments deliver value.

“They know the north star of the value stream,” she says, adding that this empowers value stream owners to make quicker, better decisions around starting, continuing, or stopping investments. The model works well with a persistent funding approach as well as using metrics and score cards for measuring benefits and ROI. All that in turn helps teams “have a clear understanding of the value stream and what’s expected.”

“It’s truly shifting culture, so there is a clear structure around what result do we want, what is the process change, what’s the change needed with people, and then what’s the technology that’s needed. It’s getting the right people in the room to make those decisions,” Anderson says. “It’s truly business and technology at the table doing that design.”

Getting to the big picture

After starting as CIO of Tungsten Automation in February 2025, Shelley Seewald restructured her IT department into three components: business operations, enterprise IT delivery, and IT operations. The IT department had had a traditional operating model structured around technology, with a Salesforce team, a financial systems team, and the like.

Seewald felt a shakeup in how IT operates would move the organization away from being order-takers, which she says leads to an “inefficient and ineffective use of technology.” Her new structure breaks down silos and allows IT to “see the bigger picture, to see the ecosystem, to connect dots, and to spot opportunities,” she says.

Now each IT delivery team is aligned to a commercial organization (sales, marketing, customer support, etc.) or a back-office function (finance, legal, HR, etc.).

“The teams meet with them weekly, prioritize work, learn the business,” Seewald says. “This allows us [the IT department] to be a really good technology partner. We’re there to understand the business first and then we offer them AI or technology solutions to help them reach their goals.”

IT operations is its own group comprising networking, help desk, and the like, Seewald adds. But even IT operations is expected to know the business side of the house. “They don’t align with a business per se, but we have them meet with IT delivery teams so they know what’s happening with the business as well and so they know about product introductions, new offices, and such.”

Challenges to tackle

Many CIOs have yet to move their IT operations from a conventional structure to one focused on products or value streams, says Rob Holbrook, principal of technology strategy and architecture with professional services firm Slalom.

The Global Tech Agenda 2026 from consultancy McKinsey & Co. reports that only about “one in ten top-performing companies have fully adopted product and platform models across all teams, which is more than four times that of other organizations. And nearly half of these companies indicate that at least half of their teams now operate this way.”

Such figures are not surprising, given the challenges that come with shifting how an organization operates.

For a shift to work, Holbrook says some CIOs and their teams must cultivate a true product mindset where IT leaders and workers have a clear understanding of the product IT delivery model.

CIOs also need to put in place a strong governance program to guide product teams, the business units, and the IT department in how to successfully work under such a model, Holbrook says. “They have to learn how to navigate it, and how to get needs prioritized,” he adds.

And CIOs should ensure that the focus on products and business outcomes doesn’t allow back-office needs to slip through the cracks, lest they end up with shadow IT filling those gaps, Holbrook says.

Powering growth

Julie Averill says moving to a modern IT operating model can produce significant value for an organization.

Averill modernized the IT operating model at Lululemon while working as executive vice president and global CIO from 2017 to 2025. She transitioned the department to a product model, a shift that moved IT workers away from delivering initiatives and into teams that owned products and the outcomes those products were meant to generate.

“The business, management, and the product teams were all aligned along a mission and an outcome,” Averill explains. “The goal was to keep these teams together to work on outcomes but also have enough elasticity for team members to move to other products as business objectives changed.”

In this structure, Averill had centralized platform teams supporting shared infrastructure and capabilities, such as infrastructure, networking, and security. These teams, she notes, “became internal service teams.”

Averill, now CEO of Gold Thread LLC and author of the book Chief Impact Officer, says changing the way IT operates “was an exercise in leadership.” It required convincing executive colleagues and business teams that the collaboration contributions they’d have to make and the new ways they’d have to fund product work would produce better results for the company. And it required hiring “technology-minded businesspeople and business-minded technologists who can understand and speak to the business but also can talk tech.”

Averill also says she needed to create professional communities, such as an engineering community, to support skills, standards, and a positive career experience for IT workers.

But the work was worth it, she says, crediting the changes she made while CIO with helping Lululemon grow from $2 billion to more than $10 billion in annual revenue over eight years.


The era of 28 million AI agents worldwide: enterprise competitiveness hinges on infrastructure

May 4, 2026, 05:14

According to IDC, more than 28 million AI agents had been deployed as of the end of last year, and by 2029 more than 1 billion are expected to operate in production environments, performing 217 billion tasks per day.

“Building an AI agent proof of concept is easy,” said Venkat Achanta, chief technology, data and analytics officer at TransUnion, the global credit reporting company with $4.6 billion in revenue. “But controlling it, securing it, and scaling it is an entirely different challenge.” The difficulty, he explained, is even greater in heavily regulated industries such as financial services and healthcare.

To address this, TransUnion spent the past three years building its agentic AI platform, OneTru. The goal was an environment as reliable and predictable as traditional rule-based expert systems, yet as flexible as generative AI and as easy to use as a chatbot.

The key was combining the strengths of both approaches. Core tasks where explainability and stability matter are handled by traditional systems, while generative AI is applied in a limited way to specialized tasks. Because no infrastructure on the market could deliver this, TransUnion invested roughly $145 million to build it in-house.

It was a large bet on unproven technology, but it has already yielded about $200 million in cost savings. Going further, the company has built customer-facing solutions on top of the platform.

Most notably, in March this year TransUnion unveiled its AI Analytics Orchestrator Agent, built on the OneTru platform using Google’s Gemini models. The agent is used to improve internal analytics efficiency, and it also lets customers perform advanced data analysis without a data scientist.

“Many customers use TransUnion data without using our other solutions or platforms,” Achanta said. “The orchestrator agent can raise the value of that data and potentially create new revenue streams.”

More agents are now in development. “What determines an agent’s performance is the orchestration, governance, and security layers,” Achanta stressed. “Simply building an agent takes days; the foundation and controls that let it run reliably are the real competitive edge.” He added, “Agents on the platform are designed to use all of its guardrails and foundations, and that is our strength.”

A core strategy for keeping AI agents under control is to split work into multiple layers and assign each layer to a different system. Each system operates under defined constraints, which limits the blast radius of any individual agent and builds checks and balances into the overall system. High-risk tasks are also assigned to pre-generative-AI technology to reduce risk.

At TransUnion, core decisions are made by an upgraded expert system. It operates on clearly defined, auditable rules and is predictable, cost-efficient, and low-latency. When a novel situation arises, an LLM analyzes it, another agent converts the analysis into a new rule, and a human reviews it before it is finally added to the expert system. Other agents handle roles such as interpreting the semantic layer or interacting with humans.
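A minimal sketch of this split, with hypothetical rule and case names rather than TransUnion’s actual OneTru code: deterministic rules decide known cases, novel cases are deferred for analysis, and a human signs off before any proposed rule enters the rule table.

```python
# Hypothetical sketch: auditable rules decide known cases; novel cases
# are escalated; new rules land only after human review.
RULES = {  # deterministic, auditable rule table (the "expert system" tier)
    "credit_limit_increase": lambda req: req["score"] >= 700,
    "address_change": lambda req: req["verified"],
}

review_queue = []  # novel cases awaiting LLM analysis and human sign-off

def decide(kind, request):
    rule = RULES.get(kind)
    if rule is not None:
        return "approve" if rule(request) else "deny"  # predictable, low-latency path
    review_queue.append((kind, request))               # unknown situation: defer it
    return "escalate"

def promote_rule(kind, rule, approved_by):
    """A human reviews a proposed rule before it enters the expert system."""
    assert approved_by, "rules only land after human sign-off"
    RULES[kind] = rule
```

The design choice mirrors the article: the generative tier proposes, but only the rule tier, gated by a person, decides.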

“We put humans in the loop for the LLM, the neural reasoning layer, and automate the symbolic reasoning layer based on logic and machine learning,” Achanta explained.

When each agent operates under strict constraints, within limited data and a limited role, the overall system becomes far more controllable and reliable.

The analogy is a production line where many workers each perform their part, versus a workshop where a single artisan does everything. A production line works faster and more consistently, yet many companies today still run AI agents like artisans. That approach can produce creative results, but it is not always the right choice in an enterprise setting.

Nicholas Mattei, a professor at Tulane University and chair of the ACM Special Interest Group on AI, advised strengthening security at the junctions between agent systems.

“You have to secure every junction between systems,” he said. “For example, when an agent sends a request to an email service, you need a verification step, a checkpoint, between the two systems.” The boundary where hard-to-trust agents meet existing software, he stressed, is exactly where security controls should be concentrated.
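One way to picture such a checkpoint, as a hedged sketch with hypothetical action names: the agent never calls the email service directly, and every request passes through a gate that blocks unconfirmed destructive actions.

```python
# Hypothetical checkpoint between an agent and an email service.
DESTRUCTIVE = {"delete", "archive_all", "forward_external"}

def email_backend(action, args):
    return f"executed {action}"  # stand-in for the real email service

def checkpoint(action, args, confirm):
    """Gate agent requests before they reach the email backend."""
    if action in DESTRUCTIVE and not confirm(action, args):
        raise PermissionError(f"blocked unconfirmed '{action}'")
    return email_backend(action, args)

# A confirmation callback that denies everything stops destructive actions
# even if the agent itself would happily proceed:
deny_all = lambda action, args: False
```

Routine sends pass through; a `delete` with no confirmation is refused at the boundary rather than inside the agent, which is the point Mattei makes.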

Building a security foundation for agentic AI

In a survey released in March this year by automation vendor Jitterbit, 1,500 IT leaders named AI accountability as the most important factor in final AI adoption decisions. The concept covers security, auditability, traceability, and guardrails, and it ranked above implementation speed, vendor reputation, and even total cost of ownership (TCO). Security, governance, and data privacy risks also emerged as bigger obstacles to moving AI projects into production than cost or integration issues. The concern is well founded.

Earlier this year, researchers at cybersecurity firm CodeWall succeeded in breaching McKinsey’s new AI platform, Lilli. Using their own AI tools, the researchers said they gained access to 47 million chat messages, 728,000 files, 384,000 AI assistants, 94,000 workspaces, 217,000 agent messages, roughly 4 million RAG document fragments, and 95 system prompts and AI model configurations.

“Decades of McKinsey’s proprietary research, frameworks, and methodologies sat exposed in a database anyone could read,” the researchers noted. “The firm’s core knowledge assets were effectively unprotected.”

The cause was simple: of more than 200 public API endpoints, 22 were open without authentication. The researchers obtained read and write access to Lilli’s entire production database in just two hours. McKinsey responded immediately, blocking the unauthenticated endpoints and implementing additional security measures.

In an official statement, McKinsey said, “Our investigation, conducted with an external forensics firm, found no evidence that the researchers or any other unauthorized third party actually viewed client data or confidential information.”

IDC는 이번 사건이 AI 시스템 보안 침해가 기업에 얼마나 치명적인 영향을 미칠 수 있는지를 보여주는 사례라고 분석했다.

IDC AI 리서치 부문 부사장 알레산드로 페릴리는 “대부분의 기업은 여전히 데이터 유출, 잘못된 출력, 브랜드 평판 훼손 등 기존 관점에서 AI 리스크를 바라보고 있다”라며 “물론 중요한 문제지만, 더 큰 위험은 AI 시스템에 의사결정 권한을 위임하는 데 있다”고 강조했다.

에이전틱 AI 플랫폼에 대한 접근 권한을 확보할 경우, 공격자는 단순히 비인가 정보를 열람하는 데 그치지 않고 기업의 행동 방식 자체를 은밀하게 바꿀 수 있다. 또한 릴리(Lilli)와 같은 엔터프라이즈급 에이전틱 AI 시스템을 보호하는 것은 전체 과제의 절반에 불과하다. 가트너에 따르면 69%의 조직이 직원들이 금지된 AI 도구를 사용하고 있다고 의심하고 있으며, 이로 인해 2030년까지 40%의 조직이 보안 또는 규정 준수 사고를 겪을 것으로 예상된다.

Current detection tools, however, are not enough to identify AI agents, Gartner warns.

Swaminathan Chandrasekaran, head of KPMG’s global AI and data labs, which already runs thousands of AI agents, said, “If you asked how many agents are running inside your company right now, where would you even check? The infrastructure to confirm that they have all been onboarded and given an identity, that they went through proper authentication, and who manages them does not exist yet.”

He added, “The tools are only just emerging, or companies are building them themselves. That kind of framework is what will give CIOs peace of mind.”

Cases of individual employees adopting powerful agentic AI with unfortunate results are already public. Summer Yue, alignment director at Meta, recently decided to use OpenClaw, an open-source agentic AI tool, to manage her email. After it behaved well in a test environment, she put it to work on her real inbox.

“Even though I had set it to confirm before acting, I watched in shock as it wiped my inbox in an instant,” she wrote on X in February. “I couldn’t stop it from my phone, so I had to sprint to my Mac mini like I was defusing a bomb.”

In the past, the exposure stopped at employees pasting sensitive information into a chatbot, or having it draft a report and copying the result out. But as chatbots have grown into fully agentic systems, an agent can now do anything the user’s permissions allow, including reaching into corporate systems.

Rakesh Malhotra, who leads digital and emerging technology at EY, argued that to manage this new class of security risk, companies must move beyond traditional role-based and identity-based controls to ‘intent-based controls.’

“It isn’t enough to check whether an agent has permission to access a system and change data,” he explained. “You need to be able to verify why it is making that change.”

“Today’s observability systems don’t capture an agent’s intent,” he noted. “Trust comes from intent, but right now there is no way to measure it.”

“If a person set out to refactor an entire codebase, they would have to explain why,” he added. “Nobody should do that without a clear reason. With people we have ways to judge that; for agents, no such mechanism exists yet.”
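A minimal sketch of what an intent-based control could look like, assuming a hypothetical policy table of approved (action, intent) pairs; real implementations would need far richer intent capture than a string match:

```python
# Hypothetical intent-based gate: every agent action must carry a declared
# intent, and the (action, intent) pair must match an approved policy entry.
APPROVED = {
    ("refactor_codebase", "reduce duplication flagged by static analysis"),
    ("update_record", "correct address per customer ticket"),
}

def authorize(action: str, declared_intent: str) -> str:
    """Return 'allow', 'deny', or 'escalate' for an agent's requested action."""
    if not declared_intent:
        return "deny"          # no stated reason, no execution
    if (action, declared_intent) in APPROVED:
        return "allow"
    return "escalate"          # unrecognized intent goes to a human reviewer
```

The key design choice is that an unknown intent escalates rather than silently failing, preserving the human review loop Malhotra describes.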

Building a semantic data foundation for agentic AI

Venkat Achanta of TransUnion repeatedly stressed the importance of the ‘semantic foundation’ in the company’s OneTru platform. A semantic foundation helps systems understand not just what data is, but what it means and how it relates to other data. Gartner argues that building a semantic layer is now a mandatory task for any company adopting AI.

“A semantic layer is the only way to improve accuracy, manage costs, and dramatically reduce AI debt, while also aligning multi-agent systems and heading off costly inconsistencies,” Gartner says.

Gartner also predicts that by 2030 a universal semantic layer will become core infrastructure alongside data platforms and cybersecurity. KPMG’s Swaminathan Chandrasekaran said, “For agents to do meaningful work with data, context is essential. That is where the company’s knowledge lives.”

He added, “This is the company’s new intellectual property. Context is the new competitive edge.”
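As a toy illustration of what a semantic layer adds, the sketch below maps raw fields to documented meaning and relationships so an agent can resolve a business term to the right data. The field names and relationships are invented for the example.

```python
from typing import Optional

# Toy semantic layer: raw fields annotated with business meaning, units,
# and relationships to other fields.
SEMANTIC_LAYER = {
    "cust_ltv_usd": {
        "meaning": "customer lifetime value",
        "unit": "USD",
        "relates_to": ["cust_id", "order_total_usd"],
    },
    "cust_id": {"meaning": "customer identifier", "unit": None, "relates_to": []},
}

def resolve(term: str) -> Optional[str]:
    """Return the raw field whose documented meaning matches a business term."""
    for field, info in SEMANTIC_LAYER.items():
        if info["meaning"] == term:
            return field
    return None
```

Production semantic layers are far richer (ontologies, lineage, access policy), but the principle is the same: meaning and relationships live alongside the data, not in each agent's prompt.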

John Arsneault, CIO of US law firm Goulston & Storrs, said a solid data foundation is also a way to avoid vendor lock-in.

“Once you pour your data into a particular solution for workflow automation or agentic support, it becomes very hard to get back out,” he said. “Take a data-centric approach instead, and you can move flexibly to other solutions as the market changes.”

The firm has moved client-related work data into NetDocuments, a legal-specific document management system, and keeps its other data in Entegrata’s legal data lakehouse.

“Ultimately the goal is for every application to connect around this data lake,” Arsneault explained. “That way all of the firm’s data is consolidated into two environments, and we can put any AI tool we like on top.”

He added, “Managing data flows gets much easier, and we can respond quickly to whatever AI technology comes next. Whether it’s generative AI, agentic AI, or Anthropic-based technology, the pace of change is too fast to chase. Things genuinely look different every six months.”

Agent orchestration

With security guardrails in place and a usable data layer established, the final piece of the agent infrastructure puzzle is orchestration. Agentic AI systems require agents to interact with one another, collaborate with human users, and connect to a wide range of data sources and tools. It is a genuinely hard problem: the technology is advancing fast but remains early. The Model Context Protocol (MCP) is seen as a key part of the answer, and AI vendors have been notably cooperative around it.

Agustin Huerta, senior vice president of digital innovation and VP of technology at digital transformation firm Globant, said, “In the early days of social networks, when Facebook and Twitter discussed interaction standards, no company wanted to adopt a competitor’s protocol. Today everyone is evolving standards around MCP.”

Agent integration is not a solved problem, though. In a Docker survey of more than 800 IT decision-makers and developers, the operational complexity of coordinating multiple components emerged as the biggest obstacle to building agents.

Specifically, 37% of respondents said orchestration frameworks are still too unstable or immature for production, and 30% pointed to a lack of testing and observability in complex orchestration environments.

And although 85% of teams are aware of MCP, security, configuration, and management issues still block production deployment. Beyond these, enterprises face plenty of other integration challenges.

“One unsolved problem is a dashboard that gives you unified control and visibility over every agent,” Huerta pointed out. “There are tools to monitor OpenAI-based agents and tools to manage Salesforce-based agents, but no solution brings the telemetry for control, audit, and logging together in a single central dashboard.”

He added, “It isn’t a big problem when you run agents on one platform or are just starting out, but as the agent network grows, these limits really start to bite.” Globant is in fact building its own unified dashboard for agentic AI.

Meanwhile, Brownstein Hyatt Farber Schreck, a roughly 700-person law firm with clients across the US, is applying AI in areas including proposal generation.

Andrew Johnson, the firm’s CIO, said, “It used to take days to review a client RFP, go through handwritten notes or meeting records, and pull the relevant material together. Now we can feed everything into the system, extract the key criteria, and produce a high-quality draft in minutes.”

Several agents collaborate on the work: one extracts success criteria and staffing requirements, another analyzes past matters and lessons learned, another handles pricing and brand standards. “Each agent operates independently, but orchestration is essential so that each output feeds the next step,” Johnson explained. Because most existing systems lack an MCP layer, the firm currently relies on a RAG-based architecture.

Different AI models are also used for different tasks, which adds yet another orchestration element to manage.
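The multi-agent flow Johnson describes can be sketched as a simple sequential orchestration, where each agent's output becomes the next agent's input. The agent functions here are placeholders, not the firm's actual system.

```python
# Each "agent" is a function that enriches a shared state dict; the
# orchestrator's only job is to run them in order and pass results along.
def extract_criteria(rfp: str) -> dict:
    return {"criteria": [line for line in rfp.splitlines() if line.startswith("- ")]}

def add_precedents(state: dict) -> dict:
    return {**state, "precedents": ["matter-001"]}   # placeholder lookup

def add_pricing(state: dict) -> dict:
    return {**state, "pricing": "standard-rate-card"}

PIPELINE = [extract_criteria, add_precedents, add_pricing]

def run_pipeline(rfp: str) -> dict:
    state = rfp
    for agent in PIPELINE:
        state = agent(state)
    return state
```

Real orchestration adds branching, retries, and human checkpoints, but the core pattern is this chained hand-off.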

Cost management matters too: an AI agent that falls into an infinite feedback loop can send inference costs soaring.

“We’re aware of the possibility,” Johnson said. “It hasn’t actually happened yet, but we’ve built monitoring so we can respond immediately if a threshold is crossed.”
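A monitoring threshold of the kind Johnson mentions can be sketched as a budget guard that cuts off an agent loop once call-count or spend limits are exceeded. The limits below are arbitrary examples.

```python
class BudgetGuard:
    """Stops an agent loop once spend or iteration thresholds are exceeded."""

    def __init__(self, max_calls: int = 100, max_cost_usd: float = 5.0):
        self.calls = 0
        self.cost = 0.0
        self.max_calls = max_calls
        self.max_cost = max_cost_usd

    def charge(self, cost_usd: float) -> bool:
        """Record one model call; return False when the loop must stop."""
        self.calls += 1
        self.cost += cost_usd
        return self.calls <= self.max_calls and self.cost <= self.max_cost
```

An agent loop would check `guard.charge(call_cost)` after every inference call and break (or escalate to a human) the first time it returns False.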

For all these strategies, the pace of change around AI outstrips anything enterprises have experienced.

EY’s Malhotra said, “I’ve been in the technology industry for 25 years and I’ve never seen change like this. The fastest-growing companies in history have appeared in the past three to four years, and the speed of technology adoption is unprecedented.” He added, “There are plenty of cases where technology that was central just nine or ten months ago has already been left behind.”
dl-ciokorea@foundryco.com


The cloud migration fulfilling FC Bayern Munich’s AI ambitions

1 de Maio de 2026, 07:00

Management for Germany’s record-holding football championship team aims to optimize processes and provide new digital services using AI. Here, CIO Michael Fichtner discusses what the club’s IT department has implemented, and what advantages they’ll bring to the company internally, and to fans around the world.

Why did FC Bayern migrate to SAP Cloud ERP Private?

Migrating to the cloud gives us access to innovation and other developments. Some SAP services are only available in the cloud environment, so these are now accessible to us. An important aspect was the simplified integration of other technologies or services predominantly or exclusively provided as cloud services.

Another important aspect was the realignment within IT. The migration allows us to focus more on process, application, and business innovation, and therefore on topics that’ll further develop and future-proof our company.

The use of highly available cloud infrastructures also provides us with additional security since in critical situations, we’ll benefit from professional backup and disaster recovery strategies. With all the dedication our employees have shown so far, this will be a further step toward professionalizing operations and further reducing risks.

In addition to security, scalability and flexibility are always important to us. Computing power, storage, and network resources can be scaled more quickly with a cloud provider. This is particularly significant in the frequent peak situations of our business model. For our projects, new systems like sandbox, test, and POC systems can be deployed faster and in a more standardized way, without requiring any investment or new equipment. Plus, security and compliance are becoming increasingly important for us. So migration allows us to leverage our partner’s established security features, and centrally managed access and authorization concepts simplify our operations. Certified data centers also directly support us to meet regulatory, association, and official requirements.

SAP’s strategy is moving consistently toward the cloud, and migrating eliminated the risk of eventually being stranded on outdated on-premise technology. It also let us retire legacy tech and upgrade to modern, high-performance hardware.

How many applications or systems have been migrated to the cloud?

We migrated our multi-tiered SAP S/4HANA system. But before the migration, we worked together to consolidate our system landscape, merging 52 systems carrying fan data into S/4. There, the central fan database was established, the Golden Fan Record was built, and the data was combined into a redundancy-free, 360-degree view. So this approach was a significant milestone to implement our sovereign cloud strategy.

So we’ve only migrated one system physically, but in abstract terms, our phased approach allowed us to migrate data from all 52 systems to the cloud through consolidation, thus taking a big step toward controlled and consistent data sovereignty.

Which digital innovations does FC Bayern want to implement with the cloud?

Our business model is heavily influenced by peak situations like knockout phases in sporting competitions, live broadcasts, and special sales activities. In these situations, we need to not only scale technically, but provide innovative process solutions that reliably support peak loads.

Consider the short timeframes of ticket requests that must be processed during knockout stages. Or the launch of jerseys, where fans, even during peak periods, have the right to expect that goods will be delivered as quickly as possible. So in departments experiencing significant annual peaks in volume, it’s crucial employees receive highly automated support. Handling these seasonal peaks would otherwise be impossible.

We rely heavily on solutions supported by AI and digital agents, so developing them is always a joint initiative with our specialist departments.

What digital services and personalization strategies is FC Bayern planning to use to reach fans worldwide with the help of the new cloud platform?

Our aim is to address our fans in an individual, personalized way. The way forward is to move away from mass communication and large target groups or segments, and toward a personal approach, specifically tailored to the needs of each fan.

For this, we need the relevant data and ability to process large amounts of data in compliance with data protection regulations. This isn’t feasible without the appropriate infrastructure and scalability. We see personalized communication as a crucial element to remain relevant to our fans in the future. Mass mailings to fans via email, push notifications, or standardized content without specific relevance to the individual fan won’t help us remain attractive to them.

By providing targeted, relevant content, we want to further increase the attractiveness of FC Bayern Munich, and ensure the relationship with fans for the future.

What advantages do you expect from SAP Cloud ERP Private and AI?

A crucial factor in our decision to migrate was the conviction that we could significantly optimize our internal processes by using AI approaches. Specifically, we’re working on corresponding implementations in HR using SAP’s SuccessFactors and Concur. Initial approaches have also been developed and are being put in logistics and financial accounting. We expect this will allow us to increasingly automate more activities, freeing up colleagues in specialist departments to focus on specific tasks that require a particular approach or interaction. Ultimately, this will enable us to provide better service to fans as we gain time to address other issues.

What role did digital sovereignty or data sovereignty play in the decision to migrate to the SAP cloud?

Digital sovereignty, and control over our data and the data of our fans, have been of paramount importance for many years, and have guided our actions for just as long. Driven by this principle, we’ve developed and operated our key applications ourselves.

With the capabilities our partners have made available to us, we could implement these requirements in a sovereign cloud environment without compromising standards. So we’re confident we’ve not created any dependencies and will remain operational in the years to come. We’re convinced that the de facto and legal control of our critical data is sustainably ensured in our chosen setup.


Why smaller is smarter: How SLMs make GenAI operational and affordable

1 de Maio de 2026, 06:00

I have learned to treat small language models (SLMs) as less of a model category and more of a portfolio strategy. They are the pragmatic answer to a question leaders end up asking sooner or later: How do we scale GenAI across real workflows without turning inference cost, latency, data ownership and boundaries into a systemic risk?

The short answer is SLMs make GenAI operational. Frontier LLMs keep it capable; an appropriate multi-model strategy is required in the enterprise to run both responsibly.

What I mean by an SLM

When I say SLM, I am usually referring to two different things. They are related and mixing them leads to bad architecture decisions.

Model size is the mechanical part: Parameter count, memory footprint, compute requirements. It surfaces in questions like whether you can run inference on a single GPU, how unit cost changes as concurrency grows and whether latency holds as context grows. Size determines what is feasible to deploy and what it will cost to operate over time.

Operational intent is the part I care most about in an enterprise setting. I treat a model as a workflow component under tight constraints: Cost/transaction, latency, data boundaries and residency. This is also why agentic systems often benefit from SLMs. Many agent subtasks in production are repetitive and scoped, which makes it sensible to prefer specialist models for most calls and reserve frontier LLMs for the hard exceptions. A clear articulation of this viewpoint is in “Small language models are the future of agentic AI”.

I see operational intent split across two deployment contexts.

  • Enterprise workflows: The high volume, repeatable steps inside workflows. The model’s job is to turn messy inputs such as email, call transcripts or OCR into a structured object, then let deterministic checks decide whether to proceed, abstain or escalate.
  • On-device/edge: Where the constraints are even sharper. UX must be near instant, tolerate intermittent networks and, in some environments, keep data local by design.

In summary, size sets the ceiling; it determines what is feasible to deploy, what it costs to run at scale and where the model can run. Operational intent sets the standard; the right model may not be the most capable one, but the one that holds up under real workflow constraints, whether in business processes or on edge devices.
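The enterprise-workflow pattern above, where the model emits a structured object and deterministic checks decide whether to proceed, abstain or escalate, can be sketched as follows. `call_slm` is a stand-in for any small-model inference call, and the schema is invented for illustration.

```python
import json

REQUIRED = {"customer_id", "amount", "currency"}

def call_slm(text: str) -> str:
    """Placeholder for an SLM call that extracts a structured object."""
    return json.dumps({"customer_id": "C-17", "amount": 120.0, "currency": "EUR"})

def handle(text: str) -> str:
    """Deterministic checks decide the outcome, not the model itself."""
    try:
        obj = json.loads(call_slm(text))
    except json.JSONDecodeError:
        return "escalate"                # unparseable output goes to a human
    if not REQUIRED.issubset(obj):
        return "abstain"                 # schema incomplete: do nothing
    return "proceed"
```

The model only proposes; parsing and schema validation make the actual go/no-go decision, which is what keeps the failure surface contained.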

How small is “small”?

There isn’t one universal cutoff, but I use tiers to map infrastructure decisions.

  • Tiny (under 1B): Edge experiments and narrow tasks.
  • Core SLM zone (1B to 10B range): The sweet spot for workflow automation and on-device deployments.
  • Upper SLM (10B to 30B): Still small in some contexts, but serving costs grow with concurrency and long context.
  • Frontier LLM (Above 30B when disclosed or proprietary equivalents): The default choice for open-ended reasoning and long tail ambiguity, with correspondingly higher cost and governance overhead.
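The tiers can be collapsed into a simple lookup for infrastructure planning; the cutoffs are the ones above, while the function itself is just an illustrative helper.

```python
def tier(params_billions: float) -> str:
    """Map a disclosed parameter count to the tier names used in this article."""
    if params_billions < 1:
        return "tiny"
    if params_billions <= 10:
        return "core SLM"
    if params_billions <= 30:
        return "upper SLM"
    return "frontier LLM"
```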

Additionally, in an enterprise, I have seen two categories:

  • Open models are self-hosted, meaning you own the deployment, the infrastructure, operations and control.
  • Closed models arrive as API endpoints, shifting operational overhead to the vendor, but also the data boundary.

If you want an external, size-aware benchmark view for open models, the Hugging Face open LLM leaderboard is a useful reference point.

The decision framework

For workflows requiring open-ended research, deep multi-step reasoning or broad judgment, I would not recommend an SLM. This is where frontier LLMs still earn their keep.

I do recommend SLMs when:

  • The task is bounded enough to define an output schema, a finite label or both.
  • Volume is high enough that unit economics matter.
  • The business can state what happens when the model is uncertain or wrong, including who reviews exceptions.

If any of the above are unclear, the problem is workflow design and not model selection.

In practice the right frame is not which model is smarter, but which produces the best outcome per unit of cost and risk.

Dimension | SLM | LLM
Cost per case | Lowest; enables broad rollout | Highest; must be rationed
Latency | Usually better; easier to hit 95th/99th percentile targets | Often slower, especially at long context
Data boundary | Easier to keep private via self-hosting or minimizing data sent externally | Higher governance overhead if the model is external
Best at | Routing, extraction, templated summaries, RAG retrieval answers | Ambiguous reasoning, synthesis, nuanced drafting
Failure surface | Contained; schemas, validators and escalations limit blast radius | Needs guardrails, but errors in complex reasoning are harder to catch
Architectural pattern | Default engine with escalation routing built in | Escalation tier reserved for exceptions
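The "default engine with escalation routing" pattern from the comparison above might look like this in skeleton form; the confidence heuristic is a deliberately crude stand-in for whatever scoring a real system would use.

```python
def slm_answer(task: str) -> tuple:
    """Placeholder SLM call returning (answer, confidence).
    Here, long inputs simply get low confidence as a toy heuristic."""
    return ("routed", 0.95) if len(task) < 200 else ("routed", 0.40)

def route(task: str, threshold: float = 0.8) -> str:
    """Run the SLM by default; escalate low-confidence cases to a frontier LLM."""
    answer, confidence = slm_answer(task)
    return "slm" if confidence >= threshold else "frontier_llm"
```

The unit economics come from the fact that the cheap path handles the bulk of traffic and the expensive tier only sees the exceptions.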

The remaining question is whether a general SLM is sufficient or whether the domain is specific enough that the generality becomes a liability. This is where domain-specific small language models (DSLM) appear and the SLM strategy becomes a competitive differentiator.

From SLM to DSLM

A DSLM is where SLM strategy becomes a competitive advantage rather than a cost play. I think of DSLM as an SLM fine-tuned on the language, labels and edge cases of a specific workflow. The goal is stable, structured output, not broad generalization. The fine-tuning is supported by governance processes that treat model updates the way engineering teams treat software releases.

Some have equated this to a permanent, embedded RAG; however, I avoid describing fine-tuning that way. Fine-tuning changes what the model intrinsically understands. Retrieval augmented generation (RAG) changes what the model can access at runtime. They solve different problems and, in mature systems, they are complementary. I recommend using both: the DSLM as the inference engine, with RAG layered on for cases where the model needs current or use case-specific information it has not been trained on.

In my experience, DSLMs outperform general SLMs because domain tuning reduces brittleness on edge cases. They also outperform LLMs in high-volume, well-defined workflows, where cost and stability dominate, and in regulated environments the data never needs to leave your infrastructure.

The tradeoff is discipline. A DSLM demands curated training data, evaluation sets tied to workflow outcomes, regression gates before any update ships, versioning and a tested rollback path. The same specificity that made it reliable inside a workflow makes it brittle outside it. Every time the underlying workflow changes, the model potentially needs retraining. Teams that skip the discipline end up with a model that drifts quietly and fails loudly.

For governance, the NIST AI Risk Management Framework is a practical anchor because it is designed to be operationalized and adapted.

Adoption roadmap

I recommend a four-stage maturity sequence where order matters more than pace:

  • Learn the workflow: Start with a capable model to map failure modes and build a gold evaluation set tied to real outcomes.
  • Standardize the controls: Define schemas, validators, escalation pathways and audits. This is where reliability becomes systemic.
  • Run a portfolio: Default to SLM for routine high-volume work and route exceptions to a frontier LLM. This is where unit economics become predictable.
  • Specialize when it pays: Introduce DSLM fine-tuning only when the workflow is stable enough to justify the lifecycle investment.

The model landscape will keep shifting, context windows will grow, benchmarks will move and new tiers will appear between what we call small and frontier today. What will not change is the underlying question: How you run AI at scale, across real workflows, without turning cost, latency and data boundaries into systemic risks. Enterprises that answer that question well will not do it by chasing the most capable model. They will do it by building the operational discipline first and treating model selection as a downstream decision.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Why most AI strategies fail and how to design one that actually sticks

30 de Abril de 2026, 10:00

Most organizations today include AI in their strategic roadmaps. These strategies often focus on selecting technologies, defining use cases and executing deployments. Yet many fail to generate sustained impact.

What is often missing is not ambition or capability, but a clear design for how AI should be deployed into real work.

This gap explains a familiar pattern:

  • Pilots that never scale
  • Tools that generate resistance instead of value
  • Automation that erodes judgment
  • “AI adoption” initiatives disconnected from daily reality

The problem is rarely the AI itself. It is the absence of deployment design — the deliberate architecture that connects strategic intent with how work is performed. This idea echoes earlier work on augmenting human intellect, which framed technology not as a replacement for human capability, but to extend it.

From AI strategy to AI deployment

Traditional AI strategies tend to focus on capabilities, data and platforms, governance and risk and lists of potential use cases.

These elements are necessary but insufficient, because they explain what AI can do but not how it should be integrated into the organization without distorting how people work, think and decide. Deploying AI is therefore not a simple technical rollout but a design problem. Research on human–AI collaboration, including work published by California Management Review, consistently shows that value emerges when AI systems are designed to complement human judgment rather than replace it.

Different types of work require different forms of AI: Some tasks benefit from direct automation, others demand supervision, some should remain human but supported by cognitive guidance rather than production, and a few should not be touched by AI at all — at least not yet. Applying the same deployment logic everywhere is how AI strategies fail.

What is AI strategy deployment design?

AI strategy deployment design is the discipline that defines how AI should be introduced into work — in what form, at what scale and with what type of human–AI relationship.

Rather than treating AI as a generic capability, it frames it as an intervention into work, with cognitive, cultural and organizational consequences.

The goal is not maximal automation, but the right fit between AI and the nature of the work.

It provides a structured way to translate AI strategy into coherent forms of deployment across an organization.

Instead of starting from technologies or use cases, the framework starts from work itself — how it is performed, by whom, at what scale and with what cognitive and cultural implications.

Its purpose is not to maximize AI usage, but to design the right form of AI intervention for each type of work, ensuring alignment between strategic intent and everyday execution.

The 4 core elements of the deployment design

The framework is built on four foundational elements. Together, they allow organizations to reason systematically about how AI should be deployed, not just where. Importantly, this does not reject a task- or process-level focus; rather, it reframes it. The true scope of deployment is the task as it exists within a specific role, context and way of working — not the task in isolation.

1. Nature of the work (Repeatability × Creativity)

This dimension captures whether work is repetitive or variable, and whether it requires judgment, originality or non-deterministic thinking.

It distinguishes between:

  • Mechanical work suitable for automation
  • Creative work requiring augmentation or supervision
  • Work that should remain primarily human

2. Scale of impact (Users affected)

The same task requires different deployment approaches depending on whether it is performed by a few specialists or across large populations.

Scale determines whether AI should be:

  • Personal and flexible
  • Standardized and organizational
  • Governed through explicit controls

3. Perception of the task (Positive × Negative)

Beyond structural characteristics, the framework explicitly considers how a task is experienced by the people who perform it. Task perception captures whether an activity is generally seen as valuable, meaningful and identity-building, or as burdensome, frustrating and low-value.

This dimension does not determine whether AI can be applied, but strongly influences how it should be introduced. In highly repetitive, low-creativity work, perception mainly affects adoption narratives and change management. In creative or judgment-heavy work, perception often signals whether creativity is authentic or degraded, and whether AI should automate, augment or stay out altogether.

4. Deployment intent

Different interventions pursue different intents:

  • Efficiency and cost reduction
  • Individual productivity
  • Development of advanced capabilities
  • Quality, consistency and risk control.

Making deployment intent explicit avoids hidden mismatches between expectations, outcomes and organizational response. It also creates the necessary bridge to a subsequent, more technical decision layer: Once the deployment intent and the nature of the work are clear, organizations can then assess which type of solution is most appropriate — whether AI-based, RPA-driven or a traditional information system — as well as the associated implementation complexity. While this solution-selection step is critical, it sits outside the scope of this article, which deliberately focuses on the deployment design framework itself.

AI strategy deployment design

Together, these elements form a structured matrix that maps types of work to appropriate AI deployment patterns.

This matrix is not a prioritization tool, but a design instrument. It visualizes dominant deployment logics rather than cataloguing every possible case.

From this structure, six deployment zones emerge, grouped into five dominant logics.

[Figure: 4×4 matrix mapping nature of work × human impact. Credit: Raúl García Vega]

Based on the matrix, the framework consolidates the space into five dominant deployment zones. These zones are not strict categories, but recurring patterns that describe how AI should be deployed given the nature of work and its human impact.

The detailed 16‑cell grid supports rigor and operational use. For clarity, the article focuses on these five zones, which capture the essential deployment logic.

Zone | Type of work | Deployment logic | What to do | What to avoid
Out of Scope / Redesign First | Low creativity · Low repeatability | Not an AI problem | Eliminate, simplify, redesign | Automating broken work
Reengineering and Standardization First | Low creativity · Low repeatability · Scale | Stabilize before AI | Standardize, define rules, clarify processes | Premature automation
Quick Wins — Direct Automation | High repeatability · Low creativity · Many users | Efficiency at scale | Automate safely (AI/RPA) | Overengineering
Personal AI / Productivity | High creativity · High repeatability · Few users | Individual augmentation | Copilots, flexible tools, enablement | Standardizing outputs
SCG / Cognitive Augmentation | High creativity · Low repeatability | Cognitive support | Co-create, review, explore with AI | Replacing human judgment
Supervised Creative Automation | High creativity · High repeatability · Many users | Scaled creative systems | Agentic platforms + supervision | Uncontrolled automation
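As a rough illustration only, the dominant zone logic can be expressed as a small classifier over the dimensions in the table; real deployment decisions also weigh perception and intent, which this sketch omits.

```python
def deployment_zone(repeatability: str, creativity: str, many_users: bool) -> str:
    """Map (repeatability, creativity, user scale) to a dominant zone name.
    Inputs are 'low' or 'high'; this is a coarse illustration of the matrix."""
    if creativity == "low" and repeatability == "low":
        return ("Reengineering and Standardization First" if many_users
                else "Out of Scope / Redesign First")
    if creativity == "low" and repeatability == "high":
        return "Quick Wins — Direct Automation"
    if creativity == "high" and repeatability == "low":
        return "SCG / Cognitive Augmentation"
    # high creativity, high repeatability
    return ("Supervised Creative Automation" if many_users
            else "Personal AI / Productivity")
```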

Conclusion

Beyond frameworks and matrices, the current market context matters. After years of inflated expectations, organizations are increasingly fatigued by abstract AI promises and are shifting toward practical, reusable use cases and plug-and-play solutions that promise fast results.

This shift is understandable — and in some areas effective. However, relying exclusively on standardized solutions overlooks a structural reality: Successful AI deployment depends less on technology and more on understanding how work is actually performed.

Jobs are not defined by a single type of task or a single deployment zone. In practice, most roles combine activities that span multiple zones of the framework. This mix — rather than any individual task — determines how AI should be introduced, governed and scaled within an organization. Treating roles as monolithic leads to oversimplification and unrealistic expectations.

For this reason, managing expectations is as important as selecting technology. In most cases, AI deployment will continue to require human intervention, supervision and judgment by design. Not everything is, or should be, fully automatable.

Ultimately, the AI strategy deployment design framework shifts the conversation away from where to use AI toward a more durable question: What type of human–AI relationship makes sense for each form of work, and where human judgment must remain by design.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


How NOV is moving from FOMO to calculated scaling

30 de Abril de 2026, 07:00

For decades, the industrial sector has operated on the simple mantra to live by automation, die by automation. In the oil and gas industry, where precision is measured in millimeters and safety in lives, automation is a necessity, not just nice to have. But as gen AI sweeps through the enterprise, a new challenge has emerged in how a global leader in energy services should transition from experimental chatbots to industrial-grade AI without compromising safety or security.

Here, Alex Philips, CIO of NOV, formerly National Oilwell Varco, discusses implementing OpenAI and securing it with zero trust for 25,000 employees, and why the next phase of agentic AI requires a fundamental shift in how to view human expertise and digital safeguards.

From FOMO to ROI

Like many global companies, NOV’s initial move into gen AI was driven by executive pressure fueled by fear of missing out. Philips remembers the early talks with his CEO about the investment.

“I said we have this opportunity, and it costs this much,” he says. “He asked about the ROI and I replied that’s something I couldn’t calculate, nor what it’d replace or what it’d displace in cost, but I couldn’t say any of that for email either.”

Just as no modern business can function without email, even without a direct line-item ROI, Philips argues that LLMs will soon become the standard for employee productivity. Currently, NOV reports about 50% of its workforce actively use the tool to enhance productivity.

The results, though qualitative, are profound. Philips says that response times for urgent customer requests, for instance, have plummeted, language barriers are crumbling, and employees are tackling complex analyses once considered out of reach.

The six-month validation lesson

One example Philips details involves an engineer who spent six months mastering a highly specialized skill. With ChatGPT, the engineer was able to replicate that six-month learning process in just 10 minutes.

And while the engineer's initial reaction was to think he had wasted six months of his life, Philips's response was to show him that those six months were what enabled him to validate what the AI told him. “This is a great example of why humans are still needed in the AI loop,” says Philips. “AI execution without human validation can lead to errors that cost companies significant time and money.”

This underscores a crucial pillar of NOV’s AI strategy: human accountability. In an industrial setting, “the AI told me to” is never an acceptable excuse. Whether designing a drill bit or automating a workflow, the end user remains responsible for the output.

Securing the Wild West of shadow AI

As AI becomes more widespread, shadow AI poses a significant security risk. To address this, NOV uses Zscaler to route all traffic and ensure visibility and control. By doing so, the company can:

  • Redirect users: If an employee tries to use a non-approved LLM, they’re redirected to a page that explains NOV’s policy, and directed to the approved enterprise OpenAI instance.
  • Monitor SaaS evolution: Many authorized SaaS applications are now adding agentic features during contract periods. Zscaler provides the visibility needed to identify these changes before sensitive IP is fed into an unvetted model.
  • Enforce data privacy: Preventing intellectual property from leaking into public training sets is the first step in any industrial AI deployment.
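The redirect behavior described above can be sketched as a simple proxy-style policy check. This is an illustrative sketch only: the hostnames, policy URL, and function name are invented, and it does not reflect NOV's actual Zscaler configuration.

```python
# Hypothetical sketch of a shadow-AI redirect policy: requests to
# unapproved LLM hosts are routed to an internal policy page that
# points users at the approved enterprise instance.

APPROVED_LLM_HOSTS = {"api.openai.com"}            # approved enterprise instance
POLICY_PAGE = "https://intranet.example.com/ai-policy"  # illustrative URL

def route_llm_request(host: str) -> str:
    """Return the destination for an outbound LLM request."""
    if host in APPROVED_LLM_HOSTS:
        return host          # approved: pass through unchanged
    return POLICY_PAGE       # unapproved: redirect to the policy explainer
```

In practice this decision lives in the secure web gateway itself, not in application code; the sketch only shows the shape of the rule.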

The shift to agentic AI

In software development, NOV already benefits from AI-assisted coding, where AI works alongside developers who accept about 32% of AI suggestions. “We’re now beginning to explore the next evolution of full agentic coding,” says Philips, adding that this next stage truly supercharges teams, enabling them to move faster and better meet customer demand for innovation.

However, this efficiency feeds a dilemma: a widening talent gap. The challenge moving forward is that if all the low-level, entry-level tasks can be automated, what’s the best way to develop skilled workers? “I don’t know how we’ll adapt to it, but we’ll figure it out,” he says.

Safety first

In the oil field, some processes are too critical to be left entirely to a black-box algorithm. Philips is adamant that for safety issues, AI remains an advisor, not a decider. NOV uses AI-powered vision to monitor red zones, or dangerous areas on a drilling rig. If the AI detects a person in a restricted area, it can trigger an emergency stop. However, for actual drilling operations, the final call remains with an onsite human operator. “You can’t have a hallucination,” he says. “You can’t say it’s right 90% of the time. It has to be all the time.”
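The “advisor, not decider” split Philips describes can be sketched as a small event handler: the safety interlock may act on its own, while operational decisions only produce a recommendation until a human confirms. The event names and return values are invented for illustration and do not represent NOV's actual control logic.

```python
# Illustrative sketch: AI may trigger a safety stop autonomously,
# but drilling operations always require a human operator's sign-off.

def handle_event(event: str, human_approved: bool = False) -> str:
    if event == "person_in_red_zone":
        # Safety interlock: the AI is allowed to act alone here.
        return "EMERGENCY_STOP"
    if event == "start_drilling":
        # Operational decision: AI only recommends; a human decides.
        return "DRILL" if human_approved else "AWAIT_OPERATOR"
    return "NO_ACTION"
```

The asymmetry is the point: false positives on the safety path cost downtime, while false positives on the drilling path could cost lives, so only the first is automated.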

NOV’s journey shows that transitioning to industrial-grade AI isn’t just about choosing the best model but building a framework of trust, transparency, and responsibility. By using Zscaler for governance and GitHub Advanced Security for code validation, NOV is moving toward a future where AI becomes more essential to the oil industry.

“Development teams should produce twice the output with half the people in half the time,” he says. “The only remaining question is how do we train the next generation of developer experts to control the machines that do the work.”


Companies keep chasing incremental, not transformative, gains with AI

April 30, 2026, 06:13

Most companies have yet to harness the transformative power of AI, focusing instead on incremental productivity and efficiency improvements that don’t lead to competitive advantage, according to a report from analyst firm Forrester. Internal productivity gains from AI remain marginal, not substantial, because organizations haven’t figured out how to generate more meaningful benefits from the technology, Forrester says in its recent report Accelerate Your AI Voyage.

The evidence: 43% of AI decision-makers surveyed by the firm measure productivity improvements from AI, and 41% measure efficiency gains, but only 32% tie AI outcomes to profit or revenue.

“Saving 10,000 hours of employee labor may look good on paper, but it won’t cover the GPU bill, much less drive reinvention,” Forrester analysts write in the report. “This incremental thinking forms the basis of a fundamental disconnect from the promise of AI’s transformative potential.”

Only 5% to 15% of organizations currently have an effective AI strategy, and the share likely sits closer to the low end, estimates Brian Hopkins, VP of emerging technologies at Forrester. By focusing on productivity or efficiency gains, most organizations miss AI’s real power, he adds. “Efficiency isn’t strategy; it’s project management. You’re trying to incrementally improve your current processes,” he says.

Giving employees a copilot to see what they do with it isn’t a winning approach, Hopkins adds: “This whole idea that we’re going to invest incrementally in productivity and somehow that’s going to capture the potential AI offers is a pipe dream.”

Moreover, AI productivity gains often depend on headcount cuts after deployment, he adds. “The problem with incremental productivity improvements is that, to get the benefits your CFO demands, you have to deploy a solution, prove it works, and then lay people off,” he says. “Do you think the people you’re going to lay off are going to help you do that? They won’t. It’s complicated, unpleasant work.”

Forrester’s data lines up with another recent survey from AI agent platform vendor Decidr, which found that 40% of US companies get most of their AI value from ChatGPT-style tools rather than custom AI agents or models.

Outdated ways of thinking

Other IT leaders also see the problems highlighted in Forrester’s research. Many companies are focused on AI strategies fit for horse-drawn carriages in a world moving toward self-driving cars, says Christine Park, head of AI transformation at Branch, provider of a mobile link-tracking platform.

“This is exactly what happens when the market moves faster than the operating model. Leaders are narrowly optimizing for efficiency within functions instead of rethinking how the work itself should fundamentally change,” she says.

Productivity and efficiency improvements won’t be game-changers for most organizations, she adds. True AI transformation isn’t limited to enabling individual functions; it requires coordination across entire workflows. “AI for cost efficiency raises the floor, so what?” Park says. “If it’s just an efficiency strategy, you’re not going to get more than short-term gains. If you weigh cost-cutting against efficiency, we can grow without a proportional increase in headcount, but it takes true transformation to raise the ceiling.”

Instead, smart organizations will focus on AI as an amplifier of both revenue and people’s expertise, she adds. The nature of work is changing, as it now unfolds in multidimensional workflows rather than step-by-step tasks, she says.

“AI is being treated as a feature when it should be treated as a transformation,” she says. “That means taking a people-centric view and having leaders change how we train people, define roles, and measure success. AI is a human change, not just a new tool.”

Genuine workflow transformation

Organizations should pursue enterprise-wide workflow transformation, adds Mike Flynn, technology sector consulting leader at professional services firm EY. Many organizations focus on task-level automation rather than redesigning workflows end to end, he says.

By focusing on task-level improvements, companies add AI tooling and compute costs without removing a significant amount of work from the system, leading to what Flynn calls “trapped work.”

Organizations should take an AI-first approach to all their workflows and try to redesign processes to eliminate repetitive human work wherever possible, Flynn recommends. They should then add human intervention where it’s needed, he says. “If you think about applying AI to your business problem, as you keep bolting on AI, the effort required keeps going up, compared to redesigning your processes so that AI is built into them,” he adds.

Building a durable AI strategy goes beyond rolling out a few AI tools for employees, Flynn says, adding that EY walks clients through an AI value plan that shows them the potential outcomes of various AI strategies. “Companies are realizing this isn’t as easy as simply enabling and giving people tools they can do something with and bolt onto their current jobs,” he adds. “For me, the important thing is thinking about redesigning operating processes. It’s a process and people transformation as much as it’s about the AI itself.”

Are organizations ready to make the leap?

Most organizations aren’t yet ready to take the next step, suggests Thomas Prommer, former president of design, IT, and AI firm Huge. Substantial use cases, such as pricing reviews and supply chain decision-making, require model risk management practices and audit trails that most companies don’t yet have, he says.

“Internal productivity is the only use case the organization can actually test safely under current governance,” Prommer says. “They’re using copilots because copilots don’t need a model risk committee.”

Moreover, moving from incremental to substantial AI-driven gains requires someone or something to force the change, such as a CEO, an activist investor, or a competitive shock, he adds. CIOs can rarely drive the change on their own, he says.

Still, some organizations have moved past productivity savings because they don’t show up in the P&L, Prommer says. “If you save an engineer 90 minutes a day, that doesn’t show up on the income statement; it shows up as, ‘We shipped 15% more features,’” he says. “Boards want a concrete line item. The companies that moved to substantial use cases did so because they had a single P&L owner willing to put their numbers on the line for it.”

Forrester’s Hopkins urges organizations to rethink their AI strategies and focus on substantial change, despite the difficulties. If organizations aim high enough, they can use AI to enable full business transformation and find AI uses that drive competitive advantage, he says.

Forrester advises IT and business leaders to focus on four key areas:

  • Define business outcomes and success metrics for their AI initiatives.
  • Identify specific use cases for AI deployment aligned with those business outcomes.
  • Establish a structure for planning, testing, and deploying AI applications.
  • Scale AI applications using the power of the cloud, frontier models, and embedded agents.

If organizations take the right approach, they can deploy AI in ways that generate real competitive advantage, according to Hopkins. “Strategy is about applying massive force, based on a vision you have, that strengthens you and weakens the competition. You have a vision your competitors don’t see, and you establish a capability your competitors can’t replicate.”


“Do more with fewer resources”: how global CIOs are breaking through IT productivity limits with AI

April 30, 2026, 03:59

Mike Anderson, chief digital and information officer at security technology company Netskope, gave his IT staff an unusual assignment: create “Gemini Gems” digital twins reflecting their own roles, and feed the AI technical documentation and other information so it learns each role’s tasks and required skills.

Anderson expects these AI-based digital twins to support employees in their work, letting them look up the information they need in near real time with a simple query.

“We created Gems that act as experts the team can lean on,” he says. “The goal is for each employee to save a certain amount of time.”

The effort is one of several workflow and process transformation strategies Anderson is pursuing to raise the IT department’s efficiency and productivity, and some results are already showing.

For example, development teams are using AI to generate code. Employees “vibe code” quick first drafts and then iterate on them, shaving months off existing product development timelines. Anderson’s team has also built “primitives” that ensure certain elements, particularly security controls, are always included in AI-generated code, cutting the hours IT staff spend on that work.

Anderson hasn’t calculated a specific ROI on these efficiency gains, but stresses that the result is getting more done with fewer resources. “Even on a flat budget, we can deliver more features and more output than before,” he says.

“More done with the staff we have”

CIOs have long been under pressure to do more with less, and that pressure is only growing. According to research firm Gartner, 57% of CIOs face demands to increase productivity, and 52% face demands to cut costs.

At the same time, CIOs are tasked with using technology, especially AI, to transform enterprise workflows and raise productivity and efficiency. The same change is expected inside the IT department itself.

“We need to create value by boldly redefining IT processes with AI and exploring new possibilities,” says Anisha Vaswani, chief information and customer officer at networking company Extreme Networks.

Vaswani has made IT workflow transformation a core priority. Like Anderson, she’s using AI, particularly Claude Code, to speed up coding. She has also reshaped how work gets done, shifting staff from writing code themselves to focusing on prompt design, reviewing results, and quality control.

Like other CIOs, she’s also changing helpdesk operations, using AI and automation to expand the share of self-service.

Results are showing up in more complex workflows, too. Vaswani is scaling IT’s QA function by using AI to generate test strategies and automate testing. “Work that took weeks by hand could very likely be cut to minutes,” she says. She also expects AI to let the team expand capacity without increasing costs.

She’s likewise exploring ways to use AI to gather user requirements more effectively when building new products or improving features.

“I want to redefine how IT interacts with business partners to make it more agile and responsive,” Vaswani says. “That lets us deliver value more often and strengthen customer centricity.” She adds: “The goal is to get more done with the staff we have: to innovate faster and produce more.”

IT under growing pressure to change: the key is workflow redesign

Alex Wyatt, a director at consulting firm West Monroe, says IT work is inherently process-driven, which makes it an area with great transformation potential.

“AI has reignited this conversation,” he says. “CIOs are under growing pressure to cut costs, and boards are demanding, ‘You need to make this process 50% more efficient.’”

Wyatt advises CIOs to do what the rest of the organization does: start with relatively easy wins, build up results and capability, and then expand into harder areas. “There are multiple stages to workflow and process optimization,” he explains.

The early stage is automating repetitive work with AI and shifting staff into oversight roles, while making full use of capabilities already built into the tools and technologies IT uses.

“This stage delivers results fastest and has the best return on investment,” Wyatt says. “After that, you go after more advanced opportunities.”

He stresses, though, that AI isn’t the only answer. “AI has put workflow and process transformation in the spotlight, but pure process-improvement opportunities still exist,” he says, citing lean process design as an example.

“There’s a risk of automating an inefficient process as-is, so you have to think about how to restructure the work around the outcomes and KPIs you want,” he adds. “Efficiency doesn’t come from simply adding tools; you have to redefine the workflow and the way work itself gets done.”

That advice reflects a long-proven approach. “Aggressive workflow and process redesign can deliver improvements of 50% or more,” Wyatt says. “That requires fundamentally rethinking how work gets done and building the systems to support it.”

Not every CIO can pursue transformation at that level, but that’s no reason to put it off, he notes. “Basic process redesign alone can yield improvements in the 10% to 20% range, and adding AI on top can deliver more,” he explains.

Ross Tisnovsky, partner and head of CIO research and advisory at Everest Group, likewise stresses the importance of transformation. “Automating without redesigning workflows can cause problems,” he warns.

Tisnovsky points to coding and testing as an example. AI has boosted coding productivity by more than 70%, while testing efficiency has improved only about 30%. As a result, in organizations that haven’t restructured their workflows, code generation outpaces the capacity to test it.

“Many AI projects fail to deliver the expected value because the workflow wasn’t redesigned along with them,” he says.

IT transformation under dual pressure: the key is setting clear goals

Wyatt points out that CIOs face another challenge as they transform how the IT organization works.

CIOs and IT organizations already devote significant resources to supporting transformation efforts across the rest of the business, and areas that drive direct results, such as revenue growth, market share, and customer retention, often take priority.

“IT is in a unique position. It’s driving enterprise-wide transformation while also being asked to transform itself,” Wyatt explains. “It’s under dual pressure.”

That leaves CIOs struggling to allocate enough resources and capacity to redesign IT’s own workflows.

Changing entrenched workflows isn’t easy either. “A lot of work could be designed completely differently if you were building from scratch, but legacy workflows make it hard to build momentum for change,” Wyatt says. “Redesign takes significant time and money.”

He says leading CIOs overcome this much as other executives do: they build a business case that proves the need for change, clarify the goals they want to achieve, and spell out the value the outcome will deliver in order to secure the needed resources.

“They also take the approach of overhauling workflows whenever the opportunity arises,” he adds.

Tisnovsky expects this momentum to carry CIOs into progressively more complex areas, forecasting that change will take hold in advanced workflow domains such as infrastructure operations and IT knowledge management.

Employee-driven work transformation spreads

Patrick Phillips, CIO of technology company Vasion, judging that AI alone can’t deliver maximum efficiency, is combining process-improvement experience with employees’ frontline insight to transform IT workflows.

“It’s not about bolting AI onto existing processes; you have to completely redefine the process itself,” he says. “We’re asking what a process would look like if we designed it from scratch today with AI-native tools.”

Phillips has employees identify the areas ripest for transformation themselves, focusing on workflows heavy with repetitive, standardized work.

“I expect employees to think about how to change the way they work,” he says. “Our role is to provide the tools, the training, and the authority to redesign workflows from the ground up.”

A flagship example is the helpdesk team. Phillips rolled out the AI-powered code editor Cursor to the team and challenged them to design their ideal helpdesk.

“Employees already knew where the pain points were, and they had plenty of motivation to make their work easier,” Phillips says. “They want to do work that’s more valuable and interesting than simple password resets, so they drive the efficiency themselves.”

As a result, the helpdesk team restructured its workflow, raised efficiency, and redirected the time saved to higher-value work such as participating in planning meetings.

Shifting to a culture of continuous improvement

Ha Hoang, CIO of data protection and cyber resilience platform company Commvault, says this change is a must for CIOs.

“CIOs have long focused on transforming business workflows with clear ROI: revenue, finance, customer support,” he says. “But IT itself has been a relatively neglected area. That has to change now.”

“If IT is going to lead the organization’s transformation, it has to set the example,” he adds. “You can’t preach automation, AI, and efficiency while internally you’re stuck with ticket queues, manual-heavy processes, and constant context switching.”

“So CIOs need to apply the same rigor to their internal IT workflows,” he stresses. “It’s the fastest way to build trust and achieve cost efficiency at the same time.”

Hoang and his team are using AI, gen AI, and agentic capabilities as the occasion to fundamentally rethink workflows themselves, not merely optimize them.

“We used to ask, ‘How do we make this process faster?’” he explains. “Now we ask, ‘Why does this process exist?’”

That approach is driving change across multiple areas.

The team started with the helpdesk. It introduced AI-based self-service and virtual agents, automated ticket triage and routing, and set recurring issues to resolve automatically. As a result, fewer tickets reach IT staff, resolution is faster, and the work has shifted from reactive to value-driven.

The transformation is now expanding across core IT workflows.

For example, the team has built policy-based access provisioning to minimize manual approvals. It’s also using data and AI to streamline low-risk changes and reduce bottlenecks in change management. And it has introduced AI-powered search and assistants to break down silos in knowledge management and troubleshooting and to speed up response.

Hoang stresses that these changes are only the beginning.

“Workflow transformation is no longer a one-off; it’s a continuous activity,” he says. “The biggest change isn’t the technology. It’s the shift in mindset.”
dl-ciokorea@foundryco.com


Your AI agent is ready to go. Is your infrastructure?

April 29, 2026, 07:00

IDC estimates there were over 28 million AI agents deployed by the end of last year, and predicts there’ll be over 1 billion actively deployed by 2029, executing 217 billion actions per day.

It’s easy to build an AI agent POC, says Venkat Achanta, chief technology, data, and analytics officer at TransUnion, a global credit reporting company with $4.6 billion in revenues. But governing, securing, and scaling it are a whole other challenge, especially for companies in highly regulated industries such as financial services and healthcare.

To address the problem, TransUnion spent the last three years building its agentic AI platform, OneTru. The goal was to make something as reliable and deterministic as the old, scripted, expert-style systems but as flexible as gen AI, and as easy to interact with as a chatbot.

The trick, however, was to combine the best of both worlds by using old-school systems for core processes where explainability and reliability are key, and layering in gen AI functionality in limited ways for the tasks it was uniquely suited for. And since the infrastructure to do this wasn’t available, TransUnion built its own, allocating $145 million to the project.

That was a big investment in an unproven technology, but it’s already led to $200 million in cost savings. More than that, once the platform was built, TransUnion used it to build customer-facing solutions.

In March this year, for example, TransUnion released its AI Analytics Orchestrator Agent, built using the OneTru platform and powered by Google’s Gemini models. The agent is already being used by TransUnion internally to improve analytics, and can also be used by customers to run sophisticated data analysis without the need for data scientists.

Many clients use TransUnion’s data but don’t use other solutions and platforms, Achanta says. The new orchestrator agent has the potential to help customers get more value out of the data, and unlock new revenue streams for the company.

And more agents are in the works, Achanta says. The key to making them work is the orchestration, governance, and security layers. Just making an agent do something is very easy for anyone, he says, and can take just a few days. The company can also create agents quickly. “But I have the foundation and guardrails, and the agent sitting on my platform uses all of them,” he says. “That’s what gives us power.”

The secret to making AI agents behave is to separate the layers of the task and assign each layer to a different system, each one operating under a set of constraints. This approach limits the damage any particular agent can do, creates a system of checks and balances, and restricts the riskiest activities to a pre-gen AI technology.

For example, at TransUnion, the core decision-making is performed by an updated version of an expert system. It operates under a set of well-defined, auditable rules and works predictably, cost-effectively, and at low latency. When it encounters a situation it hasn’t seen before, an LLM is used to analyze the problem, a different agent might then turn it into a new rule, and then a human might be called in to review the results before the new rule is added to the expert system. There are different agents that understand the semantic layer, interact with humans, and perform other tasks.

“With the neural reasoning layer — the LLM — we put humans in the loop,” he says. “When it’s a symbolic reasoning layer, which is logic and machine-learning-driven, we let it be automated.”

So when each agent operates within very narrow constraints, on just the limited data it needs for that one task, and is limited to what it can do, the entire system becomes much more governable and reliable.
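The layered split described above — a deterministic symbolic layer that handles known cases automatically, with unseen cases escalated to an LLM plus human review — can be sketched as a small decision function. The rules, thresholds, and labels here are invented for illustration and are not TransUnion's actual OneTru internals.

```python
# Illustrative sketch of layered agentic decisioning:
# known cases run through auditable rules; unknown cases are
# escalated rather than decided by the LLM directly.

def decide(score: int) -> tuple[str, str]:
    """Return (decision, layer) for a credit-style score."""
    # Symbolic layer: well-defined, auditable rules, fully automated.
    if score >= 700:
        return ("approve", "rule")
    if score < 500:
        return ("decline", "rule")
    # Unseen middle ground: route to the neural layer for analysis,
    # with a human reviewing any new rule before it's adopted.
    return ("escalate_to_llm_and_human", "review")
```

The key property is that the LLM never issues a final decision; at most it proposes a new rule that a human approves before the symbolic layer adopts it.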

It’s like the difference between an assembly line, where multiple workers each do a single, distinct task, and a workshop where a single artisan does everything. The assembly line works faster and more reliably, yet today many enterprises deploy their AI agents as if they were craftsmen. That approach can produce creative, unique results, but it isn’t always what a company needs.

Nicholas Mattei, chair of the ACM special interest group on AI and professor at Tulane University, suggests that companies focus on building in extra security at points where different parts of the agentic system connect.

“Make sure you have security at the seams,” he says. For example, if an agent sends requests to an email service, set up a checkpoint between the two. “Around the gaps between the unreliable agents and where the traditional software lives, that’s where you want to focus your security processes,” he says.
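The "security at the seams" idea can be sketched as a checkpoint function that sits between an unreliable agent and a traditional service, approving only vetted actions before they cross the boundary. The action names and domain rule are invented for illustration, not a specific product's API.

```python
# Illustrative checkpoint at the seam between an agent and an email
# service: only allowlisted actions pass, and outbound mail is
# restricted to a known domain.

ALLOWED_ACTIONS = {"send_draft", "read_inbox"}   # hypothetical allowlist
TRUSTED_DOMAIN = "example.com"                   # hypothetical policy

def checkpoint(action: str, recipient_domain: str) -> bool:
    """Approve an agent request only if action and destination are vetted."""
    if action not in ALLOWED_ACTIONS:
        return False                              # unknown action: block
    if action == "send_draft" and recipient_domain != TRUSTED_DOMAIN:
        return False                              # unknown destination: block
    return True
```

The checkpoint deliberately lives in traditional, deterministic code: the agent can be unreliable, but the seam it must pass through is not.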

Building a security foundation for agentic AI

In a Jitterbit survey of 1,500 IT leaders released in March, AI accountability — security, auditability, traceability, and guardrails — is the biggest factor when it comes to the final AI purchase decision, ahead of speed of implementation, vendor reputation, and even TCO. Security, governance, and data privacy risks were also top issues preventing AI initiatives from moving to production, ahead of costs and integration challenges. And they’re right to be worried.

Earlier this year, researchers at cybersecurity firm CodeWall were able to breach McKinsey’s new AI platform, Lilli. Using an AI tool of their own, the researchers said they could access 47 million chat messages, 728,000 files, 384,000 AI assistants, 94,000 workspaces, 217,000 agent messages, nearly 4 million RAG document chunks, and 95 system prompts and AI model configurations.

“This is decades of proprietary McKinsey research, frameworks, and methodologies — the firm’s intellectual crown jewels sitting in a database anyone could read,” the researchers wrote.

The reason? Out of over 200 publicly exposed API endpoints, 22 required no authentication. It took just two hours for the researchers to get full read and write access to Lilli’s entire production database. McKinsey responded quickly to the alert, patched the unauthenticated endpoints, and took other security measures.

“Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party,” the firm said in a statement.

IDC says the incident underscores just how dangerous the breach of an AI system can be to an enterprise.

“Most companies are still thinking about AI risk in yesterday’s terms: data leakage, bad outputs, and brand reputation damage,” says Alessandro Perilli, IDC’s VP for AI research. “Those are serious issues, but the bigger risk becomes delegating authority to AI systems.”

By getting access to an agentic AI platform, an attacker doesn’t just see something they’re not supposed to; they can also covertly change how the company acts. And securing enterprise-scale agentic AI systems like Lilli is only half the challenge. According to Gartner, 69% of organizations suspect employees use prohibited AI tools, and 40% will experience security or compliance incidents by 2030 as a result.

But available discovery tools aren’t fully ready to find AI agents, Gartner says.

“If I asked you how many agents run in your enterprise right now, where are you going to go look it up?” asks Swaminathan Chandrasekaran, global head of AI and data labs at KPMG, which now has several thousand AI agents in production. “Have they all been onboarded and have identities? Have they gone through a proper authentication process and who’s in charge of them? That piece of infrastructure doesn’t exist.”

Tools are just starting to emerge, however, or companies are creating DIY solutions, he says. “That’s what’s going to give CIOs peace of mind,” he says.

We’re already seeing public examples of individual employees deploying powerful agentic AI with negative consequences. Summer Yue, Meta’s alignment director, recently decided to use OpenClaw, a viral open-source agentic AI tool, to help handle her inbox. After it worked in a test inbox, she deployed it for real.

“Nothing humbles you like telling your OpenClaw to confirm before acting and watching it speedrun deleting your inbox,” she wrote on X. “I couldn’t stop it from my phone. I had to run to my Mac mini like I was defusing a bomb.”

In the past, an employee might upload sensitive information to a chatbot or ask it to write a report that they’d then copy and paste, and pass off as their own. As these chatbots evolve into full-on agentic systems, the agents now have the ability to do anything a user has privileges to do, including accessing corporate systems.

To manage this new security risk, companies will need to move past role- and identity-based controls to intent-based ones, says Rakesh Malhotra, principal in digital and emerging technologies at EY.

It’s not enough to ask whether an agent has permission to access a system to make a change to a record, he says. Companies have to be able to ask, “Why are you changing this?” That’s a big challenge right now.

“The observability stacks don’t capture the intent of why the agent did something,” he says. “And that’s really important to understand. Trust is based on intent, and there’s no way for any of these systems to capture intent.”
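One way to move toward the intent-based controls Malhotra describes is to make a stated intent a required field of every agent action record, so the audit trail captures why, not just who and what. A minimal sketch with invented field names; real observability stacks would attach this to spans or structured logs:

```python
# Illustrative intent-aware audit record: an agent action without a
# stated reason is rejected outright rather than logged without one.

def audit_record(agent_id: str, action: str, intent: str) -> dict:
    """Build an audit entry; refuse actions that carry no intent."""
    if not intent:
        raise ValueError("agent action rejected: no stated intent")
    return {"agent": agent_id, "action": action, "intent": intent}
```

Requiring the field doesn't guarantee the stated intent is truthful, but it gives reviewers something to adjudicate, which is exactly what's missing today.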

If a human employee tried to refactor the entire code base, they’d be asked to provide a good reason for doing that. “And if you’re refactoring without any specific reason, maybe you shouldn’t do it,” Malhotra says. “With people, there are ways for this to be adjudicated. I don’t know how to do this with agents.”

Building a semantic data foundation for agentic AI

TransUnion’s Achanta repeatedly mentioned the semantic foundation of the company’s OneTru platform. Such an understanding of information helps systems understand not just what the data is, but what it means, and how it relates to other data. Gartner says developing a semantic layer is now a must-do for companies deploying AI.

“It’s the only way to improve accuracy, manage costs, substantially cut AI debt, align multi-agent systems, and stop costly inconsistencies before they spread,” the firm says.

By 2030, universal semantic layers will be treated as critical infrastructure, alongside data platforms and cybersecurity, Gartner predicts. And agents need context to be able to do anything meaningful with data, says KPMG’s Chandrasekaran. That’s where a company’s knowledge is contained.

“That’s your new IP for the enterprise,” he says. “Context is the new moat.”
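At its simplest, a semantic layer is metadata that tells an agent not just what a field is, but what it means and how it relates to other fields. A toy sketch with an invented schema (production semantic layers are far richer: metrics, lineage, and access policy, not just descriptions):

```python
# Toy semantic layer: field meanings and relationships an agent can
# query before deciding how to use the underlying data.

SEMANTIC_LAYER = {
    "acct_bal": {
        "means": "current account balance in USD",
        "relates_to": ["acct_id", "credit_limit"],
    },
    "credit_limit": {
        "means": "maximum approved credit in USD",
        "relates_to": ["acct_id"],
    },
}

def describe(field: str) -> str:
    """Render a field's meaning and relationships for an agent's context."""
    entry = SEMANTIC_LAYER[field]
    related = ", ".join(entry["relates_to"])
    return f"{field}: {entry['means']} (related: {related})"
```

An agent given `describe("acct_bal")` as context knows the value is a USD balance tied to a credit limit, rather than an opaque column name.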

For John Arsneault, CIO at Goulston & Storrs, creating a solid data foundation is also a way to avoid vendor lock-in.

“If you’re buying things and moving your data into them to create workflow automation or agentic work assistants, you’ll have a hard time getting out of it,” he says. “But if you take a data-centric approach, you can at least move from one to the other if there’s a shift in the marketplace.”

The law firm has migrated its client-oriented work products into NetDocuments, a document management system specifically focused on the legal industry. And for the rest of the data the company collects, it goes into Entegrata’s legal data lakehouse.

“Our goal is to have all our other applications eventually point at that data lake,” he says. “Then we’ll have these two environments where all the firm’s data exists, which will allow us to put any AI tool we use on top.”

It’ll also make the data flows easier to manage, he adds, and will enable the firm to adapt quickly to whatever AI technology comes next. “Whether gen AI, agentic, or Anthropic stuff, with the Cowork legal plugin, it’s very difficult to keep up with,” he says. “And it changes every six months.”

Agentic orchestration

The last part of the agentic infrastructure puzzle, after getting security guardrails in place and creating a usable data layer, is orchestration. Agentic AI systems require agents to talk to each other and to human users, and to interact with data sources and tools. It’s a complicated challenge, and this technology is still very much in its infancy, though moving quickly. The Model Context Protocol (MCP) is one such example, and is a key piece of solving the orchestration puzzle. AI vendors have been remarkably willing to cooperate here.

“When social networks were born, and Facebook and Twitter were discussing a standard protocol for interacting, nobody wanted to adopt their competitors’ protocol,” says Agustin Huerta, SVP of digital innovation and VP of technology at Globant, a digital transformation company. “Now everyone is going through MCP and maturing it as a standard protocol.”

But that’s not to say agentic integration has been solved. According to a Docker survey of more than 800 IT decision makers and developers, the operational complexity of orchestrating multiple components is the biggest challenge when it comes to building agents.

In particular, 37% of respondents say orchestration frameworks are too brittle or immature for production use, and 30% report testing and visibility gaps in complex orchestrations.

In addition, while 85% of teams are familiar with MCP, most say there are significant security, configuration, and manageability issues that prevent deployment in production. And there are other integration issues enterprises have to deal with.

“One problem yet to be solved is how to get a proper dashboard to control all these agents, to know exactly what’s going on with each of them,” says Huerta. “One dashboard will let you monitor agents built with OpenAI, and one is for agents that live on Salesforce, but none can expose telemetry in a central dashboard for control, auditing, and logging.”
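
The missing piece Huerta describes is essentially a normalization layer. The sketch below is hypothetical — the event field names are invented and do not match any real OpenAI or Salesforce telemetry schema — but it shows the adapter pattern a central dashboard would need: translate each platform’s events into one shared shape before monitoring them together:

```python
# Hypothetical adapters normalizing per-platform agent telemetry into one
# schema. Field names are illustrative, not real vendor APIs.
def from_openai_style(event: dict) -> dict:
    return {"platform": "openai", "agent": event["assistant_id"],
            "action": event["type"], "tokens": event.get("usage", 0)}

def from_salesforce_style(event: dict) -> dict:
    return {"platform": "salesforce", "agent": event["AgentName"],
            "action": event["EventType"], "tokens": event.get("TokenCount", 0)}

def central_feed(openai_events, salesforce_events):
    """Merge normalized events, heaviest token consumers first."""
    feed = [from_openai_style(e) for e in openai_events]
    feed += [from_salesforce_style(e) for e in salesforce_events]
    return sorted(feed, key=lambda e: e["tokens"], reverse=True)
```

The hard part in practice is not the merge but keeping the adapters current as each vendor’s telemetry evolves — which is why firms like Globant end up building and maintaining this layer themselves.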

For companies just starting to deploy agents, or who are sticking to a single platform, this isn’t yet an issue, he adds, but as they leverage a larger network of agents, they’ll start to experience the challenges. Globant itself is building its own internal dashboard for agentic AI, for instance.

And at Brownstein Hyatt Farber Schreck, a 50-year-old law firm with about 700 employees and clients around the US, there are several areas where AI is being deployed, including a proposal generator system.

Normally, it can take several people days to review a client’s request for proposal, go through hand-written notes or meeting transcripts, and pull together other relevant materials, says Andrew Johnson, the firm’s CIO.

“We can feed all that information into a computer and extract key criteria to produce a quality first draft in minutes,” he says.

Multiple agents are required for different parts of the process — one to extract success criteria or staffing requirements, one to look for precedents and lessons learned, and others for pricing and brand standards. “Each of those agents is autonomous and needs to be orchestrated so the outputs of each are fed into the next step,” Johnson says. For the most part, that means a RAG system, since most of the legacy platforms the firm uses have yet to incorporate an MCP layer.
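
The orchestration pattern Johnson describes — each agent’s output feeding the next step — can be sketched as a sequential pipeline over a shared context. The agent functions here are stand-ins (real ones would call models and retrieval systems), but the control flow is the point:

```python
# Minimal sequential orchestration: each "agent" is a function that reads
# the shared context and writes its output for the next step to consume.
# The string-building bodies are placeholders for real model calls.
def extract_criteria(ctx):
    ctx["criteria"] = f"criteria from: {ctx['rfp']}"
    return ctx

def find_precedents(ctx):
    ctx["precedents"] = f"precedents matching: {ctx['criteria']}"
    return ctx

def draft_pricing(ctx):
    ctx["pricing"] = f"pricing based on: {ctx['precedents']}"
    return ctx

PIPELINE = [extract_criteria, find_precedents, draft_pricing]

def run_pipeline(rfp_text: str) -> dict:
    ctx = {"rfp": rfp_text}
    for step in PIPELINE:
        ctx = step(ctx)  # output of each agent feeds the next
    return ctx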

Depending on the task, individual agents may be powered by different models, which is another layer of orchestration that needs to be managed.

Then there’s cost monitoring. If an AI agent or group of agents gets into an infinite feedback loop, the inference costs can quickly rise.

“We’re aware of the concern, though we have yet to see it manifest,” says Johnson. “So we have monitoring in place. If we exceed thresholds, we react to it.”
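
The threshold monitoring Johnson mentions can be as simple as a per-agent budget check on every inference call. This is a minimal sketch with assumed names and integer cents (to avoid floating-point drift), not a production billing system:

```python
# Minimal cost guard: track cumulative inference spend per agent and
# signal once an agent exceeds its budget, e.g. a runaway feedback loop.
class CostGuard:
    def __init__(self, budget_cents: int):
        self.budget = budget_cents
        self.spent = {}

    def charge(self, agent: str, cost_cents: int) -> bool:
        """Record a call's cost; return False once the agent is over budget."""
        self.spent[agent] = self.spent.get(agent, 0) + cost_cents
        return self.spent[agent] <= self.budget

guard = CostGuard(budget_cents=500)          # $5.00 budget
for _ in range(100):                         # simulate a loop of $0.10 calls
    if not guard.charge("rfp-drafter", 10):
        break                                # over budget: stop and alert
```

The loop halts on the 51st call, having spent $5.10 — the one call of overshoot is the price of checking after each charge rather than before.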

Regardless of strategies or measures to absorb setbacks, everything having to do with AI is changing faster than anything else companies have seen.

“I’ve been in technology for 25 years and I’ve never seen anything like this,” says EY’s Malhotra. “The fastest growing companies in the history of companies have all been created in the last three to four years. The growth in adoption is just unprecedented. And I talk to clients all the time implementing technologies that were highly relevant nine or 10 months ago, and everyone’s moved on.”

Why I, the CEO, am personally building our AI strategy

29 de Abril de 2026, 07:00

Last year, I did something that raised a few eyebrows on my leadership team: I made myself the de facto product manager for our AI strategy. Not because we don’t have talented product people — we do. But because I’m convinced that AI is one of those rare inflection points where the CEO needs to be hands-on, not just “aligned.”

I’ve been experimenting with AI tools since ChatGPT landed in late 2022. Unlike other tech hype cycles — I remember an investor asking me about our cryptocurrency strategy back in 2021, and my answer was “none” — this one felt different from the start. Over the 2024 holidays, I went deep: prototyping with Cursor, brainstorming product ideas with ChatGPT, using voice mode to dictate rough drafts while away from my desk. What I took away was a conviction that AI was no longer a “keep an eye on it” item. It was clear then that AI was a right-now, every-department, every-person transformation.

Through 2025, AI became a core part of all my workflows. Now, AI is an organizational accelerator across the board at NetBox Labs. Personally, I’m shipping more product and technology than I have in over a decade — not because I’m working harder, but because AI tooling has fundamentally changed what one person can do.

And transformations like that can’t be managed by committee.

Why AI is too important to delegate

Here’s what I’ve learned watching strategy play out across startups and enterprises: the things that get delegated early tend to get domesticated early. They get scoped into a neat workstream, assigned a quarterly OKR and slowly lose their disruptive potential. AI doesn’t fit in a box right now and forcing it into one is a mistake.

But this goes beyond product management. I’m not just PM’ing our AI strategy — I’m directly building products and features, driving internal experimentation and pushing AI into every corner of how we operate. AI tooling and capabilities are evolving too fast, in too many dimensions, to delegate. The landscape shifts week to week: new models, new capabilities, new paradigms for how humans and AI work together.

As a CEO, you need to develop your own feel for the impact of AI — what it can and can’t do today, where it’s headed, how it changes the work — and bring those learnings across teams. You can’t get that from a briefing doc.

I sit at the intersection of product, engineering, go-to-market and operations. I see the connections that individual teams can’t — where an AI experiment in marketing could reshape how we think about our product onboarding, or where an engineering prototype could unlock a whole new customer conversation. When I’m personally involved, building with these tools myself, those dots get connected faster.

This isn’t about ego or micromanagement. It’s about pattern recognition at the speed the moment demands. The companies that will win are the ones whose leadership treats AI not as a departmental tool but as a strategic capability.

Speed now matters more than perfection

If there’s one lesson the last two years have drilled into me, it’s this: the cycle time for strategy has compressed from years to months, or even weeks. The AI tools available today are meaningfully better than what existed six months ago. The tools six months from now will make today’s tools look primitive. Waiting for the “right” moment to build a comprehensive AI strategy is a guaranteed way to fall behind.

At NetBox Labs, we’ve adopted a bias toward shipping. Since 2024, we’ve shipped many AI features — some of which have gained rapid adoption, like NetBox Copilot or the NetBox MCP server, and others that haven’t resonated or were overtaken by the evolving AI landscape. That’s fine. The point is to stake the ground, learn and iterate — not sit in a conference room perfecting a roadmap that’ll be obsolete by the time you execute it. Recently, one of our customers asked us for better “best practices” content for working with NetBox’s APIs. In less than a day, we shipped a set of skills for AI agents that has seen quick interest and adoption. That kind of cycle time — customer request to shipped product in hours — is what AI enables.

This same principle applies internally. I tell my team: don’t wait until you’ve mastered a tool to start using it. Use it now, stumble, figure out what works and share what you learn. There are no “rules” for how AI fits into workflows yet. We’re all learning as we go, and the patterns will look different by this time next year.

Key takeaways: How to encourage company-wide AI adoption

If you’re a CEO or senior leader thinking about how to drive AI adoption across your organization, here’s what I’ve found works.

  • Make it personal, starting at the top. I don’t just endorse AI use — I show my work. When I build a feature using Claude Code or prototype a product idea by feeding requirements into Claude, I talk about it openly. I even presented my use cases at our recent company offsite. Leaders using AI tools visibly and vocally give everyone else permission to do the same.
  • Reframe the culture around AI. One thing I noticed early on is that people feel sheepish about using AI, like it’s cheating. We had to actively dismantle that mindset and, frankly, we’re still working on dispelling it. Using a calculator isn’t cheating at math. Our job is to get stuff done, and teams that make effective use of these tools will outperform teams that don’t. I want to hear how people are using AI and how it’s helped them succeed — not whispered confessions, but demos and stories shared in team workshops.
  • Lower the barrier to experimentation. If someone on your team finds a tool that could accelerate their work, don’t make them write a business case. Let them spend a few dollars and a few hours trying it. You can’t be wasteful and you can’t get distracted, but don’t be shy about trying new things. The cost of a missed opportunity dwarfs the cost of a failed experiment.
  • Show people where to start. “I’m just not sure how to get started” is the most common thing I hear. So I share concrete examples from my own workflow: using agents to research, brainstorm, and validate product feasibility; building detailed PRDs with Claude Code and then feeding them back in to generate implementation plans; continuing further to directly build and ship features; even analyzing data by connecting Claude Cowork with our CRM, Linear and call transcripts. These aren’t exotic use cases. They’re everyday work, done faster.
  • Carve out time and space for people to experiment. At our recent company offsite, we dedicated an entire afternoon to learning from one another in person. This helped everyone learn from “experts” within the company who could help them get started or uplevel their use case. Making time for this experimentation demonstrated to the team that it truly is a company priority — not a fad.
  • Connect it to individual growth. This is something I care about deeply beyond the business case. Most of us will have careers that extend well beyond our current roles. Being AI-native is going to be a defining professional skillset. I want NetBox Labs to be a place where everyone develops that skillset — not just because it helps our company, but because it matters for their careers.

The real risk is inaction

Companies are already splitting into two camps: those that fully embrace AI and weave it into the fabric of their business, and those that fall behind. The advantages in speed, innovation and operational efficiency are creating a gap that only widens with time.

So yes, I’m building our AI strategy — hands on the keyboard, not just in the boardroom. Not forever — eventually this will be so deeply embedded in how we operate that it won’t need a dedicated champion. But right now, in this window, it needs a CEO who’s willing to prototype on a Saturday, demo an imperfect feature on a Monday and keep pushing the whole organization to move faster than feels comfortable.

If you’re a CEO in 2026, that’s the job.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?
