Security | CIO

Your CEO just got AI FOMO. Here are 6 tips on what to do next.

May 8, 2026, 07:00

Every CIO I know has had some version of this conversation: their CEO comes back from a golf trip with their buddy, or a conference with peers, and is told AI is about to automate everything at their company, from HR to marketing and finance. No humans in the loop, just AI. The CEO then calls an all-hands Monday morning, and the CIO is suddenly on the hook to make it all happen.

The instinct for CEOs to chase unsubstantiated claims is understandable since they’re responding to competitive pressure. But that leaves CIOs responsible to close the gap between ambition and reality. Making AI work in an organization with decades of accumulated process, permission frameworks, and cultural inertia is very different from deploying it in a demo.

The best response isn’t to push back on the ambition, but to redirect it. Translate the CEO’s vision into an honest map of what has to happen for the organization to get there, including the infrastructure, governance, and training. That helps convert the knee-jerk compulsion to move faster into a concrete plan that leadership can get behind.

Here’s what CIOs should actually be focused on to get where their CEOs want them to go, regardless of what’s discussed on the links.

1. Start where AI can build its own credibility

The hype machine wants you to climb Everest on day one. Instead, identify the repetitive tasks where AI can prove itself on familiar ground — the workflows your team already knows well, where results are easy to verify and the bar for trust is attainable.

The goal is the Eureka moment when a skeptic on your team sees a real result and becomes a believer. Those moments compound. When someone has seen AI make their work easier in a context they understand, they’re more likely to help you move things forward. You can’t force that change, but you can engineer the conditions for it.

2. Models will commoditize. Context will not.

Every few months, a new model claims to be smarter, faster, and cheaper than the last one. Don’t be distracted by that race. The lasting advantage in enterprise AI doesn’t come from which model you’re running; it comes from the quality, governance, and semantic clarity of the data feeding it. Enterprises that invest in consistent business definitions, well-structured data, and clear lineage will outperform those that don’t, regardless of which model is in fashion. Context is your competitive moat. Focus on building that.

3. Nail down the permissions

In a world of dashboards, you know exactly what data will appear on a given page, so you can set permissions in advance for who can access it. In an AI world, the system can generate outputs that were never pre-designed. So how do you determine who has the right to see a result that was never anticipated?

Before deploying any agent that acts on someone’s behalf, such as filing a request, surfacing payroll data, or populating a record, first determine whether your existing permissions and access control frameworks can handle outputs that were never planned for. Most can’t. This is a prerequisite of what your CEO is asking for: the unglamorous infrastructure work that determines whether your AI is trustworthy in production. It needs to happen before you scale, not after.
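One way to picture the prerequisite: before an agent’s answer leaves the system, verify that every data field the answer was derived from is already permitted for the requester. A minimal Python sketch, with purely hypothetical role and field names (nothing here comes from a real access-control product):

```python
# Hypothetical deny-by-default check for AI-generated outputs: an answer is
# only as visible as its most restricted source field. Role names and field
# names are illustrative, not from any real system.

ROLE_PERMISSIONS = {
    "hr_manager": {"employee.name", "employee.salary", "employee.review"},
    "team_lead": {"employee.name", "employee.review"},
}

def authorize_output(role: str, source_fields: set) -> bool:
    """Allow the output only if every field it was derived from is permitted."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return source_fields <= allowed  # subset test: all inputs must be allowed

# An agent's answer draws on salary data that a team lead can't see directly:
print(authorize_output("team_lead", {"employee.name", "employee.salary"}))   # False
print(authorize_output("hr_manager", {"employee.name", "employee.salary"}))  # True
```

The subset test is the whole idea: an output synthesized from several sources inherits the tightest restriction among its inputs, which is a safe default for results that were never pre-designed.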

4. Build an editing culture, not a writing one

For decades, engineers, analysts, and operations teams have been trained to write code, build reports, and define new processes. AI upends that. The skill now is editing — auditing what the system produces, catching what it got wrong, and knowing where to push back.

The truth is most people aren’t naturally good at editing because they’ve never had to be. That’s a skills gap that needs to be closed early on. Invest in helping engineers, analysts, and managers develop the judgment to evaluate AI outputs, not just generate them. Editing must become a core enterprise competency.

5. Measure behavior change, not tool adoption

Login data is a vanity metric. If your engineers are accessing AI coding tools but aren’t changing how they build, you haven’t adopted anything. The metric that makes more sense is productivity output. In agile terms, a team that completes 20 story points per sprint should hit about 28 with AI, not because the tools are magic, but because the repetitive work gets faster. If you’re not seeing that, you’re measuring the wrong thing. Pay attention to output, not usage metrics.
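The 20-to-28 example works out to a 40% uplift. A trivial sketch of the metric, using only the article’s illustrative figures:

```python
# Measure behavior change as delivered story points per sprint, not logins.
def velocity_uplift(before: float, after: float) -> float:
    """Relative change in story points completed per sprint."""
    return (after - before) / before

# 20 points/sprint before AI, 28 after
print(f"{velocity_uplift(20, 28):.0%}")  # 40%
```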

6. Reframe your organization’s relationship with failure

The instinct to de-risk everything made sense when software deployments were expensive and slow to reverse. AI works differently. The outputs are probabilistic, the iteration cycles are fast, and being overly cautious can cost valuable time. CIOs need to give teams permission to experiment in ways that feel uncomfortable by traditional enterprise standards, all while building the feedback loops that make fast failure safe. That culture shift has to be modeled from the top.

FOMO isn’t going away

CEOs will keep getting pulled into cycles of urgency and FOMO, and that pressure will keep landing on CIOs. The organizations that make real progress will be the ones that redirect that energy into infrastructure that makes AI trustworthy, measurement systems that show what’s working, and cultural changes that make adoption stick. That’s the agenda that’ll move your organization forward.

Intel, behind in AI chips, bets on quantum and neuromorphic processors

May 6, 2026, 14:27

Intel for years chopped critical products including CPUs, GPUs and networking gear to cut corporate fat and get back into shape.

Many cuts pre-date the appointment last year of Lip-Bu Tan as CEO. Now, Tan is placing a long-term bet beyond the current crop of AI chips and doubling down on quantum processors and neuromorphic chips, which survived Intel’s earlier product cuts.

Tan has now tapped company veteran Pushkar Ranade to be Intel’s new chief technology officer, with a mission to drive developments in “quantum computing, neuromorphic computing, photonics, and novel materials,” the chipmaker announced this week.

The move is a longer-term bet, according to Dylan Patel, CEO of semiconductor research firm SemiAnalysis. “It’s a bit further out stuff he is doing, so it wouldn’t help with the next two years of products,” he said, adding that Ranade is an excellent choice for Intel’s move into future computing models.

Multiple analysts said Intel’s quantum group has been hindered by limited funding and resources and hurt by staff turnover. Former CEO Pat Gelsinger and CTO Greg Lavender departed the company last year.

Quantum uncertainty

There’s very little known about Intel’s quantum computing efforts. The company’s most recent quantum chip, Tunnel Falls, was announced back in 2023.

But there’s leadership continuity, with quantum hardware leader James Clarke and quantum systems and software leader Anne Matsuura still at the company.

“Maybe this means Lip-Bu wants to [reorient] Intel’s focus and investment in quantum computing,” said Jim McGregor, principal analyst at Tirias Research.

Intel has a solid record of success with technology moonshots, and its neuromorphic chip development is the best in the business, said Ian Cutress, chief analyst at semiconductor consulting firm More Than Moore.

“Intel’s [quantum] approach, since [former CEO Robert] Swan took over, to be honest, has been a lot less public. They would need to match — if not surpass — to develop their current quantum technologies beyond their competitors,” he said.

One of those competitors, IBM, is far ahead with its quantum efforts. The company has a quantum cloud available for rental now and a mature product plan for the next several years. IBM has “an open roadmap to 2033, which they’ve been working on since 2022, and every year they’ve been hitting their targets like clockwork,” Cutress said.

Intel’s investment arm, Intel Capital, recently invested $178 million in quantum processor company QuantWare. But an investment by Intel Capital doesn’t always mean Intel adopts a technology.

Nonetheless, enterprises should take a measured view of Intel’s pivot and “always take emerging technology talk with a grain of salt,” said Cutress. He argued that the long legacy of digital computing architecture is difficult to unseat.

“The reality is that any technology that comes from this side of R&D is going to work alongside current high-performance hardware, not replace it,” he said.

The hardware stack will likely look like a combination of CPU, GPU, and quantum computing chips in a datacenter, not just a quantum processor working on its own, Cutress said.

“IBM, Google, Microsoft have realized this and are pivoting those messages,” he said.

Quantum processors and AI supercomputers naturally complement each other, said Pranav Gokhale, co-founder and CTO of Infleqtion, which makes quantum processors. “Quantum computers can access physics that is difficult for classical machines to emulate, while GPUs provide the scale and throughput needed for control and learning.”

Intel’s spin qubit technology, which differs from IBM’s superconducting qubit approach, may be interesting, but many companies — including Quantum Motion, Silicon Quantum Computing, Photonic, and CEA-Leti — are pursuing similar approaches.

Quantum advantage

Still, Intel has a manufacturing advantage.

“Intel’s approach to CMOS spin qubits has one advantage over many other solutions — you can put millions on a wafer, and Intel has reliable manufacturing to do so,” Cutress said.

The appointment of Ranade, who has served in key manufacturing roles, to CTO is another clear sign that the foundry is Intel’s future. Former CTO Greg Lavender was seen more as a software person.

“He’s a process node guy, he knows what the process needs to work for customers, both internal and external,” Cutress said of Ranade.

Intel did not immediately respond to a request for comment on its plans.

This article first appeared on Network World.

AI is spreading decision-making, but not accountability

May 6, 2026, 07:00

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern, it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.

Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, it often points at enterprise leadership, and, in operational terms, to the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

How UKG puts AI to work for frontline employees

May 6, 2026, 07:00

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets, and our long history with the frontline workforce, positions us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. IT today is very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user owned the business context, and the developer, who owned the technology, brought that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

What is data analytics? Transforming data into better decisions

May 5, 2026, 07:00

What is data analytics?

Data analytics focuses on gleaning insights from data. It comprises the processes, tools, and techniques of data analysis and management, and its chief aim is to apply statistical analysis and technologies on data to find trends and solve problems. Data analytics has become increasingly important in the enterprise to shape business processes and improve decision-making and business results.

Data analytics draws from a range of disciplines, including computer programming, mathematics, and statistics, to perform analysis on data in an effort to describe, predict, and improve performance. To ensure robust analysis, data analytics teams leverage a range of data management techniques, including data mining, data cleansing, data transformation, data modeling, and more.

What is AI data analytics?

AI data analytics is a rapidly growing specialty within data analytics that applies AI to support, automate, and simplify data analysis. It leverages ML, natural language processing, and data mining, as well as foundation models and chat assistants, for predictive analytics, sentiment analysis, and AI-enhanced business intelligence. AI tools can be used for data collection and data preparation, while ML models can be trained to extract insights and patterns.

The four types of data analytics

Analytics breaks down broadly into four types: descriptive analytics attempts to describe what has transpired at a particular time; diagnostic analytics assesses why something has happened; predictive analytics ascertains the likelihood of something happening in the future; and prescriptive analytics provides recommended actions to take to achieve a desired outcome.

To explore these more specifically: descriptive analytics uses historical and current data from multiple sources to identify trends and patterns and describe the present state, or a specified historical state; this is the traditional purview of business intelligence (BI). Diagnostic analytics uses data, often generated via descriptive analytics, to discover the factors or reasons behind past performance. Predictive analytics applies techniques such as statistical modeling, forecasting, and ML to the output of descriptive and diagnostic analytics to make predictions about future outcomes; it is often considered a type of advanced analytics and frequently depends on ML and/or deep learning. Prescriptive analytics, another type of advanced analytics, applies testing and related techniques to recommend specific actions that will deliver desired outcomes; in business, it typically combines ML, business rules, and algorithms.

Data analytics methods and techniques

Data analysts use a number of methods and techniques to analyze data. According to Emily Stevens, managing editor at CareerFoundry, seven of the most popular include:

  1. Regression analysis: A set of statistical processes used to estimate the relationships between variables to determine how changes to one or more might affect another, like how social media spending might affect sales.
  2. Monte Carlo simulation: A mathematical technique, frequently used for risk analysis, that relies on repeated random sampling to determine the probability of various outcomes of an event that can’t otherwise be readily predicted due to degrees of uncertainty in its inputs.
  3. Factor analysis: A statistical method for taking a massive data set and reducing it to a smaller, more manageable one to uncover hidden patterns, like when analyzing customer loyalty.
  4. Cohort analysis: A form of analysis in which a dataset is broken into groups that share common characteristics, or cohorts, for analysis like understanding customer segments.
  5. Cluster analysis: A statistical method in which items are classified and organized into clusters in an effort to reveal structures in data. Insurance firms might use cluster analysis to investigate why certain locations are associated with particular insurance claims, for instance.
  6. Time series analysis: A statistical technique in which data in set time periods or intervals is analyzed to identify trends over time, such as weekly sales numbers or quarterly sales forecasting.
  7. Sentiment analysis: A technique that uses natural language processing, text analysis, computational linguistics, and other tools to understand sentiments expressed in data, such as how customers feel about a brand or product based on responses in customer forums. While the previous six methods seek to analyze quantitative or measurable data, sentiment analysis seeks to interpret and classify qualitative data by organizing it all into themes.
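To make one of these methods concrete, here is a minimal Monte Carlo simulation in Python, estimating the probability that a project overruns its deadline; the project phases and their duration distributions are hypothetical assumptions.

```python
import random

random.seed(42)  # reproducible runs

def simulate_project(trials=100_000):
    """Estimate the probability a two-task project exceeds a 30-day deadline.

    Task durations are hypothetical: each is modeled as a normal
    distribution (mean, standard deviation in days).
    """
    over_deadline = 0
    for _ in range(trials):
        design = random.gauss(12, 3)   # design phase: about 12 days, sd 3
        build = random.gauss(15, 4)    # build phase: about 15 days, sd 4
        if design + build > 30:
            over_deadline += 1
    return over_deadline / trials

print(f"P(project > 30 days) is roughly {simulate_project():.2%}")
```

The repeated random sampling is the essence of the method: rather than solving the probability analytically, we draw many scenarios and count how often the outcome of interest occurs.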

Data analytics tools

Data analysts use a range of tools to help them surface insights from data. Some of the most popular include: 

  • Apache Spark: An open source data science platform to process big data and create cluster computing engines. 
  • AskEnola AI: A conversational analytics tool for business users.
  • Data analysis with ChatGPT: OpenAI’s chatbot can generate code to perform data analysis, transformation, and visualization tasks using Python.
  • dbt: An open source analytics engineering tool for data analysts and engineers.
  • Domo Analytics: A BI SaaS platform to gather and transform data.  
  • Excel: Microsoft’s spreadsheet software for mathematical analysis and tabular reporting. 
  • Julius AI: An AI assistant to analyze spreadsheets and databases.
  • Knime: A free and open source data cleaning and analysis tool for data mining.
  • Looker: Google’s data analytics and BI platform. 
  • MySQL: An open source relational database management system to store application data used in data mining.
  • Observable: A data analysis platform with AI tools for exploratory data analysis and data visualization.
  • Orange: A data mining tool ideal for smaller projects.
  • Power BI: Microsoft’s data visualization and analysis tool to create and distribute reports and dashboards. 
  • Python: An open source programming language popular among data scientists to extract, summarize, and visualize data. 
  • Qlik: A suite of tools to explore data and create data visualizations. 
  • R: An open source data analytics tool for statistical analysis and graphical modeling. 
  • RapidMiner: A data science platform that includes a visual workflow designer. 
  • SAS: An analytics platform for business intelligence and data mining. 
  • Sisense: A popular self-service BI platform. 
  • Tableau: Data analysis software from Salesforce to create data dashboards and visualizations.

Data analytics vs. data science

Data analytics is a component of data science used to understand what an organization’s data looks like. Generally, the output of data analytics is reports and visualizations, which data science then uses to study and solve problems. The difference between data analytics and data science is often one of timescale: data analytics describes the current or historical state of reality, whereas data science uses that data to predict and/or understand the future.

Data analytics vs. data analysis

While the terms data analytics and data analysis are frequently used interchangeably, data analysis is a subset of data analytics concerned with examining, cleansing, transforming, and modeling data to derive conclusions. Data analytics includes the tools and techniques used to perform data analysis.

Data analytics vs. business analytics

Business analytics is another subset of data analytics. It uses data analytics techniques, including data mining, statistical analysis, and predictive modeling, to drive better business decisions. Gartner defines business analytics as solutions used to build analysis models and simulations to create scenarios, understand realities, and predict future states.

Data analytics examples

Organizations across all industries leverage data analytics to improve operations, increase revenue, and facilitate digital transformations. Here are three examples:

UPS transforms air cargo operations with data, AI: UPS’s Gateway Technology Automation Platform (GTAP) uses AI and digital asset tracking to reduce costs, improve on-time performance, and enhance operational safety at its Worldport air hub.

NFL leverages AI and predictive analytics to reduce injuries: The NFL’s Digital Athlete platform leverages AI and ML to run millions of simulations of in-game scenarios, using video and player tracking data to identify the highest risk of injury during plays, and develop individualized injury prevention courses.

Fresenius Medical Care anticipates complications with predictive analytics: Fresenius Medical Care, which specializes in providing kidney dialysis services, is pioneering the use of a combination of near real-time IoT data and clinical data to predict when kidney dialysis patients might suffer a potentially life-threatening complication called intradialytic hypotension (IDH).

Data analytics salaries

According to data from PayScale, the average annual salary for a data analyst is $70,384, with a reported range from $51,000 to $95,000. Salary data on similar positions include:

JOB TITLE                SALARY RANGE             AVERAGE SALARY
Analytics manager        $79,000 to $140,000      $110,581
Business analyst, IT     $58,000 to $114,000      $80,610
Data scientist           $73,000 to $145,000      $103,441
Quantitative analyst     $74,000 to $161,000      $109,421
Senior business analyst  $72,000 to $127,000      $95,484
Statistician             $61,000 to $139,000      $97,082

PayScale also identifies cities where data analysts earn salaries that are higher than the national average. These include San Francisco (24.2%), Seattle (10.2%), and New York (9.5%).


The rise of the double agent CIO

May 4, 2026, 07:00

CIOs of B2B SaaS companies are as responsible for representing technology as they are for running it. In an environment where the buyer is often another CIO, however, the role becomes something fundamentally different. It’s no longer confined to internal execution. It extends into the market, customer conversations, and the moments that ultimately shape revenue, trust, and long-term relationships. So the modern SaaS CIO operates as a true double agent, running the business from within while representing it to the market.

Box CIO Ravi Malick sits squarely in that duality. After serving as CIO of Vistra Energy, a company defined by legacy systems and industrial scale, he stepped into a digitally native, founder-led SaaS business in 2021 where technology is inseparable from the business itself. He now leads internal tech while engaging directly with CIOs of companies evaluating Box, bringing a perspective shaped by both worlds. What stands out in Malick’s perspective isn’t how different the role is, but how much more expansive it’s become.

What stays the same, what evolves

The core tension of the CIO role hasn’t changed. “There’s always more demand than you have the capacity or funding for,” Malick says. Prioritization, alignment to business strategy, and the constant need to modernize while operating at scale still define the job. The difference, however, is the environment in which those challenges now exist.

At Box, Malick operates inside a workforce where technology fluency is high and expectations are even higher. “I partner with 3,000 technologists who love to solve problems with technology,” he says. That creates a powerful advantage, but also a new kind of pressure. Demand for tools, platforms, and innovation is constant, and AI has only accelerated it.

That dynamic is further shaped by Box’s leadership. As a founder-led company, technology conversations extend well beyond the CIO’s organization. “It’s a different dynamic when your CEO is a founder and a technologist,” Malick says. “You’re as much a steward of incoming ideas as you are a generator of them.” That relationship creates both pace and perspective, requiring the CIO to operate as both orchestrator and partner in shaping how technology evolves across the business.

In that context, the CIO is leading within a highly informed, highly engaged organization where expectations for speed and innovation are constant. The challenge isn’t modernization as a one-time effort, but ensuring the tech stack continuously evolves and scales with the business.

Balancing the internal mandate with external pull

What truly differentiates the role in SaaS is what happens outside the enterprise, and the pressure that comes with it. The CIO is still accountable for running IT, ensuring security, and maintaining operational excellence. At the same time, there’s growing expectation to show up externally, engage customers, and directly support revenue.

Malick doesn’t present that balance as seamless. “It’s a daily challenge,” he says. “But sometimes not balanced so well.” There’s a constant push and pull between internal priorities and external demands, and in many cases, revenue pulls hard. The opportunity to influence deals, build relationships, and contribute to growth elevates the strategic importance of the role, but it doesn’t remove the responsibility for the day job.

What allows Malick to operate effectively in both worlds is the strength of the foundation behind him. He points to the maturity of his leadership team, operating model, and internal processes as critical enablers. With clear structures, strong leaders, and disciplined execution in place, he has the bandwidth to spend meaningful time externally. It isn’t always a perfect balance, but it’s a deliberate one.

From operator to peer in the market

Through Box’s customer zero program, Box on Box, Malick operates as both CIO and practitioner, bringing firsthand experience into customer conversations. “I can take how we build at Box to customer conversations,” he says. That perspective shifts the dialogue away from product positioning, and toward the realities of execution.

In a market where CIOs are constantly being pitched, that distinction carries weight. “They want to know how it works from the perspective of someone managing it,” he says, adding he leans into that by being transparent about both successes and missteps. “We share the challenges and false starts we’ve managed through.”

That candor builds credibility, and credibility builds trust. After all, people buy from people they trust, and in enterprise technology, says Malick, peer-to-peer conversations are a faster path to trust than demos. 

The external dimension of the role also holds a symbiotic relationship with internal responsibilities. Malick brings customer conversations back into Box, using them to inform how he thinks about technology decisions and broader strategy. He describes the CIO community as uniquely open, even therapeutic, where leaders candidly share challenges and exchange ideas. That openness creates a feedback loop where external insights sharpen internal execution, and internal experience strengthens external credibility.

What this means for the CIO role

What makes Malick’s perspective especially relevant is that the lesson isn’t limited to SaaS. As technology becomes more central to growth, customer experience, and business model change, CIOs in every industry are being pulled closer to the front office. The shift is about becoming more fluent in how technology translates into trust, speed, and commercial impact, not just becoming more visible.

For Malick, one of the biggest lessons is that the role now demands a different kind of leadership than many CIOs were originally trained for. “Don’t make assumptions, and don’t assume something’s easy or intuitive,” Malick says. In a world where technology is reshaping how people work in real time, communication becomes a strategic discipline. CIOs have to explain change, absorb feedback, and keep translating between technical possibility and business reality.

The rise of AI adds another dimension to the double agent role. CIOs are building the content foundation that AI needs to be effective, and ensuring the organization can experiment with AI without sacrificing compliance or control. In a fast-paced technology company, ideas, opinions, and new technologies come from every direction. So the CIO isn’t simply the expert with the answers but often the one managing velocity itself, deciding where to push and where to hold.

“You have to figure out when you need to be in the fast lane and when you don’t,” Malick says. That kind of judgment is becoming more critical as technology moves to the center of the business, and it’s another reason CIOs are stepping into CEO and COO roles.

As AI accelerates the pace of change and creates the potential to decouple revenue growth from headcount growth, that ability to manage speed, scale, and tradeoffs becomes a defining leadership capability. That’s why the SaaS CIO should matter to leaders far beyond software. With AI transforming every industry, the role is becoming a preview of where the profession is headed — not just to run technology, but help shape how the company grows, how it shows up in the market, and how it earns trust. The double agent CIO may sound like a SaaS phenomenon. Increasingly, though, it looks more like the future of the job.


The era of 28 million AI agents worldwide: enterprise competitiveness hinges on infrastructure

May 4, 2026, 05:14

According to IDC, more than 28 million AI agents had been deployed as of the end of last year, and by 2029 more than 1 billion are expected to be running in production environments, performing 217 billion tasks per day.

“Building an AI agent proof of concept (PoC) is easy,” says Venkat Achanta, chief technology, data, and analytics officer at TransUnion, the $4.6 billion global credit reporting company. “But governing it, securing it, and scaling it is an entirely different challenge.” The difficulty is even greater, he explains, in tightly regulated industries such as financial services and healthcare.

To solve this problem, TransUnion spent the past three years building OneTru, its agentic AI platform. The goal was an environment as reliable and predictable as a traditional rule-based expert system, yet as flexible as generative AI and as easy to use as a chatbot.

The key was combining the strengths of both approaches: traditional systems handle core workloads where explainability and stability matter, while generative AI is applied only to narrowly specialized tasks. Because no infrastructure on the market could support this, TransUnion invested roughly $145 million to build its own.

It was a major bet on unproven technology, but it has already yielded about $200 million in cost savings, and the company has gone on to build customer-facing solutions on the platform.

Most notably, in March TransUnion unveiled its AI analytics orchestrator agent, built on the OneTru platform using Google’s Gemini models. The agent is used to improve internal analytics efficiency, and it also lets customers perform advanced data analysis without data scientists.

“Many customers use TransUnion data but don’t use our other solutions or platforms,” Achanta says. “The orchestrator agent can raise the value of that data and potentially create new revenue streams.”

Additional agents are now in development. “What determines an agent’s performance is the orchestration, governance, and security layers,” Achanta stresses. “Simply building an agent takes a few days; the foundation and controls that let you operate it reliably are the real competitive advantage.” He adds, “Agents on the platform are designed to use all of the guardrails and foundations, and that’s our strength.”

A core strategy for keeping AI agents under control is to separate work into multiple layers and assign each layer to a different system. Each system operates under defined constraints, which limits the blast radius of any individual agent and builds checks and balances into the overall system. Higher-risk tasks are also assigned to pre-generative-AI technology to reduce risk.

At TransUnion, core decisions are handled by an upgraded expert system that operates on clearly defined, auditable rules and is predictable, cost-efficient, and low latency. When a novel situation arises, an LLM analyzes it, another agent converts the analysis into a new rule, and a human reviews it before it is added to the expert system. Other agents fill various roles, such as interpreting the semantic layer and interacting with humans.

“We put humans in the loop for the LLM, the neural reasoning layer, and automate the symbolic reasoning layer built on logic and machine learning,” Achanta explains.

When each agent operates under strict constraints, within limited data and a limited role, the overall system becomes far more controllable and reliable.
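The layered pattern described above can be sketched roughly as follows; all of the rule names and functions here are illustrative stand-ins, not TransUnion's actual OneTru API.

```python
# Schematic sketch of a layered agentic architecture: an auditable rule
# engine decides known cases; anything novel is escalated for LLM analysis
# and human review before it can become a new rule.
# All names here are illustrative, not a real platform's API.

RULES = {  # deterministic, auditable decision rules (symbolic layer)
    "credit_limit_increase": lambda req: req["score"] >= 700,
    "address_change": lambda req: req["verified"],
}

REVIEW_QUEUE = []  # proposed rules awaiting human sign-off

def draft_rule_with_llm(request_type, request):
    # Stand-in for an LLM call that proposes a candidate rule.
    return {"type": request_type, "draft": f"rule for {request_type}", "approved": False}

def decide(request_type, request):
    rule = RULES.get(request_type)
    if rule is not None:
        # Symbolic layer: fast, predictable, fully automated.
        return {"decision": rule(request), "layer": "expert_system"}
    # Neural layer: novel situation. Draft a rule, but keep a human in the loop.
    REVIEW_QUEUE.append(draft_rule_with_llm(request_type, request))
    return {"decision": None, "layer": "escalated_for_review"}

print(decide("credit_limit_increase", {"score": 720}))  # handled by rules
print(decide("dispute_resolution", {"amount": 500}))    # escalated for review
```

Known request types never touch the LLM, which keeps the high-volume path cheap, low latency, and auditable.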

It’s the difference between a production line, where workers divide the tasks, and a workshop where a single artisan does everything. The production line is faster and more consistent, yet many companies still run AI agents like artisans. That approach can produce creative results, but it isn’t always the right choice in an enterprise setting.

Nicholas Mattei, a professor at Tulane University and chair of the ACM Special Interest Group on AI, advises hardening security at the junctions between agent systems.

“You have to secure every point where systems connect,” he says. “If an agent sends a request to an email service, for example, you need a verification checkpoint between the two systems.” He adds, “The boundary where a hard-to-trust agent meets existing software is exactly where security controls should be concentrated.”
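A checkpoint of the kind Mattei describes might look something like this minimal sketch; the policy values (allowlisted domain, per-run send limit) are illustrative assumptions.

```python
# A validation checkpoint sitting between an agent and an email service:
# every proposed action is checked against policy before it is forwarded.
# The policy values below are illustrative assumptions.

ALLOWED_DOMAIN = "example.com"
MAX_SENDS_PER_RUN = 5

def checkpoint(action):
    """Return (allowed, reason) for a proposed agent action."""
    if action["type"] != "send_email":
        return False, "action type not permitted at this boundary"
    if not action["to"].endswith("@" + ALLOWED_DOMAIN):
        return False, "recipient outside allowlisted domain"
    if action["count_this_run"] >= MAX_SENDS_PER_RUN:
        return False, "per-run send limit reached"
    return True, "ok"

ok, why = checkpoint({"type": "send_email", "to": "ops@example.com", "count_this_run": 0})
blocked, reason = checkpoint({"type": "delete_inbox", "to": "", "count_this_run": 0})
print(ok, why)
print(blocked, reason)
```

The point is that the checkpoint, not the agent, owns the policy: even a compromised or confused agent can only request actions, never execute them directly.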

Building a security foundation for agentic AI

According to a survey of 1,500 IT leaders published in March by automation vendor Jitterbit, “AI accountability” ranked as the most important factor in final AI adoption decisions. The concept covers security, auditability, traceability, and guardrails, and it outranked implementation speed, vendor reputation, and even total cost of ownership (TCO). Security, governance, and data privacy risks also topped cost and integration issues as the main obstacles to moving AI projects into production. These concerns are well founded.

Earlier this year, researchers at cybersecurity firm CodeWall succeeded in breaching Lilli, McKinsey’s new AI platform. Using their own AI tools, the researchers said they were able to access 47 million chat messages, 728,000 files, 384,000 AI assistants, 94,000 workspaces, 217,000 agent messages, roughly 4 million RAG document fragments, and 95 system prompts and AI model configurations.

“Decades of McKinsey’s proprietary research, frameworks, and methodologies sat exposed in a database anyone could read,” the researchers noted. “The firm’s core knowledge assets were effectively unprotected.”

The cause was simple: of more than 200 public API endpoints, 22 were open with no authentication. In just two hours, the researchers gained read and write access to Lilli’s entire production database. McKinsey responded immediately, shutting down the unauthenticated endpoints and implementing additional security measures.

In an official statement, McKinsey said, “An investigation conducted with an external forensics firm found no evidence that the researchers or any other unauthorized third party actually accessed client data or confidential information.”

IDC called the incident a case study in how devastating a breach of an AI system can be for an enterprise.

“Most companies still view AI risk through the traditional lens of data leaks, faulty outputs, and brand damage,” says Alessandro Perilli, research vice president for AI at IDC. “Those matter, of course, but the greater danger lies in delegating decision-making authority to AI systems.”

An attacker who gains access to an agentic AI platform can do more than read unauthorized information; they can covertly change how the company itself behaves. And securing enterprise-grade agentic AI systems like Lilli is only half the challenge. According to Gartner, 69% of organizations suspect employees are using prohibited AI tools, and as a result 40% of organizations are expected to suffer a security or compliance incident by 2030.

Gartner also notes that current detection tools alone cannot adequately identify AI agents.

“If you ask how many agents are running inside your company right now, where would you even look?” says Swaminathan Chandrasekaran, global head of AI and data labs at KPMG, which currently operates thousands of AI agents. “The infrastructure to confirm they’ve all been onboarded and given identities, properly authenticated, and assigned an owner doesn’t exist yet.”

“The tools are only just emerging, or companies are building them themselves,” he adds. “It’s those systems that will give CIOs peace of mind.”

There are already public examples of individual employees adopting powerful agentic AI with unfortunate results. Summer Yue, alignment director at Meta, recently decided to use OpenClaw, an open source agentic AI tool, to manage her email. After it worked correctly in a test environment, she applied it to her real inbox.

“Even though I’d configured it to confirm before acting, I was horrified to watch it delete my inbox in an instant,” she wrote on X in February. “I couldn’t stop it from my phone, so I had to sprint to my Mac mini like I was defusing a bomb.”

In the past, the risk stopped at employees pasting sensitive information into a chatbot, or having it draft a report and copying the result out. But as chatbots have evolved into fully agentic systems, an agent can now perform any action within its user’s permissions, up to and including reaching into corporate systems.

Rakesh Malhotra, who leads digital and emerging technology at EY, stresses that managing these new security risks requires companies to move beyond role-based and identity-based controls to “intent-based controls.”

“It isn’t enough to check whether an agent has permission to access a system and change data,” he explains. “You have to be able to verify why it’s making that change.”

“Today’s observability systems don’t capture an agent’s intent,” he notes. “Trust comes from intent, but there’s no way to measure it.”

“If a person wanted to refactor an entire codebase, they’d have to explain why,” he adds. “You shouldn’t do that kind of work without a clear reason. With people, we have ways to judge that; with agents, that machinery doesn’t exist yet.”
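One way to picture intent-based control is as a policy check that compares a declared intent against the action's actual scope; the intents and limits below are hypothetical examples, not an EY framework.

```python
# Sketch of intent-based control: every agent action must carry a declared
# intent, and a policy checks that the action is proportionate to that
# intent before it runs. Intents and scope limits here are hypothetical.

INTENT_POLICIES = {
    "fix_failing_test": {"max_files_changed": 5},
    "refactor_codebase": {"max_files_changed": 500, "requires_human_approval": True},
}

def authorize(action):
    """Return (allowed, reason) for an action annotated with its intent."""
    policy = INTENT_POLICIES.get(action["intent"])
    if policy is None:
        return False, "undeclared or unknown intent"
    if action["files_changed"] > policy["max_files_changed"]:
        return False, "action disproportionate to declared intent"
    if policy.get("requires_human_approval") and not action.get("approved_by"):
        return False, "high-impact intent requires human approval"
    return True, "authorized"

print(authorize({"intent": "fix_failing_test", "files_changed": 2}))
print(authorize({"intent": "refactor_codebase", "files_changed": 300}))
```

Unlike a pure permission check, the question asked here is not "may this agent touch these files?" but "does the scale of the change match the reason it gave?"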

Building a semantic data foundation for agentic AI

TransUnion’s Achanta repeatedly stresses the importance of the “semantic foundation” in the OneTru platform, a structure that helps systems understand not just what data is, but what it means and how it relates to other data. Gartner says building a semantic layer is now an essential task for any company adopting AI.

“A semantic layer is the only way to improve accuracy, manage cost, and dramatically reduce AI debt, while aligning multi-agent systems and preventing costly inconsistencies before they happen,” Gartner says.

Gartner also predicts that by 2030, a universal semantic layer will take its place as core infrastructure alongside data platforms and cybersecurity. “For agents to do meaningful work with data, context is essential,” says KPMG’s Chandrasekaran. “That’s where the enterprise’s knowledge lives.”

“This is the company’s new intellectual property,” he adds. “Context is the new competitive advantage.”

John Arsneault, CIO of US law firm Goulston & Storrs, says a solid data foundation is also a way to avoid vendor lock-in.

“If you pour your data into a particular solution for workflow automation or agentic support, it becomes very hard to get it back out,” he says. “Take a data-centric approach instead, and you can move flexibly to other solutions as the market changes.”

The firm has migrated client work data into NetDocuments, a legal-specific document management system, and stores its other data in Entegrata’s legal data lakehouse.

“Ultimately, the goal is for every application to connect around this data lake,” Arsneault explains. “Then all of the firm’s data is consolidated into two environments, and we can freely apply any AI tool on top.”

“Managing data flows also becomes much easier, and we can respond quickly to whatever AI technology comes next,” he adds. “Whether it’s generative AI, agentic AI, or something built on Anthropic, the pace of change is too fast to keep up with. The landscape really does shift every six months.”

Agent orchestration

With security guardrails in place and a usable data layer established, the last piece of the agent infrastructure puzzle is orchestration. Agentic AI systems require agents to interact with one another, collaborate with human users, and connect to a wide range of data sources and tools. It’s a highly complex problem, and while the technology is advancing quickly, it remains early stage. The Model Context Protocol (MCP) is regarded as one of the key building blocks for solving the orchestration problem, and AI vendors have been notably cooperative in the space.

“In the early days of social networks, when Facebook and Twitter discussed interaction standards, no company wanted to adopt a competitor’s protocol,” says Agustin Huerta, SVP of digital innovation and VP of technology at digital transformation firm Globant. “Now everyone is advancing the standard around MCP.”

But agent integration is far from solved. In a Docker survey of more than 800 IT decision-makers and developers, the operational complexity of coordinating multiple components emerged as the biggest challenge in building agents.

Specifically, 37% of respondents said orchestration frameworks are still too unstable or immature for production use, and 30% cited a lack of testing and observability in complex orchestration environments.

And although 85% of teams are aware of MCP, security, configuration, and management issues still block its adoption in production. Beyond that, enterprises face no shortage of other integration challenges.

“One still-unsolved problem is a dashboard that provides unified control and visibility across all agents,” Huerta points out. “There are tools to monitor OpenAI-based agents and tools to manage Salesforce-based agents, but no solution brings the telemetry for control, audit, and logging together into one central dashboard.”

“It’s not a big problem when you operate agents on a single platform or are early in adoption,” he adds, “but as your agent network expands, those limits start to show in earnest.” Globant is, in fact, building its own unified agentic AI dashboard.

Meanwhile, Brownstein Hyatt Farber Schreck, a roughly 700-person law firm with clients across the US, is applying AI in areas including its proposal generation system.

“It used to take days to review a client’s request for proposal (RFP), analyze handwritten notes and meeting records, and assemble the relevant material,” says CIO Andrew Johnson. “Now we can feed all of that information into the system, extract the key criteria, and generate a high-quality draft in minutes.”

Multiple agents collaborate in that process: one extracts success criteria and staffing requirements, another analyzes past matters and lessons learned, and another handles pricing and brand standards. “Each agent operates independently, but orchestration is essential so that each one’s output feeds the next step,” Johnson explains. Because most existing systems lack an MCP layer, the firm currently relies on a RAG-based architecture.
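A sequential pipeline like the one Johnson describes can be sketched as follows; the agent functions are stand-ins for LLM/RAG calls, not the firm's actual system.

```python
# Sketch of sequential agent orchestration: independent agents whose
# outputs feed the next step. Each function is a stand-in for an LLM or
# RAG call; names and fields are illustrative.

def extract_criteria(rfp_text):
    # Agent 1: pull success criteria and staffing requirements from the RFP.
    return {"criteria": ["success metrics", "staffing"], "source_len": len(rfp_text)}

def analyze_past_matters(dossier):
    # Agent 2: enrich with precedents and lessons learned.
    return {**dossier, "precedents": ["matter-2023-14", "matter-2024-07"]}

def apply_pricing_and_brand(dossier):
    # Agent 3: apply pricing schedules and brand standards.
    return {**dossier, "pricing": "standard schedule", "brand_checked": True}

def orchestrate(rfp_text):
    """Run each agent in order, passing one agent's output to the next."""
    result = extract_criteria(rfp_text)
    result = analyze_past_matters(result)
    result = apply_pricing_and_brand(result)
    return result

draft = orchestrate("Client RFP: litigation support, three-year term...")
print(draft["brand_checked"], len(draft["precedents"]))
```

The orchestrator owns the sequencing and the data contract between steps, which is exactly the piece that is missing when each agent is run by hand.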

Different AI models are also used for different tasks, which adds yet another orchestration element to manage.

Cost control is another key issue, because an AI agent that falls into an infinite feedback loop can send inference costs soaring.

“We’re aware of the possibility,” Johnson says. “It hasn’t actually happened yet, but we’ve built monitoring so that we can respond immediately if a threshold is exceeded.”
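A simple version of that kind of threshold monitoring might look like this; the per-task budget and per-call cost figures are illustrative assumptions.

```python
# Sketch of a cost guardrail: track per-task inference spend and halt an
# agent that exceeds a threshold, e.g. one stuck in a feedback loop.
# The dollar figures are illustrative assumptions.

class CostMonitor:
    def __init__(self, budget_per_task=2.00):
        self.budget = budget_per_task
        self.spent = 0.0

    def record(self, call_cost):
        """Record one model call; return False when the task must stop."""
        self.spent += call_cost
        return self.spent <= self.budget

monitor = CostMonitor(budget_per_task=2.00)
calls = 0
# Simulate an agent loop that would otherwise never terminate.
while monitor.record(0.25):
    calls += 1
print(f"halted after {calls} calls, ${monitor.spent:.2f} spent")
```

In a real deployment the same check would sit in the orchestration layer, alerting an operator or killing the task rather than just breaking a loop.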

Even with all of these strategies in place, AI is changing faster than any technology enterprises have ever dealt with.

“I’ve been in the technology industry for 25 years, and I’ve never seen change like this,” says EY’s Malhotra. “The fastest-growing companies in history emerged in the past three to four years, and the pace of technology adoption is unprecedented.” He adds, “Plenty of technologies that were central just nine or ten months ago have already come and gone.”
dl-ciokorea@foundryco.com


The cloud migration fulfilling FC Bayern Munich’s AI ambitions

May 1, 2026, 07:00

Management of Germany’s record football champions aims to optimize processes and provide new digital services using AI. Here, CIO Michael Fichtner discusses what the club’s IT department has implemented, and the advantages it will bring to the company internally, and to fans around the world.

Why did FC Bayern migrate to SAP Cloud ERP Private?

Migrating to the cloud gives us access to innovation and other developments. Some SAP services are only available in the cloud environment, so these are now accessible to us. An important aspect was the simplified integration of other technologies or services predominantly or exclusively provided as cloud services.

Another important aspect was the realignment within IT. The migration allows us to focus more on process, application, and business innovation, and therefore on topics that’ll further develop and future-proof our company.

The use of highly available cloud infrastructures also provides us with additional security since in critical situations, we’ll benefit from professional backup and disaster recovery strategies. With all the dedication our employees have shown so far, this will be a further step toward professionalizing operations and further reducing risks.

In addition to security, scalability and flexibility are always important to us. Computing power, storage, and network resources can be scaled more quickly with a cloud provider. This is particularly significant in the frequent peak situations of our business model. For our projects, new systems like sandbox, test, and POC systems can be deployed faster and in a more standardized way, without requiring any investment or new equipment. Plus, security and compliance are becoming increasingly important for us. So migration allows us to leverage our partner’s established security features, and centrally managed access and authorization concepts simplify our operations. Certified data centers also directly support us to meet regulatory, association, and official requirements.

SAP’s strategy is moving consistently toward the cloud, and migrating eliminated the risk of eventually being stuck on outdated on-premise technology. Migration also allowed us to retire legacy tech and upgrade to modern, high-performance hardware.

How many applications or systems have been migrated to the cloud?

We migrated our multi-tiered SAP S/4HANA system. But before the migration, we worked together to consolidate our system landscape, merging 52 systems carrying fan data into S/4. There, the central fan database was established, the Golden Fan Record was built, and the data was combined into a redundancy-free, 360-degree view. This approach was a significant milestone in implementing our sovereign cloud strategy.

So we’ve only migrated one system physically, but in abstract terms, our phased approach allowed us to migrate data from all 52 systems to the cloud through consolidation, thus taking a big step toward controlled and consistent data sovereignty.

Which digital innovations does FC Bayern want to implement with the cloud?

Our business model is heavily influenced by peak situations like knockout phases in sporting competitions, live broadcasts, and special sales activities. In these situations, we need to not only scale technically, but provide innovative process solutions that reliably support peak loads.

Consider the short timeframes of ticket requests that must be processed during knockout stages. Or the launch of jerseys, where fans, even during peak periods, have the right to expect that goods will be delivered as quickly as possible. So in departments experiencing significant annual peaks in volume, it’s crucial employees receive highly automated support. Handling these seasonal peaks would otherwise be impossible.

We rely heavily on solutions supported by AI and digital agents, so developing them is always a joint initiative with our specialist departments.

What digital services and personalization strategies is FC Bayern planning to use to reach fans worldwide with the help of the new cloud platform?

Our aim is to address our fans in an individual, personalized way. The way forward is to move away from mass communication and large target groups or segments, and toward a personal approach, specifically tailored to the needs of each fan.

For this, we need the relevant data and ability to process large amounts of data in compliance with data protection regulations. This isn’t feasible without the appropriate infrastructure and scalability. We see personalized communication as a crucial element to remain relevant to our fans in the future. Mass mailings to fans via email, push notifications, or standardized content without specific relevance to the individual fan won’t help us remain attractive to them.

By providing targeted, relevant content, we want to further increase the attractiveness of FC Bayern Munich, and ensure the relationship with fans for the future.

What advantages do you expect from SAP Cloud ERP Private and AI?

A crucial factor in our decision to migrate was the conviction that we could significantly optimize our internal processes by using AI approaches. Specifically, we’re working on corresponding implementations in HR using SAP’s SuccessFactors and Concur. Initial approaches have also been developed and are being put into practice in logistics and financial accounting. We expect this will allow us to increasingly automate more activities, freeing up colleagues in specialist departments to focus on specific tasks that require a particular approach or interaction. Ultimately, this will enable us to provide better service to fans as we gain time to address other issues.

What role did digital sovereignty or data sovereignty play in the decision to migrate to the SAP cloud?

Digital sovereignty, and control over our data and the data of our fans, have been of paramount importance for many years, and have guided our actions for just as long. Driven by this principle, we’ve developed and operated our key applications ourselves.

With the capabilities our partners have made available to us, we could implement these requirements in a sovereign cloud environment without compromising standards. So we’re confident we’ve not created any dependencies and will remain operational in the years to come. We’re convinced that the de facto and legal control of our critical data is sustainably ensured in our chosen setup.

How NOV is moving from FOMO to calculated scaling

30 April 2026, 07:00

For decades, the industrial sector has operated on the simple mantra to live by automation, die by automation. In the oil and gas industry, where precision is measured in millimeters and safety in lives, automation is a necessity, not just a nice-to-have. But as gen AI sweeps through the enterprise, a new challenge has emerged in how a global leader in energy services should transition from experimental chatbots to industrial-grade AI without compromising safety or security.

Here, Alex Philips, CIO of NOV, formerly National Oilwell Varco, discusses implementing OpenAI and securing it with zero trust for 25,000 employees, and why the next phase of agentic AI requires a fundamental shift in how to view human expertise and digital safeguards.

From FOMO to ROI

Like many global companies, NOV’s initial move into gen AI was driven by executive pressure fueled by fear of missing out. Philips remembers the early talks with his CEO about the investment.

“I said we have this opportunity, and it costs this much,” he says. “He asked about the ROI and I replied that’s something I couldn’t calculate, nor what it’d replace or what it’d displace in cost, but I couldn’t say any of that for email either.”

Just as no modern business can function without email, even without a direct line-item ROI, Philips argues that LLMs will soon become the standard for employee productivity. Currently, NOV reports about 50% of its workforce actively use the tool to enhance productivity.

The results, though qualitative, are profound. Philips says that response times for urgent customer requests, for instance, have plummeted, language barriers are crumbling, and employees are tackling complex analyses once considered out of reach.

The six-month validation lesson

One example Philips details involves an engineer who spent six months mastering a highly specialized skill. With ChatGPT, the engineer was able to replicate that six-month learning process in just 10 minutes.

And while the engineer's initial reaction was to think he'd wasted six months of his life, Philips's response was to point out that those six months were what enabled him to validate what the AI told him. "This is a great example of why humans are still needed in the AI loop," says Philips. "AI execution without human validation can lead to errors that cost companies significant time and money."

This underscores the crucial pillar of NOV's AI strategy, human accountability, because in an industrial setting, "the AI said so" is never an acceptable excuse. Whether designing a drill bit or automating a workflow, the end user remains responsible for the output.

Securing the Wild West of shadow AI

As AI becomes more widespread, shadow AI poses a significant security risk. To address this, NOV uses Zscaler to route all traffic, and ensure visibility and control. And by doing so, the company can:

  • Redirect users: If an employee tries to use a non-approved LLM, they’re redirected to a page that explains NOV’s policy, and directed to the approved enterprise OpenAI instance.
  • Monitor SaaS evolution: Many authorized SaaS applications are now adding agentic features during contract periods. Zscaler provides the visibility needed to identify these changes before sensitive IP is fed into an unvetted model.
  • Enforce data privacy: Preventing intellectual property from leaking into public training sets is the first step in any industrial AI deployment.
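The routing policy described in the list above can be sketched as a simple egress check. The host names and policy URL below are invented placeholders for illustration, not NOV's or Zscaler's actual configuration or API:

```python
# Hypothetical egress check in the spirit of NOV's setup: traffic to a
# known but unapproved LLM is redirected to a policy page that points
# users at the approved enterprise instance. All names are illustrative.
APPROVED_LLM_HOSTS = {"enterprise-openai.example.com"}
KNOWN_LLM_HOSTS = {"chat.example-llm.com", "enterprise-openai.example.com"}
POLICY_PAGE = "https://intranet.example.com/ai-policy"

def route_request(host: str) -> str:
    """Return the destination an outbound request is routed to."""
    if host in KNOWN_LLM_HOSTS and host not in APPROVED_LLM_HOSTS:
        # Unapproved LLM: redirect to the page explaining NOV's policy.
        return POLICY_PAGE
    return f"https://{host}"
```

In a real deployment the interesting work is keeping `KNOWN_LLM_HOSTS` current as SaaS tools quietly add agentic features mid-contract, which is exactly the visibility gap the article describes.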

The shift to agentic AI

In software development, NOV already benefits from AI-assisted coding, where AI works alongside developers who accept about 32% of AI suggestions. “We’re now beginning to explore the next evolution of full agentic coding,” says Philips, adding that this next stage truly supercharges teams, enabling them to move faster and better meet customer demand for innovation.

However, this efficiency feeds the dilemma of a widening talent gap. The challenge moving forward is that if all the low-level, entry-level tasks can be automated, what's the best way to develop skilled workers? "I don't know how we'll adapt to it, but we'll figure it out," he says.

Safety first

In the oil field, some processes are too critical to be left entirely to a black-box algorithm. Philips is adamant that for safety issues, AI remains an advisor, not a decider. NOV uses AI-powered vision to monitor red zones, or dangerous areas on a drilling rig. If the AI detects a person in a restricted area, it can trigger an emergency stop. However, for actual drilling operations, the final call remains with an onsite human operator. “You can’t have a hallucination,” he says. “You can’t say it’s right 90% of the time. It has to be all the time.”

NOV’s journey shows that transitioning to industrial-grade AI isn’t just about choosing the best model but building a framework of trust, transparency, and responsibility. By using Zscaler for governance and GitHub Advanced Security for code validation, NOV is moving toward a future where AI becomes more essential to the oil industry.

“Development teams should produce twice the output with half the people in half the time,” he says. “The only remaining question is how do we train the next generation of developer experts to control the machines that do the work.”

Your AI agent is ready to go. Is your infrastructure?

29 April 2026, 07:00

IDC estimates there were over 28 million AI agents deployed by the end of last year, and predicts there’ll be over 1 billion actively deployed by 2029, executing 217 billion actions per day.

It’s easy to build an AI agent POC, says Venkat Achanta, chief technology, data, and analytics officer at TransUnion, a global credit reporting company with $4.6 billion in revenues. But governing, securing, and scaling it are a whole other challenge, especially for companies in highly regulated industries such as financial services and healthcare.

To address the problem, TransUnion spent the last three years building its agentic AI platform, OneTru. The goal was to make something as reliable and deterministic as the old, scripted, expert-style systems but as flexible as gen AI, and as easy to interact with as a chatbot.

The trick, however, was to combine the best of both worlds by using old-school systems for core processes where explainability and reliability are key, and layering in gen AI functionality in limited ways for the tasks it was uniquely suited for. And since the infrastructure to do this wasn’t available, TransUnion built its own, allocating $145 million to the project.

That was a big investment in an unproven technology, but it’s already led to $200 million in cost savings. More than that, once the platform was built, TransUnion used it to build customer-facing solutions.

In March this year, for example, TransUnion released its AI Analytics Orchestrator Agent, built using the OneTru platform and powered by Google’s Gemini models. The agent is already being used by TransUnion internally to improve analytics, and can also be used by customers to run sophisticated data analysis without the need for data scientists.

Many clients use TransUnion’s data but don’t use other solutions and platforms, Achanta says. The new orchestrator agent has the potential to help customers get more value out of the data, and unlock new revenue streams for the company.

And more agents are in the works, Achanta says. The key to making them work is the orchestration, governance, and security layers. Just making an agent do something is very easy for anyone, he says, and can take just a few days. The company can also create agents quickly. “But I have the foundation and guardrails, and the agent sitting on my platform uses all of them,” he says. “That’s what gives us power.”

The secret to making AI agents behave is to separate the layers of the task and assign each layer to a different system, each one operating under a set of constraints. This approach limits the damage any particular agent can do, creates a system of checks and balances, and restricts the riskiest activities to a pre-gen AI technology.

For example, at TransUnion, the core decision-making is performed by an updated version of an expert system. It operates under a set of well-defined, auditable rules and works predictably, cost-effectively, and at low latency. When it encounters a situation it hasn’t seen before, an LLM is used to analyze the problem, a different agent might then turn it into a new rule, and then a human might be called in to review the results before the new rule is added to the expert system. There are different agents that understand the semantic layer, interact with humans, and perform other tasks.

“With the neural reasoning layer — the LLM — we put humans in the loop,” he says. “When it’s a symbolic reasoning layer, which is logic and machine-learning-driven, we let it be automated.”

So when each agent operates within very narrow constraints, on just the limited data it needs for that one task, and is limited in what it can do, the entire system becomes much more governable and reliable.
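A minimal sketch of this layering, with all names and logic invented for illustration since TransUnion hasn't published its implementation: a deterministic rule layer decides known cases automatically, while unseen cases escalate to an LLM whose proposed rule waits for human approval before joining the rule set:

```python
# Illustrative sketch of symbolic-first decisioning with an LLM escalation
# path and a human-in-the-loop gate. Not TransUnion's actual code.
from typing import Callable, Optional

Rule = Callable[[dict], Optional[str]]  # returns a decision, or None if no match

class LayeredDecisionSystem:
    def __init__(self) -> None:
        self.rules: dict[str, Rule] = {}       # auditable symbolic layer
        self.pending_review: list[str] = []    # LLM proposals awaiting a human

    def decide(self, case: dict) -> str:
        # Symbolic layer: deterministic, low-latency, fully automated.
        for rule in self.rules.values():
            decision = rule(case)
            if decision is not None:
                return decision
        # Neural layer: unseen case, so draft a rule and hold for review.
        self.pending_review.append(self.ask_llm(case))
        return "escalated-for-review"

    def ask_llm(self, case: dict) -> str:
        # Stand-in for an LLM call that drafts a candidate rule.
        return f"proposed rule for case {sorted(case)}"

    def approve(self, name: str, rule: Rule) -> None:
        """A human promotes a reviewed proposal into the rule set."""
        self.rules[name] = rule
```

The design point is the asymmetry: the symbolic layer acts without supervision, while anything the LLM produces only takes effect after `approve` is called by a person.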

It's like the difference between an assembly line, where multiple workers each do a single, distinct task, and a workshop where a single artisan does everything. The assembly line does the work faster and more reliably, but today many enterprises deploy their AI agents as if they were craftsmen. The latter approach can produce creative, unique results, but that isn't always what a company needs.

Nicholas Mattei, chair of the ACM special interest group on AI and professor at Tulane University, suggests that companies focus on building in extra security at points where different parts of the agentic system connect.

“Make sure you have security at the seams,” he says. For example, if an agent sends requests to an email service, set up a checkpoint between the two. “Around the gaps between the unreliable agents and where the traditional software lives, that’s where you want to focus your security processes,” he says.
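Mattei's "security at the seams" idea can be sketched as a checkpoint wrapping the boundary between an agent and a traditional service, here email. The policy rules, names, and domain below are hypothetical:

```python
# Sketch of a seam checkpoint: agent-produced actions are validated
# before they cross into the traditional email service. Illustrative only.
class SeamCheckpointError(Exception):
    """Raised when an agent action fails the boundary policy."""

def checkpoint(action: dict) -> dict:
    """Validate an action at the agent/email-service seam."""
    if action.get("verb") not in {"send", "draft"}:
        raise SeamCheckpointError(f"verb not allowed: {action.get('verb')!r}")
    if action.get("verb") == "send" and not action.get("recipient", "").endswith("@example.com"):
        raise SeamCheckpointError("external recipients require approval")
    return action

def send_email(action: dict) -> str:
    checkpoint(action)  # the seam: unreliable agent on one side, real service on the other
    return f"sent to {action['recipient']}"
```

The point is that the check lives at the boundary, so the policy holds no matter which agent, or how confused an agent, produced the action.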

Building a security foundation for agentic AI

In a Jitterbit survey of 1,500 IT leaders released in March, AI accountability — security, auditability, traceability, and guardrails — is the biggest factor when it comes to the final AI purchase decision, ahead of speed of implementation, vendor reputation, and even TCO. Security, governance, and data privacy risks were also top issues preventing AI initiatives from moving to production, ahead of costs and integration challenges. And they’re right to be worried.

Earlier this year, researchers at cybersecurity firm CodeWall were able to breach McKinsey’s new AI platform, Lilli. Using an AI tool of their own, the researchers said they could access 47 million chat messages, 728,000 files, 384,000 AI assistants, 94,000 workspaces, 217,000 agent messages, nearly 4 million RAG document chunks, and 95 system prompts and AI model configurations.

“This is decades of proprietary McKinsey research, frameworks, and methodologies — the firm’s intellectual crown jewels sitting in a database anyone could read,” the researchers wrote.

The reason? Out of over 200 publicly exposed API endpoints, 22 required no authentication. It took just two hours for the researchers to get full read and write access to Lilli’s entire production database. McKinsey responded quickly to the alert, patched the unauthenticated endpoints, and took other security measures.

“Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party,” the firm said in a statement.

IDC says the incident underscores just how dangerous the breach of an AI system can be to an enterprise.

“Most companies are still thinking about AI risk in yesterday’s terms: data leakage, bad outputs, and brand reputation damage,” says Alessandro Perilli, IDC’s VP for AI research. “Those are serious issues, but the bigger risk becomes delegating authority to AI systems.”

By getting access to an agentic AI platform, an attacker can not only see something they're not supposed to, but also covertly change how the company acts. And securing enterprise-scale agentic AI systems like Lilli is only half the challenge. According to Gartner, 69% of organizations suspect employees use prohibited AI tools, and 40% will experience security or compliance incidents by 2030 as a result.

But available discovery tools aren’t fully ready to find AI agents, Gartner says.

“If I asked you how many agents run in your enterprise right now, where are you going to go look it up?” asks Swaminathan Chandrasekaran, global head of AI and data labs at KPMG, which now has several thousand AI agents in production. “Have they all been onboarded and have identities? Have they gone through a proper authentication process and who’s in charge of them? That piece of infrastructure doesn’t exist.”

Tools are just starting to emerge, however, and some companies are creating DIY solutions in the meantime. "That's what's going to give CIOs peace of mind," he says.

We're already seeing public examples of individual employees deploying powerful agentic AI with negative consequences. Summer Yue, Meta's alignment director, recently decided to use OpenClaw, a viral open-source agentic AI tool, to help handle her inbox. After it worked in a test inbox, she deployed it for real.

“Nothing humbles you like telling your OpenClaw to confirm before acting and watching it speedrun deleting your inbox,” she wrote on X. “I couldn’t stop it from my phone. I had to run to my Mac mini like I was defusing a bomb.”

In the past, an employee might upload sensitive information to a chatbot or ask it to write a report that they’d then copy and paste, and pass off as their own. As these chatbots evolve into full-on agentic systems, the agents now have the ability to do anything a user has privileges to do, including accessing corporate systems.

To manage this new security risk, companies will need to move past role- and identity-based controls to intent-based ones, says Rakesh Malhotra, principal in digital and emerging technologies at EY.

It's not enough to ask whether an agent has permission to access a system or change a record, he says. Companies also have to be able to ask why the change is being made. That's a big challenge right now.

“The observability stacks don’t capture the intent of why the agent did something,” he says. “And that’s really important to understand. Trust is based on intent, and there’s no way for any of these systems to capture intent.”

If a human employee tried to refactor the entire code base, they'd be asked to provide a good reason for doing so. "And if you're refactoring without any specific reason, maybe you shouldn't do it," Malhotra says. "With people, there are ways for this to be adjudicated. I don't know how to do this with agents."
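One hedged sketch of what intent capture might look like, assuming each agent action carries a declared intent that is checked and logged. The schema and policy are invented for illustration, not an EY or vendor design:

```python
# Illustrative intent-based control: actions without an approved declared
# intent are denied, and every decision lands in an audit log so the
# "why" survives alongside the "what". Hypothetical shape only.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    operation: str   # e.g. "update_record"
    target: str      # e.g. "customer/123"
    intent: str      # the declared "why", supplied up front

ALLOWED_INTENTS = {"ticket-remediation", "scheduled-maintenance"}
audit_log: list[tuple[str, AgentAction]] = []

def execute(action: AgentAction) -> bool:
    """Allow the action only if its declared intent is on the approved list."""
    verdict = "allowed" if action.intent in ALLOWED_INTENTS else "denied"
    audit_log.append((verdict, action))
    return verdict == "allowed"
```

This doesn't solve Malhotra's deeper problem, since a declared intent can be wrong or gamed, but it at least makes intent an observable, auditable field rather than something reconstructed after the fact.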

Building a semantic data foundation for agentic AI

TransUnion's Achanta repeatedly mentioned the semantic foundation of the company's OneTru platform. A semantic layer helps systems understand not just what the data is, but what it means and how it relates to other data. Gartner says developing a semantic layer is now a must-do for companies deploying AI.

“It’s the only way to improve accuracy, manage costs, substantially cut AI debt, align multi-agent systems, and stop costly inconsistencies before they spread,” the firm says.

By 2030, universal semantic layers will be treated as critical infrastructure, alongside data platforms and cybersecurity, Gartner predicts. And agents need context to be able to do anything meaningful with data, says KPMG’s Chandrasekaran. That’s where a company’s knowledge is contained.

“That’s your new IP for the enterprise,” he says. “Context is the new moat.”

For John Arsneault, CIO at Goulston & Storrs, creating a solid data foundation is also a way to avoid vendor lock-in.

“If you’re buying things and moving your data into them to create workflow automation or agentic work assistants, you’ll have a hard time getting out of it,” he says. “But if you take a data-centric approach, you can at least move from one to the other if there’s a shift in the marketplace.”

The law firm has migrated its client-oriented work products into NetDocuments, a document management system specifically focused on the legal industry. And for the rest of the data the company collects, it goes into Entegrata’s legal data lakehouse.

“Our goal is to have all our other applications eventually point at that data lake,” he says. “Then we’ll have these two environments where all the firm’s data exists, which will allow us to put any AI tool we use on top.”

It’ll also make the data flows easier to manage, he adds, and will enable the firm to adapt quickly to whatever AI technology comes next. “Whether gen AI, agentic, or Anthropic stuff, with the Cowork legal plugin, it’s very difficult to keep up with,” he says. “And it changes every six months.”

Agentic orchestration

The last part of the agentic infrastructure puzzle, after getting security guardrails in place and creating a usable data layer, is orchestration. Agentic AI systems require agents to talk to each other and to human users, and to interact with data sources and tools. It's a complicated challenge, and the technology is still very much in its infancy, though moving quickly. The Model Context Protocol (MCP) is one key piece of solving the orchestration puzzle, and AI vendors have been remarkably willing to cooperate on it.

“When social networks were born, and Facebook and Twitter were discussing a standard protocol for interacting, nobody wanted to adopt their competitors’ protocol,” says Agustin Huerta, SVP of digital innovation and VP of technology at Globant, a digital transformation company. “Now everyone is going through MCP and maturing it as a standard protocol.”

But that’s not to say agentic integration has been solved. According to a Docker survey of more than 800 IT decision makers and developers, the operational complexity of orchestrating multiple components is the biggest challenge when it comes to building agents.

In particular, 37% of respondents say orchestration frameworks are too brittle or immature for production use, and 30% report testing and visibility gaps in complex orchestrations.

In addition, while 85% of teams are familiar with MCP, most say there are significant security, configuration, and manageability issues that prevent deployment in production. And there are other integration issues enterprises have to deal with.

“One problem yet to be solved is how to get a proper dashboard to control all these agents, to know exactly what’s going on with each of them,” says Huerta. “One dashboard will let you monitor agents built with OpenAI, and one is for agents that live on Salesforce, but none can expose telemetry in a central dashboard for control, auditing, and logging.”

For companies just starting to deploy agents, or who are sticking to a single platform, this isn’t yet an issue, he adds, but as they leverage a larger network of agents, they’ll start to experience the challenges. Globant itself is building its own internal dashboard for agentic AI, for instance.

And at Brownstein Hyatt Farber Schreck, a 50-year-old law firm with about 700 employees and clients around the US, there are several areas where AI is being deployed, including a proposal generator system.

Normally, it can take several people days to review a client’s request for proposal, go through hand-written notes or meeting transcripts, and pull together other relevant materials, says Andrew Johnson, the firm’s CIO.

“We can feed all that information into a computer and extract key criteria to produce a quality first draft in minutes,” he says.

Multiple agents are required for different parts of the process — one to extract success criteria or staffing requirements, one to look for precedents and lessons learned, and others for pricing and the brand standards. “Each of those agents is autonomous and needs to be orchestrated so the outputs of each are fed into the next step,” Johnson says. For the most part, that means a RAG system, since most of the legacy platforms the firm uses have yet to incorporate an MCP layer.
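The pipeline Johnson describes can be sketched as agents run in sequence, each consuming the previous one's output. The agent behaviors below are stubs invented for illustration, standing in for the LLM- and RAG-backed steps a real system would use:

```python
# Illustrative sequential orchestration: each agent is a function from
# pipeline state to pipeline state, and the orchestrator threads the
# state through them in order. Agent internals are stubbed.
from typing import Callable

Agent = Callable[[dict], dict]

def extract_criteria(rfp: dict) -> dict:
    # Stub for an agent that pulls success criteria / staffing needs from the RFP.
    return {**rfp, "criteria": ["staffing", "timeline"]}

def find_precedents(state: dict) -> dict:
    # Stub for an agent that retrieves precedents and lessons learned.
    return {**state, "precedents": [f"past work on {c}" for c in state["criteria"]]}

def draft_proposal(state: dict) -> dict:
    # Stub for the drafting agent that assembles the first draft.
    return {**state, "draft": f"Draft covering {len(state['precedents'])} precedents"}

def run_pipeline(rfp: dict, agents: list[Agent]) -> dict:
    state = rfp
    for agent in agents:      # each agent's output feeds the next step
        state = agent(state)
    return state
```

Keeping each agent a pure state-to-state function is one way to make the orchestration testable even when the agents themselves are non-deterministic models.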

Depending on the task, individual agents may be powered by different models, which is another layer of orchestration that needs to be managed.

Then there’s cost monitoring. If an AI agent or group of agents gets into an infinite feedback loop, the inference costs can quickly rise.

“We’re aware of the concern, though we have yet to see it manifest,” says Johnson. “So we have monitoring in place. If we exceed thresholds, we react to it.”
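Threshold monitoring of the kind Johnson mentions can be sketched as a per-agent budget check. The interface and figures below are illustrative assumptions, not the firm's actual tooling:

```python
# Sketch of inference-cost monitoring: track per-agent spend and refuse
# further calls once an agent exceeds its budget, which also bounds the
# damage of a runaway agent feedback loop. Illustrative only.
class CostMonitor:
    def __init__(self, budget_usd: float) -> None:
        self.budget = budget_usd
        self.spend: dict[str, float] = {}

    def record(self, agent_id: str, cost_usd: float) -> bool:
        """Record one call's cost; return False once the agent is over budget."""
        self.spend[agent_id] = self.spend.get(agent_id, 0.0) + cost_usd
        return self.spend[agent_id] <= self.budget
```

The caller treats a `False` return as the signal to halt the agent and alert a human, turning an infinite loop into a bounded, visible incident.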

Regardless of strategies or measures to absorb setbacks, everything having to do with AI is changing faster than anything else companies have seen.

“I’ve been in technology for 25 years and I’ve never seen anything like this,” says EY’s Malhotra. “The fastest growing companies in the history of companies have all been created in the last three to four years. The growth in adoption is just unprecedented. And I talk to clients all the time implementing technologies that were highly relevant nine or 10 months ago, and everyone’s moved on.”

Column | The real variable in AI ROI isn't technology, it's organizational design

23 April 2026, 23:15

Organizations that invest the most in AI often create the least value. This paradox has fueled debate over whether AI actually creates value at all. But that isn't the essential question. At the task level, the evidence is already in: AI has repeatedly been shown to deliver measurable productivity gains across coding, writing, analysis, customer support, and more.

The problem is that these gains don't carry through to company-wide financial results. MIT research found that 95% of AI pilot projects had no meaningful impact on the P&L in their early stages. McKinsey likewise found that only high performers, roughly 6% of all responding companies, generated more than 5% of EBIT through AI. Boston Consulting Group (BCG) concluded that about 60% of AI transformation projects delivered limited value or failed to create real value.

The common thread is clear: results show up in the pilot stage, but the value fails to carry over when scaled across the organization.

Meanwhile, the AI adoption gap between large enterprises and small businesses is closing fast. US Small Business Administration (SBA) data shows adoption rising steadily on both sides between November 2023 and August 2025: from under 6% to over 12% at large companies, and from about 4% to over 8% at small businesses. Large companies still lead, but as small businesses accelerate, the gap keeps narrowing.

AI works at the edge, stalls at the core

Despite rapidly rising adoption at large enterprises, AI struggles to take hold in real operating environments. Decades of accumulated systems, regulatory regimes, multilayered governance, and complex cross-departmental dependencies all intertwine. After adoption, AI must clear security reviews, procurement, legal review, architecture committee scrutiny, legacy integration constraints, and more. Each gate exists for a reason, but combined they slow the pace of change and dilute its impact.

An AI pilot can perform well within a single department. But the moment it's scaled across the organization, it collides with the existing operating model. Where data ownership, accountability, and decision-making authority are unclear, the cost of scaling grows even larger. AI that worked in a confined setting stalls as it expands, and the expected value evaporates along the way.

Small businesses have difficulties of their own: cash-flow constraints, limited staff, and customer risk among them. But they have relatively few veto points in decision-making. A founder rarely convenes a cross-departmental committee, for instance, to experiment with AI-driven quote automation or automated customer follow-up.

Decisions are made quickly, and feedback cycles are short. Because each employee accounts for a large share of total productivity, the effects of change show up immediately. When a five-person company automates 20% of its administrative work, the result is instantly measurable.

Their structural advantage is simplicity: fewer legacy systems, shorter decision paths, and relatively little multilayered governance. They can adopt SaaS-based solutions quickly and integrate them without much friction. None of this guarantees better decisions, but it's a clear advantage in speed of execution.

The essence of AI ROI is organizational readiness

Large enterprises, by contrast, carry heavy system-integration requirements, elaborate governance, and distributed accountability. These reduce operational risk, but they slow the conversion of new technical capabilities into actual financial results. That's why AI pilots can prove ample technical potential yet fail to lift the company's overall economic performance.

This confronts executives with a fundamental choice, and in many cases they try to avoid it. Framing AI ROI as a purely technical problem lets them delegate it to IT, the data team, or an innovation unit. Treating it as an organizational design problem is a different matter: it can't be solved without company-wide change.

AI tends to amplify structural problems rather than resolve them. Unclear decision rights become more glaring, weak data governance compounds risk, and misaligned incentive structures deteriorate even faster. This is why productivity gains at the task level don't automatically translate into improved profitability for the whole company.

None of this is new. A similar pattern played out in the early days of the internet. The technology itself worked fine, but the companies that redesigned their organizations outperformed those that simply bolted it onto existing structures.

AI is following the same pattern today. What limits ROI isn't model performance but the organization's readiness to absorb and scale change.

So the question isn't "why doesn't AI generate ROI?" ROI is created by the organization, not by AI. The real question is whether the organization is ready to redesign how it works, how it makes decisions, how it governs, and how it measures results.

Without that change, AI will most likely remain a tool that boosts productivity at the margins. If the organization can absorb it, AI becomes a core engine of sustained economic value.
dl-ciokorea@foundryco.com

LIV Golf engages fans with agentic AI

23 April 2026, 07:00

When the LIV Golf professional men’s golf tour launched in 2022 as a rival to the PGA Tour, it set out to capture a younger, global fan base. The international league’s four-day, 72-hole format competitions, featuring both individual and team components, boast a festival-like atmosphere, with concerts and other events geared toward younger audiences.

“We definitely see interaction with younger fans, people who enjoy the cultural value around sport, as well as the event itself,” says Nick Connor, SVP of technology at LIV Golf.

The tour, financed by Saudi Arabia’s sovereign wealth fund PIF, is seeking to transform itself into an agentic enterprise as part of its efforts to further engage with those fans.

“Engagement really is the gateway to monetization,” Connor says. “The more fans we have using our products, the more opportunities we have to interact with them, create moments they treasure, and build that fandom and their affinity with us as a brand.”

A change in course

Monetizing that fan engagement has become even more critical since the Public Investment Fund announced earlier this month it would cut funding for the tour after the 2026 season, throwing the league’s future into doubt.

PIF, having invested more than $5.3 billion in LIV Golf over the past four years, has said it’s reducing its international investments and putting greater emphasis on promoting domestic sports.

Emergence of agents

Regardless of what lies ahead, however, the tour unveiled in February some fruits of its agentic AI efforts with new agents Fan Caddie (aka ‘Chip’) and Agent Caddie.

Chip is an agentic AI golf companion intended as a real-time concierge for golf fans. It’s a second-screen experience that can serve up stats and insights, or even help attendees navigate event logistics. Meanwhile, Agent Caddie is a broadcaster-facing agent available via Slack to provide the tour’s commentators with real-time shot intelligence and statistics to support their storytelling.

Chip’s features include:

  • Real-time tournament insights and replays. Fans can ask the agent to provide hole-by-hole scoring insights, player comparisons, and stat cards.
  • An interactive leaderboard that provides live data from the content team to help fans better understand what’s happening on a course.
  • Integration with LIV Golf’s online retail offering, Shop the Look, giving fans the ability to explore and purchase gear worn by their favorite players and teams.
  • On-course utility, offering ticket and hospitality information to fans attending LIV Golf events.
  • Customization of content according to fan tastes.

Agent Caddie relies on much of the same data as Fan Caddie but presents stats and insights in a way that’s more suitable to broadcasters.

“Fan Caddie has a bit more summarization and is more graphical,” Connor explains. “Agent Caddie is deployed in Slack, and it’s got a narrower scope in the sense that it’s more about stats retrieval for storytelling on broadcast.”

Behind the scenes

LIV Golf has worked closely with Salesforce to build out its agentic capabilities. The agents are powered by Agentforce 360, though Connor explains that much of the early work was around Data Cloud as the team sought to build out its data architecture, clean up its data, and solve for issues like data ingestion.

“In the context of Fan Caddie, we have a big emphasis on statistics and ensuring people can ask questions about the players,” Connor says. “Some of that did exist in terms of data ingestion, but we had to do quite a bit of work to tidy that data up and bring it into a structure that was actionable for reporting and agentic AI.” The important first step, he adds, is to get the data structured right and ensure the data quality is good.

LIV Golf will spend the coming year further iterating on Fan Caddie and Agent Caddie, but Connor says it’ll build out new agents to support other parts of the organization, including marketing, retail, legal, and finance. “We’re invested in doing this across our organization to improve our daily workflows, and to make our organization as effective as it can be,” he says.

Ways CIOs can prove to boards that AI projects will deliver

22 April 2026, 07:00

There's been a wake-up call for CIOs. All the talk about perceived productivity boosts that has previously dominated conversations about AI has been replaced with a demand for measurable value from investments in emerging tech.

With MIT reporting project failure rates as high as 95%, executive boards are starting to question when AI will pay dividends. PwC's Global CEO Survey shows that more than half of companies have seen neither higher revenues nor lower costs from AI, and only one in eight has achieved positive outcomes.

While Gartner predicts significant growth in AI spending this year, John-David Lovelock, distinguished VP analyst at the research firm, says the lack of tangible returns means digital leaders are changing tack. Rather than hoping their AI explorations will produce returns, CIOs are switching to more targeted initiatives.

“The projects growing quickly are the ones doing business, and those initiatives include AI,” he says. “CIOs are starting to de-emphasize AI and re-emphasize business. These projects are about AI enhancing existing work and moving away from moonshot transformational projects.”

Lenovo’s CIO Playbook for 2026, produced with tech analyst IDC, also suggests enterprises will get serious about AI deployments this year, with explorations replaced by production-level services that drive business transformation. With boards exerting pressure for measurable returns, Ewa Zborowska, research director at IDC, says more digital leaders want to use AI to enhance, innovate, and reinvent their organizations.

“CIOs aren’t just considering AI out of curiosity, they want to see what they can get out of it to grow the business,” she says. “AI adoption is much more about doing new things or taking a fresh approach to creating value rather than becoming more efficient at cost-cutting.”

Such is the clamor for value that Richard Corbridge, CIO at property specialist Segro, suggests that returns from AI are a main digital leadership priority: “If you discover, for example, that everyone in the organization used Copilot 10 times today, that might mean they’ve been more efficient,” he says. “But what have they actually done with the time they saved? How has saving time created value?”

CIOs will grapple with these questions during the next 12 months. With CEOs and boards becoming impatient for returns, digital leaders are working more with their bosses to define value. Successful CIOs fine-tune their arguments to ensure their projects are backed, and then demonstrate the value of their AI initiatives to the board.

Defining a valuable AI project

What's clear is CIOs can't deliver outputs from AI projects without input from their enterprise peers. IDC's Zborowska says tighter cooperation across project ownership and KPIs ensures emerging technology investments are targeted at the right places.

This increased interaction between digital and business leaders also changes project aims. As stakeholders work closely together to generate value from AI, Zborowska expects executives to seek KPIs that stretch across operational concerns.

“I’d bet we see more non-financial aims over the next few years,” she says. “Executives will consider things such as are employees more engaged, has their work improved in any way, are AI implementations impacting customer experiences, and are internal decisions being made more efficiently.”

Martin Hardy, cyber portfolio and architecture director at the UK’s Royal Mail, agrees that defining valuable AI projects is all about finding the right focus. Effective deployments target processes in distinct areas, and business stakeholders must be part of the value-defining process.

“If we’re making decisions about legal documentation, AI is probably not there yet,” he says. “But if we can use AI to approve holidays, for instance, that might be something because if you have rules that say no more than two people off at a time, you could use AI to check about booking holidays without having to ask everyone in the office.”
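Hardy's holiday example is essentially a rules check an AI assistant could run before approving leave. A minimal sketch in Python, using a hypothetical team calendar and the "no more than two people off at a time" rule (all names and data are illustrative):

```python
from datetime import date, timedelta

# Hypothetical team calendar: employee -> list of (start, end) approved leave
approved_leave = {
    "alice": [(date(2026, 7, 1), date(2026, 7, 10))],
    "bob":   [(date(2026, 7, 5), date(2026, 7, 8))],
}

MAX_CONCURRENT_OFF = 2  # rule: no more than two people off at a time

def can_approve(employee, start, end):
    """Approve a request only if no single day would exceed the limit."""
    d = start
    while d <= end:
        off_today = sum(
            1
            for person, ranges in approved_leave.items()
            if person != employee and any(s <= d <= e for s, e in ranges)
        )
        if off_today + 1 > MAX_CONCURRENT_OFF:
            return False  # this day already has the maximum number off
        d += timedelta(days=1)
    return True
```

Here `can_approve("carol", date(2026, 7, 6), date(2026, 7, 7))` would be rejected, since both existing requests already cover those days, while a request for late July would go through.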

For CIOs seeking value-generating use cases, Gartner’s Lovelock suggests AI can deliver results in key business areas such as boosting revenue, supporting decision-making, engaging staff, and improving experiences. He says the right path to AI exploitation correlates with Gartner’s enterprise technology adoption profiles, which group companies into a range of categories.

“The folks who are furthest forward, what we call the agile leaders in technology, are much more likely to drive AI to change the business,” he says. “The laggards on the other side are more likely to take on the technology that’s given to them by incumbent software providers, and use it in a prescriptive manner.”

Fine-tuning the use case

The challenge now is for digital leaders to work with their business peers to determine a more refined approach to AI deployment. For some CIOs, the value of AI is clear but the potential risks must be considered.

Take Dan Keyworth, executive director of performance technology and systems at McLaren Racing, whose focus is operational stability and race-day reliability. While he says being aware of developments in generative and agentic AI is crucial, the priority is tried-and-tested technologies rather than innovations that put performance at risk.

“Formula One is grounded in traditional machine learning and simulation,” he says. “Developing models has been a big part of our performance journey, and since the engine already existed, gen AI is the turbo that’s bolted on with more investment in AI.”

For other digital leaders, like Barry Panayi, group chief data officer at insurance firm Howden, success depends on keeping the human in the loop. Yes, automation can improve customer service, but rather than replacing staff, he wants to use AI to ensure Howden’s professionals have the right insight when they interact with clients.

“There’s absolutely no desire to use data to drive productivity by automating what we do with our customers,” he says. “This is a business where people speak to people. Our brokers need information that can give them an edge, and prove to their clients they understand the risks and can give them the best deals.”

Nick Pearson, CIO at technology specialist Ricoh Europe, adds that the use case for AI at his firm is two-fold: boosting operational productivity and improving customer processes. So he’s established a tri-party AI council with the head of service operations and the commercial manager in Spain. This council explores opportunities to buy, build, and reuse emerging tech.

“We’ve got a strategy that looks at where AI matters, which means exploring the technology we already have to boost internal productivity,” he says. “We’ve got a lot of people who know how to code and build things in Copilot Studio and other platforms, so let’s use that to increase productivity.”

Showing returns to the board

For Gartner’s Lovelock, the key lesson for CIOs eager to generate value from AI is to work with their peers and set desired outcomes before investing. “Most people start with the idea that more is more, and if you do that, you won’t get to the idea of quality,” he says.

That sentiment resonates with Segro’s Corbridge, who encourages digital leaders to start conversations with other professionals by focusing on value. Ask people how investing in an AI implementation will create value for them personally, for the wider business, and the customers the organization serves.

He says CIOs shouldn’t try to prove that AI works, but rather concentrate on how emerging tech adds value. That definition is so critical to Segro’s way of working that the organization uses the phrase proof of value rather than proof of concept.

“Most things work, but they might be more expensive,” he says. “For example, you might be able to use AI to transform how the organization uses spreadsheets, but that project might cost you $300,000. And if you’re currently paying someone $40,000 to do that work, and they’re happy doing it, then you have to question the value.”
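Corbridge's spreadsheet example reduces to simple break-even arithmetic, which is worth making explicit:

```python
# Figures from Corbridge's example: a $300,000 AI project replacing work
# currently done by one person paid $40,000 a year.
project_cost = 300_000
annual_labor_cost = 40_000

# Years of avoided labor cost needed just to recoup the build, ignoring
# run costs, maintenance, and the value of redeploying the person's time.
breakeven_years = project_cost / annual_labor_cost  # 7.5 years
```

At seven and a half years to break even before any running costs are counted, the proof-of-value question largely answers itself.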

Lessons are being learned, says IDC's Zborowska, whose firm's research suggests that half of AI POCs now transition into production. While some people might think this success rate isn't impressive, the figure a year ago was just 10%. After several years of AI exploration, it appears CIOs and their businesses are now firmly focused on real returns.

“These numbers speak to the fact that companies are being more mature and mindful in how they allocate budgets,” she says. “They also support the main theme that we’re on a journey to transformation and a maturing market for AI adoption.”

5 lessons from Everest for high-risk AI projects

22 April 2026, 07:00

The recent regulations for climbing Mount Everest offer some surprising parallels, lessons, and best practices linking the physical risks of mountaineering to the governance risks of high-stakes AI.

The new and stringent regulations related to Everest center around mandatory use of local guides and prior experience, electronic tracking, strict health certifications, and waste management — a clear focus on experience, real-time observability, safety, and sustainability.

High-risk AI systems, defined as such by the EU AI Act based on their potential impact on health, safety, or fundamental rights, are classified this way if they either fall under EU product safety legislation or are used in sensitive areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, or justice.

So to help CIOs deal with high-risk AI implementations, here are five lessons from the top of the world.

Proof of acclimatization

In recent seasons, Everest experienced a surge in aspirational climbers who lacked basic high-altitude skills and equipment knowledge. Those factors, and refusals to turn around at hard time stops, resulted in several deaths.

So under the 2025/2026 Tourism Bill, climbers must now provide a verified certificate proving they’ve summited at least one peak above 7,000 meters in Nepal before they can apply for an Everest permit. Why 7,000? Because this altitude represents the transition from high to extreme altitude and presents a critical physiological and technical threshold.

For CIOs, this situation mirrors shadow AI and AI sprawl, where teams may lack the experience to mitigate the underlying risks of their implementations. To resolve it, it's important that teams working on high-risk AI projects have proven experience with at least moderate-risk implementations, and understand the governance requirements of the higher-risk projects they're about to tackle.

This experience rule should apply to the technologies involved as well: both the teams and the tech need to be fit for the task. For example, CIOs may decide to prohibit the deployment of autonomous systems in core financial or customer-facing workflows unless the underlying model and its orchestration layer have successfully passed a pilot with documented safety metrics. According to KPMG's Q1 2026 AI Pulse Survey, these types of restrictions are well underway, with 43% of organizations identifying high-risk use cases where autonomous agent decision-making isn't allowed.

Mandatory black box and tracking

On Everest, all climbers are now required to rent some kind of GPS tracking chip that’s sewn into their jackets to expedite search and rescue operations, if needed.

"On Everest, tracking isn't optional, it's survival," says Steven Pivnik, an entrepreneur and advisor who draws on an endurance mindset built through years of Ironman racing and mountaineering, including Mt. Everest. "In high-risk AI, if you can't see how decisions are made or trace outcomes, you don't have control; you have exposure."

In the AI world, this tracking requirement translates to real-time agentic observability. Every high-risk AI project should include a dedicated observability budget, typically 10 to 15% of total project cost. Teams should also implement trust verification frameworks that provide a real-time heartbeat of agent intent, ensuring that if an agent drifts into a non-compliant decision path, it's located and paused before it can execute.
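One way to picture the locate-and-pause guardrail described above is an action gate that logs every proposed agent action and blocks anything off the approved decision paths. This is a hypothetical sketch; the action names and audit-log shape are illustrative, not a real framework's API:

```python
# Hypothetical guardrail: record every action an agent proposes and pause
# the agent the moment it drifts off an approved decision path, before
# the action executes.
APPROVED_ACTIONS = {"retrieve_stats", "summarize", "draft_reply"}

def observe_and_gate(agent_id, proposed_action, audit_log):
    """Record intent; allow compliant actions, raise on anything else."""
    compliant = proposed_action in APPROVED_ACTIONS
    audit_log.append(
        {"agent": agent_id, "action": proposed_action, "allowed": compliant}
    )
    if not compliant:
        raise PermissionError(
            f"{agent_id} paused: '{proposed_action}' is not an approved path"
        )
    return True
```

Because the audit entry is written before the compliance decision is enforced, every attempt, allowed or blocked, leaves a trace for later review.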

Certified local guides — the Sherpa requirement

On Everest, solo climbing is now strictly prohibited. Every climber must be accompanied by at least one certified Nepali guide or high-altitude worker. This ensures local knowledge and safety are prioritized.

The business lesson is to move away from generalist AI teams and toward specialist, hybrid ones with the necessary technical, contextual, and compliance-related expertise. This includes team members with deep, industry-specific domain knowledge, dedicated compliance or ethics officers, cybersecurity specialists, and external partners as needed.

“Enterprises considering the implementation of complex AI projects should integrate cybersecurity early in their planning process,” says Jude Sunderbruch, MD at cybersecurity consulting firm OakTruss Group. “Some organizations have the necessary skills in house but in other cases, it’s advisable to leverage outside partners with relevant experience.”

The KPMG AI Pulse Survey also found that when it comes to managing agent risk in the next six to 12 months, 48% of organizations are looking to deploy AI agents developed by trusted tech providers versus going it alone.

Strict health certification

Climbers must submit a medical fitness certificate issued within 30 days of an expedition start date. And for those over 50, tests like an ECG and stress test may be required too.

In the AI world, there’s an expansive number of vendor and tool-specific certifications available to validate expertise. Organizations such as Thinkers360 offer holistic ones that cover an expert’s lifetime body of work in specific domains by examining their authored content and experience. In a world exploding with self-proclaimed AI experts, reviewing third-party credentials can be a useful way for CIOs and their teams to review vendor and practitioner capabilities.  

An additional way to conduct the medical check-up for your AI project is to run a formal impact assessment to identify potential health risks to the organization or the public before a single line of code is deployed. Having a pre-defined incident response and liability plan can also help establish the requisite financial and legal insurance for added protection.

Sustainability and waste management

Climbers are now mandated to use government-sanctioned biodegradable waste alleviation and gelling (WAG) bags to carry their waste down from higher camps to base camp for proper disposal.

In the AI world, this translates to a similar environmental focus as boards and executives increasingly turn their attention to the sustainability impact of AI data centers. With global data center investment projected to exceed $3 trillion over the next five years to meet AI-driven demand, some organizations are already reporting AI-related infrastructure costs and emissions doubling month-over-month as experimentation and pilots expand. 

To manage this aggregate energy consumption, CIOs need to work better with their sustainability teams to set goals for the environmental footprint of their sovereign data centers, as well as those of their partners. They can achieve this by looking for technologies designed to address this challenge at the architectural level.

By paying attention to lessons learned from Everest, and new regulations focused on quality over quantity, you’ll be in a stronger position to mitigate risk in your next high-stakes AI project.  

Increased AI expectations without guidance leads to employee burnout

21 April 2026, 07:00

Burnout in the tech industry has nearly doubled in the past year, with 46% of workers saying they feel burned out and almost 25% saying they're very burned out, according to recent data from Dice. Alongside that uptick, daily AI use has quadrupled, layoffs have impacted nearly two-thirds of the workforce, and overall confidence in the long-term future of tech dropped from 80% to 60%.

The tech employees most likely to experience burnout are millennials, those with 10 to 19 years of experience, and those at small companies with fewer than 250 employees who are already worried about layoffs.

These growing frustrations arrive on the heels of several years of ups and downs in the industry, so it’s critical that employers demonstrate stability for employees. That means emphasizing AI governance and transparency, financial health, clear policies, and transparency from leadership acknowledging market strains, according to Dice.

“You can identify AI burnout the same way as failed AI value, by looking at rework and outcomes,” says Laura Stash, EVP at iTech AG. “If error rates are rising, review cycles are increasing, or employees are spending more time validating outputs, that’s a sign AI is creating more work.”

Where AI-induced burnout crops up

Burnout surrounding AI is typically tied to friction rather than traditional overwork, as well as usage patterns, says Paul Farnsworth, president of Dice. Daily AI users are more likely to express higher levels of burnout, with over half of AI users reporting burnout compared to only a third of those who never use AI, according to Dice.

“Increased exposure to AI without the right support can amplify rather than reduce workplace stress,” says Farnsworth. “In an AI setting, burnout tends to appear as increased rework, lower confidence in outputs, and frustration tied to unclear expectations or lack of training. If employees spend more time correcting or validating work than benefiting from efficiency gains, that’s usually the earliest and clearest signal.”
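The early-warning signal Farnsworth describes, rising error rates and growing review time, can be expressed as a simple trend check. A minimal, hypothetical sketch (the function name and threshold are illustrative, not from any survey methodology):

```python
# Hypothetical early-warning check: if error rates or review time are
# trending up, AI may be creating rework instead of saving it.
def rework_warning(weekly_error_rates, weekly_review_hours, threshold=0.10):
    """Flag growth of more than `threshold` (default 10%) from the first
    to the last observed week in either series."""
    def growth(series):
        return (series[-1] - series[0]) / series[0]
    return (growth(weekly_error_rates) > threshold
            or growth(weekly_review_hours) > threshold)
```

A team whose error rate crept from 5% to 7% over three weeks would trip this check even if total hours worked never changed, which is exactly the point: AI burnout shows up as rework, not overtime.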

AI also contributes to more subtle forms of burnout tied to its constant change and uncertainty. This creates a new type of fatigue as employees switch between multiple tools, feel pressure to keep up with new AI capabilities, and need to recheck outputs.

“These challenges are compounded in environments where expectations are unclear or evolving quickly,” adds Farnsworth. “Over time, that combination can lead to disengagement if employees feel the pace is unsustainable.”

Stash agrees that a lot of AI burnout starts to show up where there isn't clear guidance on how to use AI tools. Employees end up switching between different tools or reusing outputs across systems, repeating work unnecessarily and potentially losing important context between applications, she says.

Companies should instead focus on embedding AI directly into the day-to-day tools, services, and software employees already use. That way, it becomes part of the workflow instead of another tool that requires constant re-prompting and context switching.

“The goal shouldn’t be to give employees more AI tools, but simplify the experience,” says Stash. “Fewer tools, clearer use cases, and AI embedded into existing workflows is what reduces friction and prevents burnout.”

Increased expectations ramp up burnout

A report from the Upwork Institute found that around 71% of full-time employees say they are burned out and 65% report struggling with employer demands on their productivity. And executives seem aware of this shift, with 81% of C-suite leaders saying they acknowledge they’ve increased their demands on employees over the past year, and 96% saying they expect AI tools will boost productivity in the organization.

However, nearly half of all employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say AI tools have decreased their productivity and added to their workload.

“A common issue is that AI is introduced faster than it’s operationalized,” says Farnsworth. “When employees are expected to navigate multiple tools without clear guidance, it adds complexity.”

Respondents in the Upwork survey say they now spend more time reviewing and moderating AI-generated content (39%), investing time into learning new AI tools (23%), and are still being asked to do more work than before (21%). Overall, 40% say they feel their company is asking too much of them when it comes to AI.

Farnsworth suggests that leaders focus on narrowing toolsets, defining specific use cases, and providing role-based training to help reduce that burden, as well as emphasizing and setting the expectation that AI is meant to improve how work gets done, not simply increase the volume or pace of output.

Expectations vs reality for AI productivity

Executives express high confidence around employee skills, with 37% of C-suite leaders at companies that use AI saying their workforce is highly skilled and comfortable with AI tools. But this perception doesn’t match the 17% of workers who say they feel skilled and comfortable using AI tools.

Additionally, 38% of employees say they feel overwhelmed about using AI at work and that it’s adding to their workload, suggesting too many leaders are moving forward implementing AI without realistic expectations of what workers can do, especially without proper training and upskilling.

And while 96% of C-suite executives say they expect AI tools to boost productivity, only 26% say they have proper AI training programs in place, and only 13% say they have a well-implemented AI strategy, according to Upwork.

Data from Upwork also reveals further imbalances between executive perception and employee experience: 69% of C-suite leaders admit they're aware of the current struggles employees face regarding productivity demands, and 84% are adamant their organizations value employee well-being over productivity. But only 60% of full-time employees say their employer prioritizes well-being, despite mostly agreeing that their employers provide flexibility and greater clarity on strategic goals. In addition, the report points out that employees who perceive their company to value productivity over well-being report higher rates of feeling overwhelmed by their workload.

AI burnout can quickly lead to disengagement and even trigger an exodus of talent. So leaders need to take stock of AI strategies and ensure they align with realistic training and upskilling opportunities for employees. Expectations around AI should be communicated to employees clearly and promptly, without leaving room for doubt or misinterpretation.

“This kind of change management is not new, and we should use tools and techniques that have helped before to help mitigate burnout,” says Farnsworth. “Creating cross-functional working teams, highlighting best practices, reducing redundancy in tools, and understanding the goals of an organization and then applying tools on top are all ways to help tech professionals who struggle with AI burnout.”

AI doesn’t create ROI. Organizations do.

20 April 2026, 07:00

Organizations that invest the most in AI often capture the least value from it. That paradox is driving a growing debate about whether AI delivers value. But that’s the wrong debate. At the task level, the evidence is clear with studies consistently showing measurable productivity gains in coding, writing, analysis, and customer support.

MIT researchers have found that 95% of AI pilots fail to generate measurable P&L impact at the pilot stage. McKinsey also reports that only the high performers, about 6% of respondents, attribute 5% or more of EBIT to AI. And BCG estimates that roughly 60% of AI transformation efforts deliver limited or no material value. The pattern is consistent: pilots succeed locally, but value rarely scales systemically.

Meanwhile, the adoption gap between large enterprises and SMBs has narrowed sharply. US Small Business Administration data shows that between November 2023 and August 2025, AI adoption rose steadily across both, with larger firms increasing from under 6% to over 12% and smaller ones from about 4% to over 8%. Larger firms still lead, but smaller ones are closing the gap faster.

AI works at the edge, but struggles at the core

Despite rising adoption rates among large enterprises, AI doesn't simply deploy when it enters these environments, with their decades of accumulated systems, compliance layers, governance checkpoints, and cross-functional dependencies. Once in, AI must negotiate with security reviews, procurement cycles, legal assessments, architecture boards, and legacy integration constraints. And while each layer exists for a reason, together they slow adaptation and dilute impact.

Inside a function, an AI pilot may show promise, but when it attempts to scale, it encounters the operating model. Unclear data ownership, accountability, and decision rights further increase scaling costs. So what worked in a contained environment stalls in aggregation, and the value disappears at scale.

SMBs have their own challenges. They face cash flow constraints, limited staff, and customer risk, but fewer veto points. Furthermore, a founder doesn’t convene a cross-functional steering committee to experiment with AI-assisted quoting or automated follow-ups. Decisions move faster and feedback loops are shorter. Impact is visible quickly because each employee represents a meaningful percentage of total capacity. When a five-person firm automates 20% of its administrative workload, the effect is immediate and measurable.

Simplicity is their structural advantage: with fewer legacy systems, shorter decision paths, and less layered governance, they can adopt SaaS solutions quickly and integrate them with minimal friction. While this doesn't guarantee better decisions, it increases speed.

The bigger picture

On the flip side, large enterprises have deep integration requirements, formalized governance, and distributed accountability, which reduce operational risk but also slow the conversion of new capabilities into financial outcomes. AI pilots can demonstrate technical feasibility but still fail to move the needle on enterprise economics.

Leadership teams, therefore, face a design choice they often prefer to avoid. As long as AI ROI is framed as a tech problem, it can be delegated to IT, data teams, or innovation labs. But an organizational design problem can't. AI, after all, amplifies structural friction rather than eliminating it. If decision rights are unclear, AI exposes it. If data governance is weak, AI magnifies it. If incentives are misaligned, AI accelerates the misalignment. Productivity gains at the task level don't automatically translate into margin expansion at the enterprise level.

This isn’t new. Early internet investments followed a similar pattern where the technology functioned, but the internet rewarded companies that reorganized around it, not those that layered it on top of existing structures.

The evidence today suggests a similar pattern. AI ROI isn't constrained by model capability but by organizational readiness to absorb and scale change. So the question shouldn't be where the AI ROI is, since organizations create ROI, not AI. The real question is whether we can redesign how we work, decide, govern, and measure performance to capture it. Without that transformation, AI remains a productivity tool at the margins. With it, though, AI becomes a source of durable economic return.

AI isn't the only driver of digital transformation in key sectors: taking the pulse of 5G, edge, and cloud

17 April 2026, 10:55

Yes, artificial intelligence (AI) is here, but everything discussed yesterday about its development at CIO ForwardTech & ThreatScape Spain, held in Madrid, would not be possible without infrastructure such as 5G, communications, edge computing, and cloud models. That was made clear by Israel Devesa, chief digital officer and technologist at Grupo Aldesa; Carlos Garriga, CIO of IE Business School; and Rubén Andrés Priego, managing director of technology, operations, and innovation at Singular Bank, in a panel moderated by Esther Macías, editorial director of CIO and COMPUTERWORLD in Spain. Three very different companies, but ones in which those technologies play a fundamental role.

Israel Devesa began his remarks by focusing on the public side and on how the evolution of the energy sector is communicated. "It's essential to explain clearly how this sector is being transformed. The traditional, fairly centralized approach to the electrical grid in terms of how energy should be produced within Spain is evolving toward a more distributed model, where the edge gains importance. In this context, it's fundamental to have infrastructure capable of analyzing information in real time directly at the edge. In our case, wind turbines and solar panels produce an enormous amount of asset data, on the basis of which decisions are made about production, its optimization, or even the movement of assets. Therefore, not all information needs to travel to central systems; many decisions must be made locally to gain speed and efficiency. This is a clear example of how 5G and cloud computing form an indivisible whole in how organizations manage information."

Grupo Aldesa

Israel Devesa, chief digital officer and technologist at Grupo Aldesa

Garpress | Foundry.

For his part, Rubén Andrés Priego focused his analysis on two fronts. The first he highlighted was the development of solutions based on the new electronic national ID cards (DNI) with NFC technology, for digital banking and onboarding. "This system allows the document's data to be read with a simple tap of the phone, including certificates validated by the National Police, as well as the ID image. This is a significant change from traditional processes, which required recording videos, taking photographs, and going through manual validation by operations teams," he explained.

Second, he cited telemetry. “Through mobile devices it is possible to collect behavioral information about how users operate. This makes it possible to detect anomalous situations, such as operations whose location changes in a way that would be impossible within a few seconds, or to identify whether an interaction is performed by a human or a machine. With this information, security levels can be adjusted dynamically, reducing friction for the user in normal situations and tightening controls when greater risk is detected,” he added.
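
The “impossible location change” signal Priego mentions is a standard fraud heuristic, often called impossible travel: if two sessions imply a travel speed no human could achieve, escalate the risk level. A minimal sketch, with an assumed speed threshold and event format:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(ev1, ev2, max_kmh=1000.0):
    """Flag two events whose implied travel speed exceeds max_kmh.
    Each event is a (timestamp_seconds, lat, lon) tuple."""
    t1, lat1, lon1 = ev1
    t2, lat2, lon2 = ev2
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True  # same instant, different place
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh
```

In production this feeds a risk engine that tightens authentication only when the flag fires, which is exactly the dynamic-friction trade-off described above.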

Carlos Garriga pointed to how the IE Business School tower is fitted with sensors to track how its spaces are used day to day. “You are almost monitoring experiences. This began with covid, which allowed us to rethink and redefine classrooms and processes.” On 5G and edge, Garriga cited examples such as a pilot project using virtual reality headsets across all academic activity, “which takes education to a new dimension.”

The importance of satellite connections

Asked about the issue, Priego explained that “after last year’s power outage, the data centers kept running normally, but the physical branches were left without an internet connection. In response, Starlink’s satellite connectivity solution was deployed across the entire network of branches and headquarters. This made it possible to guarantee the system’s resilience, maintain operations in critical situations, and explore combined solutions for connectivity and energy autonomy.”

Along the same lines, Devesa added how important it is to have connectivity when 5G fails. “On construction sites, which are often in remote locations, connectivity is a key challenge for integrating systems, monitoring operations, and guaranteeing safety. Unlike urban environments, these areas lack infrastructure such as fiber or a stable network, which makes communication difficult. To solve this, low-earth-orbit satellite networks (Starlink, for example) are increasingly used, making it possible to maintain connectivity over long periods. There is also additional complexity in the OT domain compared with IT, due to the use of less standardized protocols such as Modbus, which demands specialized solutions and equipment,” so “in this context, the main challenge with 5G is not the technology itself, but deploying it in these kinds of environments,” he added.

Can you live without 5G?, Esther Macías asked Carlos Garriga. “More than 5G itself, what is critical are its capabilities, especially low latency, which is essential for applications such as IoT or immersive experiences, where response time is key,” he replied. He also gave his view on satellite communications; although he acknowledged they are “not yet fully deployed” at IE Business School, “they have gained relevance after crisis situations, where guaranteeing the continuity of communication and decision making is fundamental, especially to keep teams and users informed.”

“In this context,” he said to close his point, “emerging solutions such as direct-to-device satellite 5G are being explored as an effective option to secure connectivity for critical staff and operational continuity, even in adverse scenarios.”

Singular Bank

On the right, Rubén Andrés Priego, head of Technology, Operations and Innovation at Singular Bank.

Garpress | Foundry.

Running up against cybersecurity

Israel Devesa confirmed that cybersecurity is fundamental in this hyperconnected world. “Cybersecurity has become a growing concern in the energy sector, especially after recent incidents that expose the vulnerability of its infrastructure. In environments such as wind or photovoltaic farms, where operations are increasingly remote, exposure to risk grows, since access to critical systems happens at a distance.”

“Unlike other sectors,” he specified, “the OT domain has lagged behind IT on security, but that is changing, driven largely by new regulations that demand higher levels of protection and accountability, even at board level.” In Devesa’s view, although investment in cybersecurity in renewables has historically been very low, growing risk awareness and regulatory change are driving a progressive increase. “Cybersecurity is thus going from being a secondary concern to a key, strategic element within energy-sector organizations,” he said.

On this specific point, Rubén Andrés Priego acknowledged that his company is betting heavily on artificial intelligence, with solutions such as assistants based on language models to support its bankers, which are already seeing high day-to-day adoption. These models, however, have limitations in availability and reliability, which can pose a risk, especially in critical situations such as direct interaction with clients. For that reason, the bank is considering edge computing to deploy models locally, reducing dependence on external systems and improving resilience and business continuity. He also noted that many specific tasks could be solved better with local models instead of generalist ones, optimizing performance and efficiency.

IE Business School

Carlos Garriga, CIO del IE Business School

Garpress | Foundry.

The role of technology in the development of their sectors

To close the panel, Esther Macías asked the three experts what role technology will play in the development of the sectors where their companies operate, as well as in their own function. “What we are seeing, and it has happened now with the war in Iran, is that infrastructure has to be designed from the worst possible scenario. Wars and terrorist attacks are a reality. We have to find ways to replicate our infrastructure; for example, in Dubai, where debris from a missile actually fell on an AWS data center and several banks had a very hard time. In other words, you need redundant infrastructure and to be prepared for the worst that can happen. And, on the other hand, be bold: dive into AI and all these latest-generation technologies in Spain. There seems to be a lot of fear around AI. I think that nervousness comes more from ignorance than from what it can really produce,” explained Rubén Andrés Priego.

For his part, Devesa replied that “the fundamental thing is to see it as an opportunity.” “At our company,” he continued, “we often say that the construction sector, and industry in general, is years behind sectors such as banking or even education, like the Instituto de Empresa. But that very lag is an opportunity. With AI and the new infrastructure, the gap between the industrial world and other sectors is narrowing. And in construction, the lag is probably even greater, which makes the potential for improvement significant.” Indeed, he said, “although digitalization is necessary, investment is always scrutinized heavily in this sector. Even so, technology costs are coming down, which makes it easier to move forward. For us, the key is to understand all of this as an opportunity.”

To finish, Carlos Garriga admitted that “not so much at the level of our industry, higher education, but rather in the role of the technology function within any industry, I think it is a mix of opportunity and challenge,” before elaborating: “We are facing a period of deep redefinition of every industry, but especially of the role of technology. What will the role of technology departments be when technology development is being decentralized? The focus will probably shift more toward infrastructure management, risk management, and compliance. I often say that the CIO or CTO of the future will take on many of those less ‘attractive’ but critical functions. At the same time, we will gain prominence as enablers of open technology solutions that the business areas will then use.”


How poor data foundations can undermine AI success

April 17, 2026, 07:00

The promise of AI is immense, but poor-quality data undermines every attempt to derive any value from it. Without the right inputs, AI produces unreliable, incomplete, and even misleading outcomes.

For the average enterprise, data exists in many forms across many systems, says Brian Sathianathan, CTO at Iterate.ai, and integrating structured and unstructured data is harder than most AI pilots account for. “Structured data from operational systems is rarely as tidy as teams are assuming, and unstructured data, like scanned documents and forms, requires a different preparation process before it can be matched and used effectively,” he says, adding this might explain why businesses hit a wall when trying to move beyond POC.

Organizations with impressive POCs typically succeed because they rely on curated datasets, manual workarounds, and tightly controlled environments, says Rhian Letts, head of group technology strategy at Investec. The real challenge lies in converting pilots into reliable, production-grade implementations. Scaling, she adds, requires resilient pipelines, consistent definitions, operational support, and integration into real workflows. It also raises the bar for governance.

“Many data governance frameworks were designed for human-paced consumption,” she says. “AI significantly increases both the speed and volume of data demand and introduces non-human consumers. Governance, therefore, needs to evolve to become more automated, real-time, and explicit about provenance and permissions.”

For Daniel Acton, CTO at technology firm ADG, too many organizations rush to do something with AI without properly analyzing what they actually want to do with it. “AI can be useful, but if you feed AI data that’s incomplete and inaccurate, or if it doesn’t have the data needed to teach the machine to do what you want it to do, the results will be underwhelming,” he says.

Another core issue is a lack of standardized, high-fidelity metadata. “The quality of metadata is the hardest challenge to overcome,” says Brett Pollak, executive director for workplace technology and infrastructure services at UC San Diego. “Metadata is the essential connective tissue that allows an AI agent to interpret a user’s prompt and map it correctly to the intersection of specific columns and rows. Most organizations have unique, institution-specific interpretations of data that are rarely documented properly or kept current.” This creates a translation gap where an agent might have access to the data but lacks the context to understand what a specific field represents in a business context.

Data, data everywhere

Just because obstacles exist, though, doesn’t mean progress needs to pause. “AI use should be aligned to current maturity,” says Letts. “Rather than treating imperfect data as a constraint, organizations can ask how AI might help improve and better connect the data they already have.” Sathianathan agrees, adding that within the new LLM world, even small amounts of accurate data can have significant value. “With traditional machine learning just a few years ago, you needed a lot of data to train models,” he says. “Today, since most LLMs come with highly pre-packaged knowledge, all you need is sufficient amounts of the right data to get it ready for your domain.”

For organizations that have already deployed structured data warehousing, the new barrier is the transition from human-centric storage to machine-actionable delivery, says Pollak. “Readiness now means ensuring your data is wrapped in specific metadata, exposed via modern protocols like MCP servers, and governed by a selective exposure strategy that ensures agents only act on what’s governed,” he says.

Shift your mindset around data

Today, many organizations want to quickly move from data disorder to being data-driven. But if that’s the end goal, CIOs and tech leaders need to treat data as a first-class citizen within their organizations. As part of this shift, data can no longer be seen as a by-product of business systems, but rather as a core output that should be managed with the same level of care as any other product or service. When this happens, business leaders can unlock insights and value they didn’t know existed.

Also, according to Letts, a use-case-led approach is critical. Trying to fix every dataset across an organization is neither practical nor necessary. Meaningful value can be unlocked even where data is imperfect by focusing on the right use cases. By prioritizing five to 10 high-value use cases and mapping the data required to deliver them in production, it’s easier to focus efforts. Foundations can then be strengthened to serve those priorities.

With AI, the threshold for what’s good enough has lowered for many use cases, particularly those focused on productivity and knowledge work, she adds. AI models can extract value from context and connect dots, even where data isn’t perfectly structured. But higher-stakes use cases demand higher quality and stronger controls. “The key is to be explicit about purpose, risk, and operational dependency,” she says. “Lower-risk use cases can move faster with well-described and well-governed context, while higher-risk applications require tighter thresholds.”

Prioritize ownership, governance, and security

All governance frameworks, policies, standards and procedures should be reviewed with AI in mind, adds Letts. Many were designed for human-paced consumption, whereas AI increases speed, scale, and integration across both structured and unstructured data. So validating ownership of critical data elements and establishing a shared business understanding of their meaning is essential to progress. Standardized definitions and metadata should also ensure questions like what it means and where did it come from can always be answered. “AI access must be secure by default,” she adds. “This means having least privilege, audit trails, handling of sensitive data, and strong controls around retrieval. It should always be demonstrable what a model can and cannot access.”
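
Letts’ “secure by default” requirements — least privilege, audit trails, and demonstrable access boundaries — can be sketched as a thin gatekeeper around model retrieval. The scopes, model IDs, and dataset names below are illustrative assumptions:

```python
# Minimal least-privilege gatekeeper for AI data access with an audit trail.
AUDIT_LOG: list[dict] = []

# Each model is granted an explicit, minimal set of datasets (assumed names).
MODEL_SCOPES = {
    "banker-assistant": {"crm.contacts", "products.catalog"},
}

def fetch_for_model(model_id: str, dataset: str, query: str) -> str:
    """Allow retrieval only for datasets in the model's scope; log every
    attempt so it is always demonstrable what a model tried to access."""
    allowed = dataset in MODEL_SCOPES.get(model_id, set())
    AUDIT_LOG.append({"model": model_id, "dataset": dataset,
                      "query": query, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{model_id} has no access to {dataset}")
    return f"results for {query!r} from {dataset}"  # placeholder retrieval
```

Denied attempts are logged before the exception is raised, so the audit trail captures what a model could not access as well as what it could.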

Additionally, organizations must be mindful of data privacy when using AI, too. “Agentic AI systems require a different level of data access than traditional enterprise apps,” says Sathianathan. “Data needs to be analyzed, not just queried, at scale. That’s a big change to privilege models, and IT and security leaders need to think carefully about where all that data is going and what access the AI system really requires.” The same is true, he adds, if the LLM processing that data is running within or outside an organization’s four walls, and such decisions should be considered before deployment, not after. 

Use AI to fill in the gaps

In areas where the business might be falling short, consider using AI to draft and update your organization-specific data definitions, suggests Pollak. “Prioritize establishing a rigorous human-in-the-loop process to ensure this connective tissue is accurate and current.” Additionally, it’s possible to use LLMs and smaller language models to clean up data in certain areas with restrictive prompts, adds Sathianathan. This way, you can process data efficiently and avoid wasting resources by pumping massive amounts of data into large cloud-based LLMs.
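
The restrictive-prompt pattern Sathianathan describes pairs a tightly constrained instruction with a post-check that rejects anything outside the allowed vocabulary. In this sketch, `call_model` is a hypothetical stand-in for whatever local small-model endpoint you run; it is stubbed here so the pattern is self-contained:

```python
# Cleaning messy values with a small model under a restrictive prompt.
ALLOWED_COUNTRIES = {"ES", "GB", "US"}

PROMPT_TEMPLATE = (
    "Normalize this country value to a 2-letter ISO code from {codes}. "
    "Answer with the code only, or UNKNOWN. Value: {value}"
)

def call_model(prompt: str) -> str:
    """Stub for a local SLM call; swap in your own runtime here."""
    value = prompt.rsplit("Value: ", 1)[1].lower()
    lookup = {"spain": "ES", "u.k.": "GB", "usa": "US"}
    return lookup.get(value, "UNKNOWN")

def clean_country(raw: str) -> str:
    prompt = PROMPT_TEMPLATE.format(codes=sorted(ALLOWED_COUNTRIES), value=raw)
    answer = call_model(prompt).strip().upper()
    # Restrictive post-check: never trust free-form model output directly.
    return answer if answer in ALLOWED_COUNTRIES else "UNKNOWN"
```

The post-check is what makes the prompt “restrictive” in practice: even if the model drifts, nothing outside the approved code list ever reaches the cleaned dataset.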

Being AI-ready isn’t a one-time milestone, says Letts. AI capabilities are evolving quickly, which means the threshold for readiness shifts over time. It’s essential to improve end-to-end lineage, build shared semantics and ontology so data is consistently understood, increase interoperability across platforms and domains, and tighten how AI systems access data so it remains secure, auditable, and fit for purpose. “Thresholds change as use cases evolve,” she says, “so data readiness must be treated as an ongoing discipline rather than a completed task.”


CIOs are caught between employee AI fatigue and leadership expectations

April 15, 2026, 07:00

In 2024, when cloud-based software company BlackLine implemented its Buckie AI agent, a knowledge base that employees could ask HR- or IT-related questions, the company didn’t expect to move away from the tool within a year.

“The technology was moving so fast,” CIO Sumit Johar says, and the company needed a different system to scale for the future.

By June the following year, BlackLine had migrated to Google Gemini enterprise, and today, employees organization-wide have built nearly 300 AI agents themselves.

The rapid clip at which organizations are adopting AI is compounding challenges for CIOs. And for employees, being bombarded with new tools and processes is leading to AI fatigue, a feeling of burnout from added workflows and unmet promises of time savings.

At the same time, corporate boards are putting increased pressure on CEOs to deploy AI and deliver results. So CIOs are caught in the middle, balancing board and leadership expectations with employee reality on the ground. They’re pressured to move quickly — a strategy that, in reality, often backfires, according to Doug Gilbert, CIO and chief digital officer at global business technology consultancy Sutherland. He says AI implementations currently have up to a 90% failure rate. “Doing AI right may sound slower, but in the long run, it’s going to be faster,” he says.

Why employee fatigue happens

Riley Stricklin, founder and chief strategy officer at AI integration firm Cadre AI, agrees that AI fatigue is a growing problem across companies. It’s not necessarily because employees are anti-AI, but rather they’re overwhelmed with new tools, new expectations, and constant change, he says.

The initial steps to implement AI take time, temporarily adding to employee workloads before delivering promised time savings, a common complaint Johar hears. Then, the moment teams feel they’re settling in with a new technology, understanding how they can organize their business processes and maximize value, something new comes up that changes everything. “That’s why there’s exhaustion, because things are moving so fast,” he says.

Gilbert adds that AI fatigue most commonly arises when AI is clunky, when organizations bolt AI on top of an existing process, rather than implement it as an in-line solution. Employees could be asked, for instance, to copy and paste data from their programs into a separate LLM like ChatGPT. But the method doesn’t take. “You’re frustrating the heck out of the employee,” Gilbert says.

On top of that, he adds that when AI isn’t properly integrated with a company’s data, or it lacks broader organizational context, the LLM can hallucinate, delivering outputs that, as he puts it, are kind of crap.

Stricklin also says when AI is an added layer instead of an integrated solution, it compounds friction when the purpose is to reduce it.

So the most successful CIOs don’t simply plug AI into existing systems and expect transformation, he says. They rethink the entire workflow and build AI into operations. And in the most seamless AI integrations, Gilbert says employees don’t really think about AI; they simply use a process and get better, faster results.

CIOs pressured from all sides

Gilbert says the clunky approach often happens because of a top-down push that ripples throughout the organization. Boards and CEOs may see case studies or articles of what other companies are doing related to AI, and want to jump on the bandwagon. The AI request trickles to the CIO, who then feels pressure to deploy a solution quickly, rather than take the time to develop an in-line system.

“The reality is you’ll never meet the false expectations they have in their heads,” Gilbert says, adding that boards and CEOs often have a utopian mindset of AI capabilities. Likewise, company investors often expect AI to slash costs, which pressures leadership to demonstrate immediate ROI from AI, Johar adds.

“They don’t always understand that you have to incur the cost before you save any cost,” he says.

In fact, a recent McKinsey survey shows that of the companies that participated, only 39% reported AI‑related impact on their earnings at the enterprise level, suggesting the majority of AI programs have yet to deliver meaningful financial results.

In addition to top-down pressure, sometimes CIOs are feeling stress from the ground up. Despite employee fatigue around AI, Johar’s team at BlackLine has seen requests for AI-based tools from other departments increase by up to 25%.

The higher volume of requests creates fatigue for the IT team itself, as they evaluate myriad tools. Increasing the challenge, the fast pace of change with AI means the team’s processes to evaluate technology have to evolve, too. By the time IT makes a decision to procure a technology or select a supplier, it’s possible the tech is already obsolete, Johar says.

BlackLine has also trained employees on how to build their own agents for specific departmental functions, and to date, employees have built nearly 300 AI agents. CIOs and their teams bear the responsibility of bringing governance and structure to the flood of agents, as Johar puts it, ensuring they meet corporate policies around data privacy or security.

As tech features such as vibe coding continue to gain traction, Johar anticipates additional questions will arise for CIOs related to software oversight.

Framing the AI narrative

Delivering business value continues to be a top priority for tech leaders, and Stricklin says the most successful CIOs establish clear business objectives — whether it’s increasing revenue or margins, or reducing cycle time — before an AI deployment.

But when persuading employees to embrace AI that ultimately creates business value, CIOs may need a different tack than touting the benefits.

Johar says CIOs should frame AI’s benefits as compelling from an employee point of view, like helping employees do their jobs more effectively and building skillsets. “Once you position it that way, employees become a lot more accommodating to invest their time,” he says.

In this kind of climate, Gilbert says CIOs need to reassure employees that AI isn’t a means to headcount reduction, flipping the narrative to how AI will work alongside employees, not replace them. Gilbert adds that humans should always be in the loop to fine-tune models and improve the accuracy of AI’s outputs over time.

Finding the right balance is key, given the gap that still exists between leader and employee sentiment around AI. Executives are 15% more likely to say AI has had a significant positive impact on their companies than their employees are, according to a survey commissioned by Google Workspace.

Stricklin also advises CIOs to have a focused strategy for how they adopt AI instead of trying to boil the ocean and immediately implement AI organization-wide. So they should pick two to three priority areas to use AI over the next six months, and get employees involved with the best course of action. “Trying to address everything simultaneously will cause more harm than wins,” Stricklin says, adding that equally important is selecting areas in which an organization won’t pursue AI.

Gilbert agrees that not every facet of a business is enhanced by gen AI. CIOs should be mindful of that and not be afraid to push back against CEOs or boards if they suspect an AI deployment is unnecessary. “Sometimes AI isn’t the answer,” Gilbert says.


Scaling AI at Union Pacific starts with people

April 15, 2026, 07:00

As AI moves from experimentation to scale, many organizations are discovering a hard truth. Despite heavy investment in technology and models, impact often falls short of expectations and the constraint isn’t the technology itself but the operating model, culture, and the workforce required to turn potential into performance.

It takes a certain range of skills to articulate that distinction, and Rahul Jalali, CIO of Union Pacific Railroad, is leading tech through the company’s next phase of transformation.

After more than two decades at Walmart, where he helped shape technology inside one of the world’s most sophisticated retail platforms, Jalali made a move that surprised some peers by joining a 163-year-old railroad, chartered under Abraham Lincoln in 1862.

“I’ve been lucky to work at two iconic companies that have stayed relevant as times and demand have changed,” he says. “The fundamentals of true digital transformation transfer across industries.”

Five years into his tenure at Union Pacific, and with a potential merger with Norfolk Southern on the horizon, his conviction has only strengthened that sustainable transformation, with or without AI, is powered by people.

Platform thinking in an asset-intensive world

Retail and railroading may appear worlds apart. One moves merchandise from shelf to shopper while the other moves freight across thousands of miles of track to distribution centers. But Jalali sees common principles. “When people ask about the shift from retail to transportation, I go back to fundamentals,” he says. “The most successful companies think like platform companies. They create a set of products that work together to drive customer centricity.”

At Walmart, platform thinking enabled scale. At Union Pacific, it’s meant first investing in foundations. “We had to get the fundamentals right,” he says. “Create the vision, align the teams to it, build the right platforms and capabilities, and then watch the magic happen.” That foundation now underpins customer transparency, operational efficiency, and data-driven decision making. It also provides confidence as Union Pacific prepares for the possible Norfolk Southern merger, which would create the first unified, single-service transcontinental railroad in the US.

“We’re at a clear inflection point,” Jalali says. “We’ve had success delivering foundational platforms that position us well for the AI era and for the potential merger. Technology will be at the forefront of integrating the companies and enabling a seamless supply chain network.”

At the center of this strategy is a simple philosophy, that technology must be business and customer led, not technology for its own sake. “Gone are the days of build it and they will come,” he says. “We’re building with our customers and that creates transparency, which then creates stickiness.” He applies the same logic to suppliers. “We tell our partners to build with us, not just for us, so if we thrive, they thrive as well.”

Trust before accountability

For Jalali, platform thinking is necessary but insufficient on its own since culture determines whether transformation sticks. “You can’t build accountability without first building trust,” he says. When he joined Union Pacific during the early months of the pandemic, he began writing transparent weekly memos detailing whom he was meeting and what he was learning. “I started getting unsolicited responses from the team,” he says, adding that many said it ignited new ideas and connection points across the organization.

Those memos turned into structured tech-focused listening sessions across the organization, led by Jalali and his leadership team. The intent was straightforward: explain the why behind decisions and create space for dialogue. “When you explain the why,” he says, “you create relationships and build trust, and that drives accountability.”

The impact has been measurable. Amid significant transformation and a pending merger, Union Pacific’s technology organization recorded one of its highest engagement scores last year. Jalali sees that as more than a morale indicator. “If people feel heard and invested in, they lean into change,” he says. “That always matters, especially in the AI era.”

Merging institutional knowledge with new perspectives

Union Pacific’s workforce spans generations, from employees with decades of railroading experience to early-career technologists who think natively in data, platforms, and AI. Jalali views that intersection as a competitive advantage. “When you merge deep institutional knowledge with new thinking on modern tech platforms, the results are powerful,” he says.

He’s intentionally created cross-functional exposure so technologists understand the operational realities of rail yards and supply chains, while seasoned employees are exposed to new tools and emerging capabilities. “We ask what they want to learn, what tools they need, and what knowledge we can expose them to,” he says.

For Jalali personally, stepping into the complex world of railroading at the beginning required a deliberate reset. After so many years in retail, he knew credibility would come only through immersion. “If you want to be effective, you have to respect the industry and get into the nitty gritty,” he says. “When I first joined Union Pacific, I told my peers and leaders you’re my teachers and mentors. I needed to deeply understand supply chains and how we operate as a company.” Field visits reinforced that commitment, signaling to his team that technology would be shaped by operations, not layered on top of them.

Such an approach reflects a broader evolution in the CIO role. “We’re part of the north star decision-making arm of the company, not just managing zeroes and ones,” he adds.

Scaling AI with shared accountability

As AI adoption accelerates, Jalali thinks many organizations are applying it to solve the wrong problem. “The biggest risk is thinking technology by itself creates value,” he says. “Tech is part of the solution, but value comes when business process and people change with it.”

At Union Pacific, that belief translates into structure. AI isn’t owned by IT alone. Jalali pairs tech and business leaders in a two-in-the-box model, with shared accountability for outcomes. “We start with the business problem, not the solution. We insert AI in a meaningful way across our organizations to drive value,” he says.

That discipline keeps the focus on measurable impact. “We start by understanding how product moves from A to B, and then we find ways to optimize,” he says. “From there, teams work to quickly deploy minimally viable solutions and refine them in the field. I want progress over perfection.”

The approach reflects a broader conviction that’s shaped his leadership since leaving retail for railroading: technology only scales when the people closest to the work own the outcome. “You have to empower the teams that own the results,” he says.

As organizations enter what many are calling the year AI scales, Jalali is less focused on the sophistication of models than on whether leaders are redefining how their companies operate. The differentiator, after all, will be whether AI is treated as a workforce transformation rather than an algorithm or technology deployment.
