The rise of the double agent CIO

CIOs of B2B SaaS companies are just as responsible for representing technology as they are for running it. In an environment where the buyer is often another CIO, however, the role becomes something fundamentally different. It’s no longer confined to internal execution. It extends into the market, customer conversations, and the moments that ultimately shape revenue, trust, and long-term relationships. So the modern SaaS CIO operates as a true double agent, running the business from within while representing it to the market.

Box CIO Ravi Malick sits squarely in that duality. After serving as CIO of Vistra Energy, a company defined by legacy systems and industrial scale, he stepped into a digitally native, founder-led SaaS business in 2021 where technology is inseparable from the business itself. He now leads internal tech while engaging directly with CIOs of companies evaluating Box, bringing a perspective shaped by both worlds. What stands out in Malick’s perspective isn’t how different the role is, but how much more expansive it’s become.

What stays the same, what evolves

The core tension of the CIO role hasn’t changed. “There’s always more demand than you have the capacity or funding for,” Malick says. Prioritization, alignment to business strategy, and the constant need to modernize while operating at scale still define the job. The difference, however, is the environment in which those challenges now exist.

At Box, Malick operates inside a workforce where technology fluency is high and expectations are even higher. “I partner with 3,000 technologists who love to solve problems with technology,” he says. That creates a powerful advantage, but also a new kind of pressure. Demand for tools, platforms, and innovation is constant, and AI has only accelerated it.

That dynamic is further shaped by Box’s leadership. As a founder-led company, technology conversations extend well beyond the CIO’s organization. “It’s a different dynamic when your CEO is a founder and a technologist,” Malick says. “You’re as much a steward of incoming ideas as you are a generator of them.” That relationship creates both pace and perspective, requiring the CIO to operate as both orchestrator and partner in shaping how technology evolves across the business.

In that context, the CIO is leading within a highly informed, highly engaged organization where expectations for speed and innovation are constant. The challenge isn’t modernization as a one-time effort, but ensuring the tech stack continuously evolves and scales with the business.

Balancing the internal mandate with external pull

What truly differentiates the role in SaaS is what happens outside the enterprise, and the pressure that comes with it. The CIO is still accountable for running IT, ensuring security, and maintaining operational excellence. At the same time, there’s growing expectation to show up externally, engage customers, and directly support revenue.

Malick doesn’t present that balance as seamless. “It’s a daily challenge,” he says. “But sometimes not balanced so well.” There’s a constant push and pull between internal priorities and external demands, and in many cases, revenue pulls hard. The opportunity to influence deals, build relationships, and contribute to growth elevates the strategic importance of the role, but it doesn’t remove the responsibility for the day job.

What allows Malick to operate effectively in both worlds is the strength of the foundation behind him. He points to the maturity of his leadership team, operating model, and internal processes as critical enablers. With clear structures, strong leaders, and disciplined execution in place, he has the bandwidth to spend meaningful time externally. It isn’t always a perfect balance, but it’s a deliberate one.

From operator to peer in the market

Through Box’s customer zero program, Box on Box, Malick operates as both CIO and practitioner, bringing firsthand experience into customer conversations. “I can take how we build at Box to customer conversations,” he says. That perspective shifts the dialogue away from product positioning, and toward the realities of execution.

In a market where CIOs are constantly being pitched, that distinction carries weight. “They want to know how it works from the perspective of someone managing it,” he says, adding he leans into that by being transparent about both successes and missteps. “We share the challenges and false starts we’ve managed through.”

That candor builds credibility, and credibility builds trust. After all, people buy from people they trust, and in enterprise technology, says Malick, peer-to-peer conversations are a faster path to trust than demos. 

The external dimension of the role also holds a symbiotic relationship with internal responsibilities. Malick brings customer conversations back into Box, using them to inform how he thinks about technology decisions and broader strategy. He describes the CIO community as uniquely open, even therapeutic, where leaders candidly share challenges and exchange ideas. That openness creates a feedback loop where external insights sharpen internal execution, and internal experience strengthens external credibility.

What this means for the CIO role

What makes Malick’s perspective especially relevant is that the lesson isn’t limited to SaaS. As technology becomes more central to growth, customer experience, and business model change, CIOs in every industry are being pulled closer to the front office. The shift is about becoming more fluent in how technology translates into trust, speed, and commercial impact, not just becoming more visible.

For Malick, one of the biggest lessons is that the role now demands a different kind of leadership than many CIOs were originally trained for. “Don’t make assumptions, and don’t assume something’s easy or intuitive,” Malick says. In a world where technology is reshaping how people work in real time, communication becomes a strategic discipline. CIOs have to explain change, absorb feedback, and keep translating between technical possibility and business reality.

The rise of AI adds another dimension to the double agent role. CIOs are building the content foundation that AI needs to be effective, and ensuring the organization can experiment with AI without sacrificing compliance or control. In a fast-paced technology company, ideas, opinions, and new technologies come from every direction. So the CIO isn’t simply the expert with the answers but often the one managing velocity itself, deciding where to push and where to hold.

“You have to figure out when you need to be in the fast lane and when you don’t,” Malick says. That kind of judgment is becoming more critical as technology moves to the center of the business, and it’s another reason CIOs are stepping into CEO and COO roles.

As AI accelerates the pace of change and creates the potential to decouple revenue growth from headcount growth, that ability to manage speed, scale, and tradeoffs becomes a defining leadership capability. That’s why the SaaS CIO should matter to leaders far beyond software. With AI transforming every industry, the role is becoming a preview of where the profession is headed — not just to run technology, but help shape how the company grows, how it shows up in the market, and how it earns trust. The double agent CIO may sound like a SaaS phenomenon. Increasingly, though, it looks more like the future of the job.


You selected the right vendors. Now govern them like you mean it.

Waiting for your vendor to fix a program isn’t a strategy. It’s a cost, accumulating quietly while everyone in the room maintains the fiction that the process is working.

I’ve been in both rooms. The room where the client already knows something is wrong and needs the language and the evidence to act, and the room where the client doesn’t know yet. The program feels manageable, the vendor is professional, the steering committee meetings run on time, and the warning signs are sitting in plain sight waiting for someone to name them.

That second room is the more important one. Because the window to act is still open. And most clients don’t move until it’s started to close.

Warning signs most clients miss appear in design

The earliest signal is rarely a missed milestone or a failed deliverable. It appears in language. When the phrase “path to green” starts appearing in status reports and steering committee decks, the program has already accepted it’s not green. It’s shifted from managing execution to managing the narrative.

Watch what the steering committee is actually doing. If it’s consistently hearing about what happened last month rather than what’s forecast for next month, leadership has been converted from a decision-making body into an audience. The vendor controls the agenda, the framing, and the cadence of what gets surfaced.

The most serious signal is when a program sponsor hears about material issues from their own direct reports that the vendor hasn’t raised in the room. That’s not a communication gap but a calculated decision about what leadership is ready to hear. When that pattern appears in SAP, Oracle, or Salesforce programs, the trust that makes the governance model function has already eroded.

When you see these signals, don’t wait for the next steering committee. Start demanding data that can be independently corroborated. Ask the vendor to forecast, not report. If they can’t tell you where the program will be in 60 days, they’re managing your perception, not your program.

Your master conductor has a conflict of interest you’re not addressing

A pattern I’ve seen consistently across multi-vendor programs involving Accenture, Deloitte, PwC, and others is that the master conductor, or program integration coordinator, is quick to name the client’s gaps, other vendors’ shortcomings, and third-party dependencies running behind. What they almost never do is name their own firm’s failures with the same directness in the same room.

That’s not a personality issue but a structural conflict. The firm serving as master conductor is also delivering against its own statement of work (SOW), and the governance position gives it access to information, reporting authority, and narrative control it will use, consciously or not, to protect its own delivery track.

This is why I advise clients to treat the master conductor, or program integration coordinator, role as structurally separate from the vendor delivery role. Ideally, that means an entirely separate firm: an independent integrator with no delivery stake in the outcome. In practice, it’s more often a designated individual or group within the project management or transformation office, carved from one of the existing vendors, reporting directly to the client and accountable to the steering committee, not to their own firm’s engagement leadership.

There’s no true firewall in that model, but there’s a behavioral test. Watch what that role or team does with information that reflects badly on their own firm. Do they surface it or escalate it with the same urgency they bring to client gaps? Do they forecast problems on their own track, or only on everyone else’s?

A master conductor who’ll escalate failures that implicate their own delivery team is doing the job. One who only calls out the client and the other vendors is protecting the engagement.

Before the next SOW is signed, make it structural. Define the master conductor role separately from the delivery role, name the individual or team, set the reporting line directly to the client, and use the behavioral test to determine whether the role is being performed or merely filled.

Waiting isn’t neutral

The financial cost of waiting is more specific than most clients realize. In a multi-vendor environment where two or three system integrators are billing against active SOWs, every month of schedule extension carries a material cost, potentially millions to tens of millions of dollars per firm, not because scope expanded, but because governance didn’t hold the timeline.
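The order of magnitude here is simple multiplication. A minimal sketch of that arithmetic, where the monthly billing rate, vendor count, and months of slippage are illustrative assumptions rather than figures from any specific program:

```python
# Hedged sketch: order-of-magnitude cost of schedule slippage in a
# multi-vendor program. All input values are illustrative assumptions.

def cost_of_delay(monthly_burn_per_vendor, num_vendors, months_late):
    """Extra billing incurred purely because the timeline slipped."""
    return monthly_burn_per_vendor * num_vendors * months_late

# Three system integrators each billing a hypothetical $2.5M/month,
# with the program running six months late:
exposure = cost_of_delay(2_500_000, 3, 6)
print(f"${exposure:,.0f}")  # → $45,000,000
```

The point of the sketch is that none of the inputs involve expanded scope; the exposure comes entirely from governance failing to hold the timeline.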

The commercial exposure appears even earlier. When scope boundaries are unclear and the integrated plan is unstable, vendors have no reliable baseline to price against. The result is predictable: a significant spread between a time-and-materials estimate and a fixed fee quote for the same scope. That spread is not a pricing difference. It’s the vendor converting your governance uncertainty into their contract protection. The client absorbs it either way.

What makes the waiting feel reasonable is that the vendor’s day-to-day team is usually professional and working hard. So the problem is authority and incentive, not effort. The program manager running the engagement can’t authorize additional resources or commit spend across organizational lines. Their job is to manage the relationship, protect their firm’s margin, and keep the engagement profitable. Fixing your program isn’t the same job.

The window to act is real and short. A senior executive at the vendor can absorb costs, bring new talent, and make commitments the delivery team has no authority to make. But that authority diminishes as the program ages. The more that’s been billed and the more scope has shifted, the harder it is for even a motivated senior executive to make the client whole. Clients who act in design or early build have options that clients who wait until three months before go-live don’t.

The intervention that works is a leadership one

When the signals are clear and the client is ready to act, the intervention that moves the needle isn’t a governance document or a scorecard meeting but a top-to-top conversation between client and vendor senior leadership. It involves execs who aren’t running the day-to-day program but have something personal at stake in the outcome.

That conversation works because it activates a different set of incentives. The vendor’s senior executive, the sector partner and industry leader whose name is on the relationship, needs your program to be referenceable. They don’t want a PR failure on a flagship engagement, nor do they want to explain to their firm’s leadership why a major client program collapsed. They have authority their delivery team doesn’t: power to assign their best resources, ability to absorb costs the SOW or change order doesn’t cover, and they can accelerate staffing decisions and make commitments that change what the program can do. They have skin in the game their team doesn’t.

Also, structure the engagement deliberately. Put senior executives from both sides at the table, with new talent brought in as a visible signal of vendor investment. Maintain a cadence that continues until the data shows the program is back on track, with time-bound accountability on both sides. And make it explicitly understood that the relationship itself is under review, not just the program.

This is sustained leadership engagement, not a one-time meeting, and it doesn’t replace the governance model. It enforces it.

The only recovery signal worth trusting

When the top-to-top works, you’ll know it by what the vendor brings back to the table. Not reassurances or a revised plan with optimistic milestone dates, but facts about where they failed, what they’re changing, and, most critically, where the client has performance gaps that also need to close.

A vendor who comes back and accepts blame still manages the relationship. A vendor who says we failed here and here, these are the specific changes we’re making, and you have a gap here we need you to address, that vendor is engaged and mutually accountable. That’s the integrity test.

It runs both ways because program failure almost always does. Slow client decisions. Unavailable business resources. Requirements that shifted after design was locked. A vendor who names those things alongside their own failures isn’t deflecting, they’re investing in an outcome. That’s the signal the recovery is real.

If the executive meeting produces only promises and general commitment, keep the pressure on. Real engagement looks like specific admissions, named resources, and a willingness to hold the mirror up to both sides of the table.

You hold the accountability. Be the human in the middle.

Through all of it, the client holds the ultimate accountability. The master conductor holds the responsibility for execution and integration across the vendor ecosystem. That distinction isn’t administrative. It means the client can’t outsource their judgment, regardless of how rigorous the governance model looks on paper.

Think of it this way: the vendor can hallucinate. Not out of malice, but because every status report is a curated narrative produced by people whose compensation, future work, and professional reputation depend on how that narrative lands. The program deck isn’t neutral data; it’s information filtered through interests. What’s present tells you something. What’s absent, however, tells you more.

Be the human in the middle. Verify, cross-reference, ask questions the deck didn’t answer, and notice what’s missing as much as what’s there. If the steering committee is only hearing good news, that’s a sign someone is deciding what leadership is ready to hear, not that the program is running well.

Demand forecasts, not status reports. Look for hard evidence that can be independently corroborated. When the vendor names a client performance gap alongside their own, take it seriously. That’s the accountability model working the way it’s supposed to, not a deflection.

The warning signs may not always be apparent, though. The window is open, but won’t stay that way, so waiting isn’t a strategy.

Ways CIOs can prove to boards that AI projects will deliver

There’s been a wake-up call for CIOs. The talk of perceived productivity boosts that previously dominated conversations about AI has been replaced by a demand for measurable value from investments in emerging tech.

With MIT research putting AI project failure rates as high as 95%, executive boards are starting to question when AI will pay dividends. PwC’s Global CEO Survey shows that more than half of companies have seen neither higher revenues nor lower costs from AI, and only one in eight have achieved positive outcomes.

While Gartner predicts significant growth in AI spending this year, John-David Lovelock, distinguished VP analyst at the research firm, says the lack of tangible returns means digital leaders are changing tack. Rather than hoping their AI explorations will produce returns, CIOs are switching to more targeted initiatives.

“The projects growing quickly are the ones doing business, and those initiatives include AI,” he says. “CIOs are starting to de-emphasize AI and re-emphasize business. These projects are about AI enhancing existing work and moving away from moonshot transformational projects.”

Lenovo’s CIO Playbook for 2026, produced with tech analyst IDC, also suggests enterprises will get serious about AI deployments this year, with explorations replaced by production-level services that drive business transformation. With boards exerting pressure for measurable returns, Ewa Zborowska, research director at IDC, says more digital leaders want to use AI to enhance, innovate, and reinvent their organizations.

“CIOs aren’t just considering AI out of curiosity, they want to see what they can get out of it to grow the business,” she says. “AI adoption is much more about doing new things or taking a fresh approach to creating value rather than becoming more efficient at cost-cutting.”

Such is the clamor for value that Richard Corbridge, CIO at property specialist Segro, suggests that returns from AI are a main digital leadership priority: “If you discover, for example, that everyone in the organization used Copilot 10 times today, that might mean they’ve been more efficient,” he says. “But what have they actually done with the time they saved? How has saving time created value?”

CIOs will grapple with these questions during the next 12 months. With CEOs and boards becoming impatient for returns, digital leaders are working more with their bosses to define value. Successful CIOs fine-tune their arguments to ensure their projects are backed, and then demonstrate the value of their AI initiatives to the board.

Defining a valuable AI project

What’s clear is that CIOs can’t deliver outputs from AI projects without input from their enterprise peers. IDC’s Zborowska says tighter cooperation on project ownership and KPIs ensures emerging technology investments are targeted at the right places.

This increased interaction between digital and business leaders also changes project aims. As stakeholders work closely together to generate value from AI, Zborowska expects executives to seek KPIs that stretch across operational concerns.

“I’d bet we see more non-financial aims over the next few years,” she says. “Executives will consider things such as are employees more engaged, has their work improved in any way, are AI implementations impacting customer experiences, and are internal decisions being made more efficiently.”

Martin Hardy, cyber portfolio and architecture director at the UK’s Royal Mail, agrees that defining valuable AI projects is all about finding the right focus. Effective deployments target processes in distinct areas, and business stakeholders must be part of the value-defining process.

“If we’re making decisions about legal documentation, AI is probably not there yet,” he says. “But if we can use AI to approve holidays, for instance, that might be something because if you have rules that say no more than two people off at a time, you could use AI to check about booking holidays without having to ask everyone in the office.”
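The holiday example Hardy describes rests on a deterministic rule that automation can check before anyone asks around the office. A minimal sketch of that rule, keeping the two-person cap from the quote; the data model and dates are illustrative assumptions:

```python
# Hedged sketch of the rule behind the holiday-approval example:
# approve a request only if it never pushes concurrent absences above a cap.
# The two-person limit comes from the quote; everything else is invented.
from datetime import date, timedelta

MAX_CONCURRENT_OFF = 2

def can_approve(request, existing, limit=MAX_CONCURRENT_OFF):
    """request and each item in existing are (start, end) date tuples, inclusive."""
    start, end = request
    day = start
    while day <= end:
        # Count colleagues already booked off on this day
        off_today = sum(1 for s, e in existing if s <= day <= e)
        if off_today + 1 > limit:  # this request would exceed the cap
            return False
        day += timedelta(days=1)
    return True

existing = [(date(2025, 7, 1), date(2025, 7, 5)),
            (date(2025, 7, 3), date(2025, 7, 4))]
print(can_approve((date(2025, 7, 3), date(2025, 7, 6)), existing))  # False
print(can_approve((date(2025, 7, 7), date(2025, 7, 10)), existing))  # True
```

An AI layer might sit on top for conversational intake, but the approval decision itself stays a plain, auditable rule.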

For CIOs seeking value-generating use cases, Gartner’s Lovelock suggests AI can deliver results in key business areas such as boosting revenue, supporting decision-making, engaging staff, and improving experiences. He says the right path to AI exploitation correlates with Gartner’s enterprise technology adoption profiles, which group companies into a range of categories.

“The folks who are furthest forward, what we call the agile leaders in technology, are much more likely to drive AI to change the business,” he says. “The laggards on the other side are more likely to take on the technology that’s given to them by incumbent software providers, and use it in a prescriptive manner.”

Fine-tuning the use case

The challenge now is for digital leaders to work with their business peers to determine a more refined approach to AI deployment. For some CIOs, the value of AI is clear but the potential risks must be considered.

Take Dan Keyworth, executive director of performance technology and systems at McLaren Racing, whose focus is operational stability and race-day reliability. While he says being aware of developments in generative and agentic AI is crucial, the priority is tried-and-tested technologies rather than innovations that put performance at risk.

“Formula One is grounded in traditional machine learning and simulation,” he says. “Developing models has been a big part of our performance journey, and since the engine already existed, gen AI is the turbo that’s bolted on with more investment in AI.”

For other digital leaders, like Barry Panayi, group chief data officer at insurance firm Howden, success depends on keeping the human in the loop. Yes, automation can improve customer service, but rather than replacing staff, he wants to use AI to ensure Howden’s professionals have the right insight when they interact with clients.

“There’s absolutely no desire to use data to drive productivity by automating what we do with our customers,” he says. “This is a business where people speak to people. Our brokers need information that can give them an edge, and prove to their clients they understand the risks and can give them the best deals.”

Nick Pearson, CIO at technology specialist Ricoh Europe, adds that the use case for AI at his firm is two-fold: boosting operational productivity and improving customer processes. So he’s established a tri-party AI council with the head of service operations and the commercial manager in Spain. This council explores opportunities to buy, build, and reuse emerging tech.

“We’ve got a strategy that looks at where AI matters, which means exploring the technology we already have to boost internal productivity,” he says. “We’ve got a lot of people who know how to code and build things in Copilot Studio and other platforms, so let’s use that to increase productivity.”

Showing returns to the board

For Gartner’s Lovelock, the key lesson for CIOs eager to generate value from AI is to work with their peers and set desired outcomes before investing. “Most people start with the idea that more is more, and if you do that, you won’t get to the idea of quality,” he says.

That sentiment resonates with Segro’s Corbridge, who encourages digital leaders to start conversations with other professionals by focusing on value. Ask people how investing in an AI implementation will create value for them personally, for the wider business, and the customers the organization serves.

He says CIOs shouldn’t try to prove that AI works, but rather concentrate on how emerging tech adds value. That definition is so critical to Segro’s way of working that the organization uses the phrase proof of value rather than proof of concept.

“Most things work, but they might be more expensive,” he says. “For example, you might be able to use AI to transform how the organization uses spreadsheets, but that project might cost you $300,000. And if you’re currently paying someone $40,000 to do that work, and they’re happy doing it, then you have to question the value.”
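Corbridge’s spreadsheet example reduces to break-even arithmetic. A minimal sketch using the figures from his quote (the one-variable model deliberately ignores amortization periods and softer benefits):

```python
# Proof-of-value check using the figures from Corbridge's example.
# A project clears the bar only if the cost it displaces exceeds its price.

def creates_value(project_cost, annual_cost_replaced, years=1):
    """True if the displaced cost over the period exceeds the project cost."""
    return annual_cost_replaced * years > project_cost

# A $300,000 AI project vs. the $40,000/year work it would displace:
print(creates_value(300_000, 40_000))      # False: no value in year one
print(creates_value(300_000, 40_000, 10))  # True only over a decade
```

The sketch makes the “proof of value, not proof of concept” distinction concrete: the technology working is never the question being answered.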

Lessons are being learned, says IDC’s Zborowska, whose firm’s research suggests that half of AI POCs now transition into production. While some might think this success rate isn’t impressive, the figure a year ago was 10%. After several years of AI exploration, it appears CIOs and their businesses are now firmly focused on real returns.

“These numbers speak to the fact that companies are being more mature and mindful in how they allocate budgets,” she says. “They also support the main theme that we’re on a journey to transformation and a maturing market for AI adoption.”

How effective are semantic hubs in moving agentic AI forward?

The focus of enterprise AI initiatives is shifting from storing, processing, and moving data to ensuring data means the same thing wherever it’s used. This is vital if an LLM is to understand the nuances and specifics of an individual business.

Walmart’s recent announcement that it’s ending its partnership with OpenAI to power shopping through ChatGPT is a case in point. Relying on the LLM to scrape Walmart’s product data and then infer meaning from that led to hallucinations and poor customer experience, resulting in conversion rates three times lower than shoppers using Walmart’s own website. Had the AI agent been grounded in Walmart’s actual business logic developed over years, then the results might have been very different.

Semantic hubs, which provide a centralized architecture translating raw data into consistent and clear business concepts, are key components in powering effective agentic AI deployments. They mitigate the risk of semantic drift, where an LLM’s understanding of concepts and terms is fluid and changes over time. However, businesses don’t operate in a vacuum and need to exchange data with suppliers, customers, regulators, and financial institutions. Semantic hubs on their own may become a point of failure in these instances, as definitions and understanding vary across organizations. As commerce moves toward more autonomous agentic systems, this is a problem.
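In code terms, a semantic hub is essentially one canonical definition per business concept, with each source system’s raw field names resolved through it. A minimal sketch; the concepts, systems, and field names below are invented for illustration, not drawn from any real schema:

```python
# Minimal sketch of a semantic hub: one canonical definition per business
# concept, plus per-system aliases. All names are illustrative assumptions.

SEMANTIC_HUB = {
    "net_revenue": {
        "definition": "gross sales minus returns and discounts",
        "aliases": {"crm": "rev_net", "warehouse": "NET_REV_USD"},
    },
    "active_customer": {
        "definition": "customer with an order in the trailing 90 days",
        "aliases": {"crm": "is_active", "warehouse": "CUST_ACTIVE_FLAG"},
    },
}

def resolve(system, field):
    """Translate a system-specific field into its canonical business concept."""
    for concept, spec in SEMANTIC_HUB.items():
        if spec["aliases"].get(system) == field:
            return concept
    raise KeyError(f"no canonical concept for {system}.{field}")

print(resolve("crm", "rev_net"))                 # → net_revenue
print(resolve("warehouse", "CUST_ACTIVE_FLAG"))  # → active_customer
```

The failure mode the article describes appears the moment a partner organization maintains its own hub with a different “definition” string for the same concept name: both agents resolve cleanly, yet mean different things.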

A shared language

McKinsey predicts AI agents could handle between $3 trillion and $5 trillion of global commerce by 2030. The economic and business rationale for moving to a more agentic world is compelling and, although still in its early stages, the technology that will power this revolution is developing rapidly. MCP, Agent2Agent (A2A), and other open standards offer protocols for agents to communicate with each other and pull in data as needed. However, a missing building block is a common language that allows dispersed agents to consistently and accurately infer meaning from the data they use.

The Open Semantic Interchange (OSI), initially developed by Salesforce and Snowflake, can be seen as a universal mental model for data, ensuring every AI agent perceives business definitions with the same precision and intent as a human expert, regardless of which system they navigate. “If successful, OSI has the power to fundamentally reshape the competitive landscape by commoditizing the definition of a semantic model,” says Brad Shimmin, VP practice lead, data and analytics at tech analyst The Futurum Group. “Vendors will no longer be able to lock in customers with proprietary metric languages. Instead, they’ll need to compete on the execution of semantics, differentiating on performance, caching efficiency, security, and the sophistication of their AI integrations.”
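The commoditization Shimmin describes can be sketched as follows. OSI is an early-stage specification and its actual schema may differ, so this is a hypothetical illustration of the principle only: a vendor-neutral semantic model that two independent platforms both import, so each computes a metric from the same shared definition rather than from its own proprietary one.

```python
# Illustrative shared semantic model: metric names, required inputs, and
# the agreed formula live in one definition, not inside any one vendor's tool.
shared_model = {
    "metrics": {
        "gross_margin": {
            "description": "Revenue minus cost of goods sold, over revenue",
            "inputs": ["revenue", "cogs"],
            "expression": lambda revenue, cogs: (revenue - cogs) / revenue,
        }
    }
}

def evaluate(platform_name, model, metric, **facts):
    """Any platform that imports the model evaluates its metrics identically."""
    spec = model["metrics"][metric]
    missing = [i for i in spec["inputs"] if i not in facts]
    if missing:
        raise ValueError(f"{platform_name}: missing inputs {missing}")
    return spec["expression"](**{i: facts[i] for i in spec["inputs"]})

# Two hypothetical agents on different stacks, one shared definition.
a = evaluate("warehouse_agent", shared_model, "gross_margin", revenue=200.0, cogs=150.0)
b = evaluate("bi_agent", shared_model, "gross_margin", revenue=200.0, cogs=150.0)
print(a, b)  # both 0.25: same semantics, same answer, regardless of vendor
```

With the definition externalized like this, vendors can only differentiate on how well they execute it, which is exactly the competitive shift Shimmin anticipates.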

Hurdles on the road to adoption

While the OSI initiative was only launched in September last year, major vendors including Cloudera, Databricks, Instacart, and ThoughtSpot have now joined founding members Salesforce and Snowflake in developing the standard. However, many CIOs are concerned that without wider cooperation from other enterprise software providers like Microsoft (Power BI) and SAP, there’s a risk it won’t build the momentum needed for true interoperability across industries. The rapid adoption of MCP by competing vendors offers hope that more companies will join the initiative as they realize common and open standards are required to build effective agentic systems.

Also, the OSI specification is still in its early development phase, so committing live data assets to tools still in beta isn’t viable in the short term. This is likely to change over the coming months as leading vendors incorporate OSI and case studies emerge.

Another essential driver of adoption will be support from industry groups with a vested interest in seeing safe and reliable agentic systems deployed across their sectors. Banking and medicine, for instance, rely on agreed languages and definitions to prevent financial fraud and health risks. Trevor Hall, chief architect at Tableau Salesforce and an OSI contributor, says that while OSI wouldn’t necessarily define specific domain models such as medicine, he hopes those industries will lean on OSI to define their own models.

OSI next steps

Snowflake is positioning itself as a primary custodian of the OSI standard, and with around 20% of the cloud data warehousing market, it has the potential to drive adoption. Salesforce, the other key founding member, sees OSI as a core element of its Agentforce fleet of AI agents. Alongside this, Phase 2 of expanding the OSI ecosystem will continue through the remainder of 2026, with plans to bring native import and export support for OSI models to over 50 data platforms.

Things are moving quickly in the agentic space and the underpinning infrastructure is taking shape. Now’s the time to make sure your data is ready for this revolution, and OSI could be a key element in your planning.

The increasing need to expand a tech knowledge base

Technological sovereignty is often debated in terms of jurisdiction, compliance, or vendor origin. All of that matters, but it leaves out the critical issue of retaining knowledge, which directly impacts the CIO.

Case in point: British bank TSB undertook a critical platform migration in 2018. On paper, the operation rested on guarantees of a validated provider, testing, and formal program governance.

Once the migration was complete, the new platform began experiencing technical difficulties, resulting in a significant disruption to branch, telephone, online, and mobile banking services, affecting a large portion of its 5.2 million customers. The problems were so complex that key issues weren’t fully resolved until the end of the year.

The crisis also had a significant economic impact. Banco Sabadell, which acquired TSB in 2015, had to absorb losses exceeding €200 million, and four years later, in December 2022, British regulators imposed a combined fine of nearly £49 million on the bank for failures in operational risk management, governance, and outsourcing supervision related to the migration. Then, in April 2023, the Prudential Regulation Authority, the Bank of England’s prudential supervisor, personally fined the then CIO £81,620 for failing to take reasonable steps to ensure adequate supervision.

The lesson from this case isn’t that a large migration can go wrong; every CIO knows that can happen. It’s that TSB didn’t have the capacity to govern and question a critical vendor dependency.

The constant of knowledge

When we talk about technological dependence, we usually think of market concentration, long-term contracts, proprietary formats, migration difficulties, or negotiating power with the vendor. All of that exists and will continue to be important. But knowledge dependence is another form that rarely comes up in the conversation, and it has a greater impact on the CIO’s day-to-day work.

This occurs when the organization doesn’t retain enough internal knowledge to discuss the technology, or subject it to serious scrutiny.

The TSB case was a clear example. The oversight of a critical department relied too heavily on unquestioned supplier guarantees. In other words, there was insufficient internal capacity to rigorously govern the outsourcing relationship.

With this example, the meaning of lock-in changes. It no longer manifests itself when migration becomes prohibitive or when an architecture becomes unchangeable. It begins earlier, when the company is operating its technology but can no longer reliably evaluate it.

In fact, this dependency isn’t easy to perceive because it coexists with a sense of reasonable operation. The services are available and the providers respond, and yet risks are being taken.

On the other hand, it forces a broader definition of sovereignty. The issue goes beyond where the data resides, under what jurisdiction a provider operates, and what degree of regulatory exposure a platform introduces.

Another question is how much critical knowledge the company retains about what underpins its operations. From this perspective, maintaining sovereignty doesn’t mean reclaiming ownership of the technology or internalizing its implementation, which keeps the conversation from being reduced to a legal or geopolitical debate.

Hidden knowledge dependencies

The common mistake when discussing tech dependence is to focus solely on the noisiest areas like cloud computing, AI, large platforms, and data storage. When discussing knowledge dependence, it’s essential, but not always easy, to look inward.

One area to consider is the architecture. Even if systems are functioning, it may become increasingly difficult to answer basic questions, like why the environment is designed the way it is, which parts are replaceable, or what changing a critical layer would entail. If those answers are elusive, it’s a sign of dependency.

Another aspect is the operation. Outsourcing execution can make perfect sense, but problems arise when understanding is outsourced along with it, when the internal team has to turn to outsiders to make decisions or solve problems.

Dependency can also be hidden within the complexity of technological layers. In other words, it doesn’t necessarily have to be directly linked to a large platform, but to the set of integrations and connectors surrounding it, or a partner ecosystem that’s become a tangled mess. If no one understands the complete picture, dependency exists.

The knowledge CIOs can’t afford to lose

All of this shifts the focus to the specific responsibility of knowledge. Not all capabilities carry the same weight or have the same strategic value. But there’s a decisive threshold: the moment when the organization no longer understands a dependency well enough to manage it. From that point on, the risk extends beyond the operation itself. The quality of decisions deteriorates, the CIO’s ability to discuss risks or costs diminishes, and many aspects end up being accepted without clear rationale.

If it isn’t detected in time, there’s a risk of reaching a point of no return, where control of the technological roadmap is lost.

The debate for the CIO

The solution isn’t necessarily to distrust suppliers or outsourcing on principle. There’s a more subtle and demanding task for the CIO: deciding clearly what knowledge can and can’t be delegated externally. So the debate on sovereignty needs to become more pragmatic, and more closely tied to the company’s actual capacity to understand what it depends on and to change course when necessary.

In an environment of complex platforms, encapsulated services, and outsourced intelligence, preserving decision-making capacity will be an indisputable condition for technological autonomy.
