SAP: Latest news and insights

SAP (NYSE:SAP) is an enterprise software vendor based in Walldorf, Germany. Its cloud and on-premises enterprise resource planning (ERP) software, including S/4HANA, helps organizations manage their business operations and customer relations. The German multinational also offers a vast array of software solutions tailored to specific facets of the enterprise, including data management, analytics, and supply chain management, as well as solutions aimed at specific industry verticals.

AI is an area of increasing emphasis for SAP, which has a market cap of $200 billion, making it the 31st-largest technology vendor in the world as of mid-May 2026.

Latest SAP news and analysis

SAP’s AI offer to legacy customers comes with a catch

May 13, 2026: SAP is offering the more than 20,000 customers “stuck” on its legacy ECC systems a path to AI, but only if they commit half their maintenance spend to the cloud first.

SAP’s AI promises last year? Most are still rolling out

May 13, 2026: Among the bold AI promises SAP made at Sapphire 2025 was that Knowledge Graph, Joule Studio, and AI Agent Hub would ship by the end of the year. Those tools are now technically available, but adoption has lagged, and SAP is already announcing version 2.0.

SAP’s biggest AI bet yet: Agents that execute, not just assist

May 12, 2026: At Sapphire 2026, SAP unveiled what it calls the “Autonomous Enterprise”: a sweeping vision in which AI agents don’t just assist workers, but execute business processes themselves as the vendor transforms itself into what CEO Christian Klein calls “a business AI company.”

SAP customers say migration is eating their budgets — and AI is next in line

May 12, 2026: SAP is piling on the AI innovations at its annual Sapphire event, but for many SAP customers still navigating costly migrations to SAP’s S/4HANA cloud ERP, the more pressing question isn’t what AI can do; it’s whether they can get there from here.

SAP’s new API policy restricts AI access, draws customer criticism

May 4, 2026: SAP has encountered pushback from users over the scope and implications of its new policy of limiting API usage to “SAP-endorsed architectures, data services, or service-specific pathways.”

SAP to acquire data lakehouse vendor Dremio

May 4, 2026: SAP plans to acquire Dremio, which bills itself as an agentic lakehouse company, even though SAP already partners with Snowflake and Databricks, vendors with offerings similar to Dremio’s.

SAP to acquire Reltio to help customers make data AI-ready

March 27, 2026: ERP giant SAP has agreed to acquire master data management software provider Reltio to bolster the capabilities of its Business Data Cloud (BDC). The goal is to make customers’ data AI-ready so it provides reliable input for Joule and Joule agents across the enterprise, SAP said.

SAP shakes up its service and support portfolio, but only for the cloud

March 3, 2026: SAP has unveiled an update to its services and support portfolio that focuses on cloud and AI, while leaving on-premises offerings untouched. Split into three tiers, the new plans provide what the company describes as a “simplified engagement model” that leans heavily on AI-based tools.

SAP migrations often fail during planning phase

February 24, 2026: Most SAP migrations fail before implementation begins. This is the conclusion of “The State of SAP Migrations” report from ISG, a technology research and advisory firm that surveyed more than 200 business and IT decision-makers at large international companies with over 1,000 employees.

SAP Emarsys is dead, long live SAP Engagement Cloud

February 20, 2026: SAP has changed the name of its SAP Emarsys customer experience offering to SAP Engagement Cloud, signalling — at last — its commitment to integrating it into the core of its enterprise infrastructure.

SAP tosses some Compatibility Pack users a (short) lifeline

January 8, 2026: SAP is throwing a lifeline to customers who are running late on their transition from S/4HANA Compatibility Packs in their data centers. Although usage rights were set to expire at the end of 2025 for most users, SAP has announced a “final” five-month transition period for customers.

SAP employees’ trust in leadership has diminished since the restructuring

December 3, 2025: SAP’s restructuring may have been good for its bottom line, but behind the scenes, it has backfired. The company did what it promised, said Greyhound Research chief analyst Sanchit Vir Gogia. But, he said, “Numbers only tell half the story. Inside the organization, something broke.”

SAP touts Microsoft disaster recovery plan for Europe; analysts doubt it will work

November 19, 2025: SAP has cut European deals with Microsoft, Capgemini, and Orange to ostensibly deliver an emergency disaster recovery failover if Microsoft is legally blocked from delivering European services.

SAP to offer concessions to the EU over antitrust probe into ERP support practices

November 7, 2025: SAP will submit concessions to the European Commission to settle an antitrust investigation into its software licensing and maintenance practices.

Five takeaways from SAP TechEd Berlin

November 6, 2025: SAP’s TechEd conference this week offered a few surprises, but its focus, predictably, could be spelled in two letters: AI. Artificial intelligence was everywhere, as the more than 2,800 in-person delegates and the 20,000-plus online attendees quickly discovered.

SAP shakes up its certification process

November 4, 2025: SAP is overhauling how IT pros earn certifications in its technologies — with the aim of better aligning credential evaluation with the way IT pros work with SAP products in real-world settings.

SAP customers not sold on S/4HANA ROI

November 4, 2025: A large majority of SAP customers are wary of the company’s efforts to move them away from perpetual licenses to subscription pricing, with most worried about future price increases, according to a recent survey.

SAP and Snowflake add zero-copy sharing between their systems

November 5, 2025: SAP and Snowflake are collaborating to extend SAP Business Data Cloud (BDC) with Snowflake’s data and AI platform, they announced Tuesday at SAP TechEd in Berlin.

US Supreme Court rejects SAP appeal, paving the way for Teradata antitrust trial

October 7, 2025: The US Supreme Court declined to hear SAP’s appeal in an antitrust case brought by Teradata, allowing the data analytics firm to move forward with claims that the German software maker illegally tied its enterprise applications to its own database technology.

SAP sets timeline to replace SuccessFactors recruiting module with SmartRecruiters

October 6, 2025: Now that SAP has completed its acquisition of SmartRecruiters, executives are talking about their plans to replace the recruiting module of SuccessFactors, SAP’s cloud-based HR management tool.

SAP’s role-aware Joule Assistants will partner with workers

October 6, 2025: SAP is rolling out new role-aware Joule Assistants, department-specific clouds, and tools for finance, HR, and supply chain professionals at SAP Connect, its new event for line-of-business leaders. The AI assistants are designed to partner with human workers while managing and orchestrating agents across SAP Business Suite. 

SAP targeted by EU antitrust investigation of its ERP support services

September 25, 2025: SAP’s software support and maintenance services are the target of an antitrust investigation by the European Commission, which is concerned that the company forces customers to buy its services for longer, and for more licenses, than they need. The Commission, which enforces antitrust policy in the European Union, opened a formal investigation into the ways that SAP limits customers’ choices on where and when to buy maintenance and support services for its software.

Amazon and SAP partner on European data sovereignty platforms to offer better oversight

September 24, 2025: Amazon’s AWS European Sovereign Cloud, set to launch in Germany by the end of 2025, is starting to take shape with the announcement of SAP’s Sovereign Cloud as its latest partner. The announcement is part of a gradual expansion of the capabilities on offer to organizations looking to invest in the concept of sovereign clouds to meet the challenges posed by AI expansion.

SAP offers concessions to EU regulators to avert an antitrust probe

September 23, 2025: SAP has reportedly submitted concessions to the European Commission, aimed at resolving antitrust concerns related to its ERP software business practices. The move represents a preemptive effort to avoid a formal investigation that could result in fines of up to 10% of its annual global revenue.

SAP user group calls for licensing clarity

September 17, 2025: SAP’s licensing policies are top of mind at the conference of the German-speaking SAP User Group (DSAG) this week. While the cloud is the right way forward, DSAG members need transparency from SAP to allow them to make the move, DSAG chairman Jens Hungershausen said in a pre-event post.

SAP change management still challenges enterprises

September 9, 2025: SAP managers are under pressure to speed up change and delivery of return on investment, and some are seeking AI’s help with that, according to the SAP Change Management Index 2025. The survey, commissioned by Basis Technologies, found that 48% of respondents are facing pressure to implement changes more quickly, and 41% to deliver faster return on investment.

SAP data sovereignty service lets customers run cloud workloads inside their data centers

September 2, 2025: SAP Sovereign Cloud On-Site is a new service launched in conjunction with the announcement of the global availability of the company’s Sovereign Cloud platform. In practice, local SAP or SAP-certified personnel will manage SAP Sovereign Cloud On-Site workloads in a data center approved by the customer.

Three critical patches for SAP

August 12, 2025: For its August Patch Tuesday, SAP released a series of patches, including three for critical vulnerabilities that each ranked 9.9 on the CVSS scoring scale.

SAP to acquire SmartRecruiters to enhance its SuccessFactors HCM suite

August 1, 2025: SAP announced it has agreed to acquire AI-powered talent acquisition firm SmartRecruiters to bolster its SuccessFactors human capital management (HCM) suite.

SAP seeks to cut 80% of your data management work

July 15, 2025: According to Irfan Khan, Chief Product Officer and President of SAP Data & Analytics, the strength of the company’s Business Data Cloud platform is that it can automatically align and synchronize various data sources without ETL or data pipeline management. This will allow customers to reduce the collection, refinement, and quality control tasks that typically account for more than 80% of the overall data management function, and instead focus more time on AI model training and application, he said.

US judge issues split decision in antitrust case against SAP

July 2, 2025: The judge allowed parts of the case to proceed to trial, but pushed back against accusations that SAP was inappropriately denying a rival access to customer data.

SAP’s Rise rebrand conceals cost changes

June 27, 2025: Licensing changes in the wake of SAP’s rebranding of Rise with SAP Premium as Cloud ERP Private could increase costs for CIOs not paying attention to the minutiae of their ERP and AI usage.

SAP GUI flaws expose sensitive data via weak or no encryption

June 25, 2025: Newly disclosed vulnerabilities in SAP GUI for Windows and Java store user data with outdated or no encryption, posing compliance and breach risks for enterprises.

SAP, IBM slammed for role in Quebec auto insurance board ERP overhaul fiasco

June 18, 2025: Investigations into a controversial Canadian ERP implementation involving SAP SE and LGS, an IBM subsidiary, took a bizarre turn when the Quebec anti-corruption squad conducted raids at the headquarters of the organization that commissioned the system overhaul.

Upgrade or else, SAP warns as end of S/4HANA Compatibility Pack licensing nears

June 4, 2025: SAP S/4HANA customers who rely on functionality in Compatibility Packs have just a few months to transition to native capabilities as the termination of licenses for the decade-old packs looms. The company has imposed a hard deadline of December 31, 2025, for most customers to stop using the packs, and will treat their use after that as “commercial non-compliance — a breach of contract.”

Nearly half of SAP ECC customers may stick with legacy ERP beyond 2027

June 4, 2025: At the end of 2024, only 39%, or about 14,000, of the 35,000 SAP ECC customers had migrated to S/4HANA, according to Gartner. At the current rate of migration, Gartner projects there will still be 17,000 holdouts, or nearly half of the ECC customer base, by 2027.

SAP teams up with Alibaba to host Cloud ERP workloads in China

May 28, 2025: Alibaba Group, the Chinese e-commerce giant and cloud computing infrastructure provider, is adopting SAP’s Cloud ERP Private for its own enterprise infrastructure, and the companies plan to jointly sell their services.

SAP and AWS launch co-innovation program to accelerate enterprise AI adoption

May 22, 2025: SAP and Amazon Web Services have launched an AI Co-Innovation Program, offering dedicated technical resources and cloud credits to help enterprises embed AWS generative AI tools into their ERP systems.

SAP wants to make AI ubiquitous — just don’t ask about S/4HANA

May 22, 2025: For all the talk of marrying SAP’s core business suite with emerging AI power, conspicuously absent from this year’s Sapphire was any discussion of the state of S/4HANA, adoption of which appears to be lagging.

IBM’s massive SAP S/4HANA migration pays off

May 21, 2025: Ann Funai, CIO and vice president of IBM’s Business Platform Transformation, says IBM achieved a 30% reduction in infrastructure-related operational costs since completing its migration to SAP’s cloud ERP platform.

SAP revamps its cloud ERP application packages

May 20, 2025: SAP is repackaging its cloud ERP applications to make it easier for new customers to buy into its ecosystem, and adding AI-based product enhancements for its existing customers.

SAP goes all-in on agentic AI at SAP Sapphire

May 20, 2025: At its annual SAP Sapphire conference in Orlando, Florida, the ERP software titan debuted a slew of new innovations, including AI Foundation, which it calls the AI operating system for SAP business AI, as well as new AI agents across the SAP Business Suite, and an omnipresent Joule copilot that’ll accompany and support users as they move across applications. 

The road to S/4HANA: How CIOs are managing SAP ECC’s end of support

May 19, 2025: With the deadline for SAP ECC end of support nearing, CIOs face major decisions, and likely headaches, as they consider whether and how to transform their core ERP systems for a new era.

SAP NetWeaver customers urged to deploy patch for critical zero-day vulnerability

April 25, 2025: The unrestricted file upload flaw is likely being exploited by an initial access broker to deploy JSP web shells that grant full access to servers and allow installing additional malware payloads.

Has Oracle knocked SAP off the ERP throne?

April 23, 2025: A new study estimates that Oracle became the leading provider of ERP applications in 2024, stealing a position that SAP had held since the early 1980s.

SAP defies the economic downturn

April 23, 2025: Despite all the economic turbulence, SAP made a solid start to the new year: Sales and profit are up significantly compared to the same quarter last year.

SAP customers on Business Suite: New strategy, same old concerns

April 10, 2025: Once again, SAP customers must grapple with a new SAP strategy. Their demands for ‘Business Unleashed’ are familiar: more clarity and consistency, better integration, accessibility for all.

What is S/4HANA? SAP’s latest ERP system explained

April 2, 2025: S/4HANA can realign and transform business models and processes. Here’s what you need to know about SAP’s flagship enterprise resource planning (ERP) system.

SAP adoption surges in Europe as enterprises embrace cloud

March 26, 2025: About 40% of companies in the DACH region are increasing their overall IT budgets, while SAP-specific investments are on the rise for 47% of enterprises, according to a German SAP user group.

SAP customers struggle with S/4HANA migration

March 24, 2025: According to a recent Horváth study, more than 60% of companies experience deviations in budget, schedule, and result quality during S/4HANA migration.

SAP CEO Christian Klein predicts manual data entry will disappear from SAP by 2027

March 20, 2025: Celebrating the software giant’s 30th anniversary in Korea, Klein announced plans to extend SAP’s AI assistant to the Korean market with bold pronouncements about user productivity.

SAP introduces Joule for Developers

March 19, 2025: SAP has added AI capabilities powered by its AI assistant, Joule, to SAP Build Process Automation and SAP Build apps, extending the existing AI capabilities in SAP Build Code and ABAP Cloud.

Celonis sues SAP for anti-competitive data access practices

March 17, 2025: The German startup Celonis has accused the software giant of abusing its market power to hinder competition — in particular, to give its own process mining solution preferential treatment over third-party offerings.

SAP patches severe vulnerabilities in NetWeaver and Commerce apps

March 12, 2025: The enterprise software vendor bundled 25 security patches into its March update, addressing flaws that impact middleware, interfaces, custom apps, and more.

SAP taps Splunk’s Simon Davies to lead reorg’d APAC region

February 19, 2025: Paul Marriott passes the helm to the former Splunk exec who had previous stints with Salesforce and Microsoft in the region.

SAP aims to unify data for AI, analytics with new Business Data Cloud

February 13, 2025: With its new Business Data Cloud, SAP wants to help enterprises unify data held in its own systems with that in other vendors’ applications. Its goal is to expedite advanced analytics and AI use cases.

SAP builds new business suite in the cloud

February 13, 2025: SAP is taking more control: From the new Business Suite to the Business Data Cloud and the AI apps, everything is to be pre-configured in the cloud.

SAP throws a lifeline to large organizations with new ECC offering

February 4, 2025: Scheduled for release in 2028, SAP’s new ECC offering will, one analyst says, reduce risks, security vulnerabilities, and compliance challenges tied to outdated systems.

SAP restructures board to emphasize AI-first, suite-first strategy

January 29, 2025: SAP is expanding its Executive Board to include the Strategy & Operations division, appointing eight additional managers to the enlarged board.

SAP to give on-prem customers three-year reprieve — with a catch

January 27, 2025: Since many migrations to S/4HANA are still stalled, SAP apparently wants to give its customers more time to make the switch.

IBM offers SAP-on-Power users a new way into the cloud

January 7, 2025: Enterprises running S/4HANA on IBM Power Systems will soon have access to SAP’s RISE managed application offering, which has previously focused on those running the ERP suite on x86 servers.

SAP customers still slow to deploy AI broadly

December 23, 2024: Organizations are interested in leveraging AI to optimize internal processes and analyze data for insights, but security, data quality, and governance remain hurdles.

SAP systems increasingly targeted by cyber attackers

December 13, 2024: A review of four years of threat intelligence data, presented Friday at Black Hat by Yvan Genuer, a senior security researcher at Onapsis, reports a spike in hacker interest in breaking into SAP ERP systems.

Nearly 25% of SAP ECC customers unsure about their future

December 6, 2024: The future of SAP architectures is hybrid. But according to a survey conducted by the Financials subgroup of the German-speaking SAP User Group (DSAG), many organizations have yet to decide exactly where that journey will lead.

SAP ups AI factor in its SuccessFactors HCM suite

October 28, 2024: The launch by SAP of new AI capabilities in its SuccessFactors HCM (human capital management) suite Monday is a case of “better late to the party than never,” according to an analyst with Info-Tech Research Group.

Riled by SAP’s AI policy, customers issue list of demands

October 22, 2024: SAP’s strategy of offering AI innovations only in the cloud continues to attract a lot of criticism from its user base. Here’s what SAP customers would like to see happen.

SAP: good figures, but bad mood

October 22, 2024: Employee engagement is suffering from the ongoing restructuring at SAP, although the software company reported good figures and has raised its outlook.

SAP sustainability tracking rollout focuses on data consistency, outlier detection

October 21, 2024: Enterprise CIOs are under increasing pressure from global regulators to rein in sustainability shortfalls due to partner problems. SAP’s pitch is that most enterprise partners are already using SAP, so it’s in an ideal position to collect and distribute partner data.

SAP joins the AI agent era — but not all customers may benefit

October 9, 2024: SAP is expanding its generative AI copilot Joule to include AI agents. Deeply embedded in SAP systems, the company’s agents aim to solve increasingly complex tasks.

SAP launches collaborative AI agents, adds Knowledge Graph

October 8, 2024: SAP’s promised collaboration between its AI copilot, Joule, and other agents will become reality in the fourth quarter of 2024, the company announced at its 2024 TechEd conference Tuesday.

SAP Build gains AI capabilities to help build autonomous agents

October 8, 2024: SAP wants developers to view its Build platform as the one extension solution for all of SAP’s applications, according to Michael Aneling, chief product officer for SAP Business Technology Platform (BTP).

SAP faces probe in the US over alleged price fixing in government contracts

September 24, 2024: German software giant SAP is under investigation by US officials for allegedly conspiring to overcharge the US government for its technology products over the course of a decade. The probe, led by the Department of Justice (DOJ), is focused on whether SAP and its reseller, Carahsoft Technology, colluded to fix prices on sales to the US military and other government entities.

SAP CTO to step down after ‘inappropriate behavior’

September 3, 2024: Juergen Mueller is leaving SAP’s executive board, saying his behavior at a company event was incompatible with company values.

SAP partners up to make AI more practical

August 15, 2024: Many companies find it difficult to incorporate AI into their business processes. To change this, SAP wants to work more closely with the appliedAI initiative.

SAP patches critical bugs allowing full system compromise

August 14, 2024: Both vulnerabilities score above 9 on CVSS and can allow access to sensitive data if not patched immediately.

SAP is restructuring its Executive Board

July 30, 2024: Head of sales Scott Russell and head of marketing Julia White are unexpectedly leaving SAP; White will not be replaced.

SAP restructuring to impact more jobs than expected

July 24, 2024: The restructuring at SAP affects almost a tenth of its workforce. The company estimates the cost of the internal restructuring at around €3 billion.

SAP Q2 results reveal large orgs now firmly on the path to AI

July 24, 2024: It “had a direct impact on our bookings,” company CEO says during second quarter earnings call.

SAP offers AI to all Rise customers — in unknown, varying amounts

July 19, 2024: Joule AI is now available to all Rise with SAP customers, but customers not using SAP Cloud solutions remain out of luck.

SAP security holes raise questions about the rush to AI

July 18, 2024: Cloud security firm Wiz has published a detailed report about SAP security holes, now patched, that raises alarming questions about whether the rush to AI is pushing cybersecurity defenses into a secondary role.

SAP publishes open source manifesto

June 27, 2024: SAP has made five commitments — make consistent contributions to the community, champion open standards, strive to adopt an open-first approach, nurture open source ecosystems, and adopt a feedback-driven approach.

SAP, Salesforce lead $356 billion enterprise applications market: IDC

June 21, 2024: The software giants were neck-and-neck as the overall enterprise software market grew 12% in 2023, said IDC.

SAP to buy digital adoption specialist WalkMe for $1.5 billion

June 5, 2024: After Signavio and LeanIX, SAP is acquiring the Israeli provider WalkMe to help user companies with their digital transformation.

SAP CEO Christian Klein: Everything we do contains AI

June 5, 2024: SAP CEO Christian Klein kicked off the company’s Sapphire customer conference with the promise of a real productivity boost from AI.

SAP adds more tools for developers on its platform

June 4, 2024: Behind the scenes, SAP is also using AI to extend the capabilities of its Business Technology Platform.

SAP embeds Joule in entire enterprise portfolio, plans integration to other AIs

June 4, 2024: Joule could communicate with other AIs to complete more complex tasks spanning multiple applications, SAP suggests.

SAP AI pact with AWS offers customers more gen AI options

May 29, 2024: SAP wants to work more closely with AWS on AI, complementing existing partnerships with Google and Microsoft.

SAP customers see S/4HANA and AI as top digital transformation drivers

May 20, 2024: With SAP’s end of mainstream maintenance for SAP Business Suite 7 set for 2027, recent findings from the US SAP user group reveal that companies are increasing focus on shifting to S/4HANA and embracing AI.

SAP faces turning point as Hasso Plattner steps down

May 15, 2024: The departure of chairman Hasso Plattner marks the end of the founding era at SAP, and adds further complexity for the German software multinational as it faces ongoing restructuring efforts, among many other challenges.

SAP forecasts clarity in the cloud

May 7, 2024: After customers and user groups that adopted S/4HANA early accused SAP of bait-and-switch tactics, Martin Bayer, editor-in-chief of CIO Germany, recently sat down with Christian Klein, CEO of the multinational software company, to clear the air on cloud reassurance, using gen AI as a migration accelerant, and the outlook for future growth.

Deutsche Telekom calls on SAP for IT infrastructure move to Rise

March 22, 2024: Deutsche Telekom will move its SAP infrastructure to Rise with the help of its own IT services subsidiary, T-Systems.

SAP user group: S/4HANA usage is growing, but still in the minority

March 21, 2024: Customers want more information about cloud and AI strategy from the German ERP giant.

SAP and Nvidia expand partnership to aid customers with gen AI

March 18, 2024: SAP is embedding Nvidia’s generative AI foundry service into SAP Datasphere, SAP BTP, RISE with SAP, and SAP’s enterprise applications portfolio to equip customers with greater and more simplified access to the technology.

SAP enhances Datasphere and SAC for AI-driven transformation

March 6, 2024: SAP adds new generative AI and data governance features to SAP Datasphere and SAP Analytics Cloud, enabling customers to incorporate non-SAP and unstructured data when creating AI-based planning models and scenarios.

SAP names Philipp Herzig as chief artificial intelligence officer

February 16, 2024: It’s a small promotion and a change of title for one man, and a sign of a larger change in strategic focus for many others at SAP.

SAP 2024 outlook: 5 predictions for customers

February 12, 2024: As SAP continues to position itself as a leader in generative AI and innovative technologies, customers must prepare to navigate new service offerings and an inevitable move to SAP RISE.

SAP has a new succession plan

February 12, 2024: SAP’s board wants to bring former Nokia chairman Pekka Ala-Pietilä on board to succeed founder Hasso Plattner as chairman.

SAP and IBM under scanner of Indian investigative agency for Air India deal

February 5, 2024: Air India allegedly failed to adhere to the rules while awarding an ERP contract worth $27 million to SAP India and IBM India.

SAP offers big discount to lure on-prem S/4HANA customers to Rise

January 30, 2024:

SAP announces $2.2B restructuring program that’ll impact 8,000 jobs

January 24, 2024: The restructuring program will focus on AI and impact about 7.4% of SAP’s total workforce.

SAP doubles down on cloud-first innovation with executive reshuffle

January 10, 2024: Product engineering head Thomas Saueressig will take on a new role to maximize potential for customers in the cloud, but that’s cold comfort for on-premises users.

SAP pays multi-million fine for bribery

January 11, 2024: With a $220 million fine, SAP is drawing a line under a long-standing investigation by US authorities. The company is alleged to have bribed officials.

SAP faces breakdown in trust over innovation plans

December 5, 2023: The company’s plan to offer future innovations in S/4HANA only to subscribers of its Rise with SAP offering is alienating customers, user conference hears.

SAP unveils tools to help enterprises build their own gen AI apps

November 1, 2023: SAP Build Code suite combines new and existing developer tools, while a foundational AI model trained on anonymized customer data will be available to help automate tasks.

SAP’s new generative AI pricing: Neither transparent nor explainable yet

October 12, 2023: The ERP vendor is adding a new pricing tier to its Rise with SAP offering with an opaque mix of bundled and usage-based pricing for generative AI functionality.

SAP offers faster updates, longer maintenance for S/4HANA in private clouds

October 11, 2023: SAP is offering free migration consultations, more frequent feature releases and two years’ additional maintenance to entice customers to update to S/4HANA Cloud private edition and, ultimately, adopt Rise with SAP.

SAP prepares to add Joule generative AI copilot across its apps

September 26, 2023: Like Salesforce and ServiceNow, SAP is promising to embed an AI copilot throughout its applications, but planning a more gradual roll-out than some competitors.

SAP’s AI offer to legacy customers comes with a catch

More than 20,000 SAP customers are “stuck” on legacy ECC systems due to customizations, according to Geraldine McBride, CEO of SAP partner MyWave, and many won’t migrate anytime soon. At Sapphire 2026, SAP offered them a path to AI, but only if they commit half their maintenance spend to the cloud first.

The offer, confirmed by SAP Chief Strategy Officer Sebastian Steinhaeuser during a media briefing, enables SAP to extend limited AI capabilities to customers still running its software on-premises during their transition to cloud ERP. But it comes with a significant condition: Customers must shift at least 50% of their maintenance spending to the cloud before they can enable Joule assistants on premises.

It’s another small crack in SAP’s resolve: for years, the company has upset customers on its legacy systems by telling them the only way to access its latest innovations is to migrate to its newest platform, S/4HANA, in the cloud.

“There’s no confusion at all” about the latest change, said Steinhaeuser. “To be super clear, our Joule assistants and agents are designed for cloud. This is destiny and destiny is unchanged. […] However, we also want to meet customers where they are. We’ve put in a lot of work technically to make Joule available to ECC and S/4 customers who have committed to their journey.”

An SAP spokesperson confirmed the 50% requirement and said the company has no plans to extend the ECC end-of-service deadline beyond 2027, with extended support available through 2030 at additional cost.

For customers already on SAP’s Cloud ERP Private offering, the deal is sweeter. SAP will activate up to three Joule Assistants at no additional cost to cover common workflows in finance, HR, and the supply chain. New customers will be guided through activation during onboarding; existing customers can contact their SAP representative.

Meeting customers halfway to the cloud

SAP has spent years pushing customers toward S/4HANA, its cloud-based ERP successor to the aging ECC platform. But migration has proven difficult. The Americas’ SAP Users’ Group (ASUG) 2026 Pulse of the SAP Customer survey found that S/4HANA migration remains customers’ single biggest challenge, with 61% citing budget constraints and 48% struggling with integration.

Manoj Swaminathan, SAP’s chief product officer for Business Suite, struck a conciliatory tone ahead of Sapphire. “We want to take every customer along with us,” he said. “Some are in ECC, some on-prem, and some on the pathway as part of cloud ERP transformation. We want to work with every one of them so they can realize value now as they are modernizing.”

But for customers who haven’t committed significant cloud spending, that road to value remains gated.

Christopher Diaz, senior vice president of finance at Rio2, a Canadian mining company that recently implemented SAP’s public cloud, described the challenge for smaller organizations. “We plan to use the Joule assistant once we have the production information in the cloud,” he said. “Right now, we’re still tracking a bunch of production information in different spreadsheets.”

Carrot and stick

Industry analysts offered mixed assessments. Maribel Lopez, founder of Lopez Research, gave SAP some credit. “They are correct that it is very difficult for any vendor — SAP or any other software provider — to offer AI in an on-premises platform that wasn’t designed for this,” she said. “What SAP needs to do is find a way to migrate customers faster.”

Mickey North Rizza, group vice president of enterprise software at IDC, framed AI as both incentive and pressure. “AI is the new carrot and stick, reshaping businesses into a new operating model. Migration is an investment, and despite the great promise of cloud, many just haven’t done it,” she said, adding that IDC research found 31% of organizations moving from on-premises to cloud while 23% are actually moving back.

“Considering SAP is providing faster migration and testing as part of the movement to the cloud, and they are offering AI at different value propositions, they are taking into account the needs of their clients,” North Rizza said.

Other vendors aren’t waiting

While SAP threads the needle between flexibility and protecting cloud revenue, third-party vendors are moving faster. MyWave, an SAP partner whose CEO Geraldine McBride is a former SAP president, offers AI agents that run natively on ECC without requiring migration or a 50% cloud commitment.

Many organizations “are kind of stuck on ECC due to Z codes and industry customizations,” McBride said. “We’re releasing value inside ECC today that can contribute to help migrate to S/4. It’s never migrate then transform — we transform then migrate.”

Ben McGrail, managing director at consulting firm Xmateria, estimates that as many as 40% of SAP customers will still be on ECC by 2030. “A good third aren’t even thinking about migration,” he said. “The biggest mistake SAP made was talking so hard about deadlines.”

Those who’ve tried and stalled, he added, often ran into a familiar problem: “They put it in front of the CFO, and he wants to know what the operational business gets out of it. [But] nothing is driving this from a business perspective.”

For those still running legacy systems, the message from Sapphire is clear: SAP’s AI future is available to you — but only if you’re already halfway to the cloud.

Your AI agent deletes critical data: Who is responsible?

A Replit AI coding agent deleted a company’s live production database during an active code freeze last year. “This was a catastrophic failure on my part,” it nonchalantly admitted. “I destroyed months of work in seconds.” While the data was eventually restored with a rollback, the agent believed the destruction was permanent and had no built-in mechanism to undo its own actions.

For a CIO, this isn’t just a technical glitch. It’s a total breakdown in enterprise accountability. When an agent causes this much damage, the blame game usually circles between the business unit that requested the tool, the engineer who gave it write-access and the security team that signed off on it.

The software alone can’t be held responsible. And as AI adoption reaches 88% of enterprises, according to McKinsey, many organizations still lack a clear answer for who actually owns the fallout. A new Rubrik Zero Labs report highlights this problem: 86% of IT and security leaders expect AI agents to outpace their organization’s security guardrails within the next year.

IT must lead to mitigate agent risk

Organizations that treat AI agents as experiments rather than core infrastructure take on increased risk. That approach fails at scale for lack of operational maturity, not technical capability. An MIT survey suggests that 95% of generative AI pilots fail to deliver measurable business impact, often because they are forced into existing processes without a proper management framework.

I’ve talked to numerous IT leaders who report this problem. Teams experiment with agents for data analysis or customer service, but when an issue arises, the first hurdle is figuring out who coordinates the response. Part of the confusion stems from a misunderstanding of what these agents actually are. Unlike a standard SaaS API, which is built for a narrow, specific function requiring constant re-authentication, AI agents can be partially or fully autonomous.

By utilizing the Model Context Protocol (MCP), agents can interact with an entire SaaS platform rather than just one “door.” Essentially, you authenticate once and the agent has the keys to the whole building to consume whatever it needs for a workflow. The shift from functional isolation to platform-wide autonomy is why the old governance rules no longer apply.
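The shift can be sketched in a few lines. The example below is a hypothetical illustration, not MCP’s actual API: a conventional token authorizes one narrow scope per call, while an agent session, authenticated once, can reach any platform capability that isn’t explicitly denied.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a narrow per-function API scope vs. a platform-wide agent session.

@dataclass
class ApiToken:
    scopes: set  # e.g. {"invoices:read"} -- one narrow "door"

    def authorize(self, action: str) -> bool:
        # Only explicitly granted actions succeed.
        return action in self.scopes

@dataclass
class AgentSession:
    # Authenticated once; every platform capability is reachable unless denied.
    denied: set = field(default_factory=set)

    def authorize(self, action: str) -> bool:
        return action not in self.denied

narrow = ApiToken(scopes={"invoices:read"})
agent = AgentSession(denied={"db:drop"})

print(narrow.authorize("invoices:read"))   # True  -- only the granted door opens
print(narrow.authorize("records:delete"))  # False
print(agent.authorize("records:delete"))   # True  -- keys to the whole building
```

The default-deny token and the default-allow session are the same authentication event; the governance difference lies entirely in the blast radius of what that one credential can reach.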

The shared responsibility framework

At Rubrik, we use a shared responsibility model through our AI Center of Excellence (CoE). To lead this, we’ve developed a specific roles and responsibilities matrix that governs our AI strategy. Our CTO takes the lead alongside the general counsel, the CFO and me to act as executive decision-makers. A senior strategy team includes the CISO, general counsel and head of global structure, followed by the architects and cross-functional leaders in IT, InfoSec and legal who enable the actual training, tool approval and execution.

Our approach focuses on three distinct pillars: secure adoption and governance of third-party tools like Claude, building our own internal AI capabilities and integrating AI into our core products. Under this CoE, we apply the same principles we use for any enterprise technology but with defined departmental stakes.

IT owns the architecture and deployment standards. InfoSec provides continuous assessment, looking for prompt injection risks and vulnerabilities. Legal defines the guardrails for data handling and automated decision-making. Finally, business teams act as the consumers using AI to transform operations. The CoE exists to support them, ensuring these standards are followed so that misalignment doesn’t introduce risk.

Make governance practical

We want to move fast but not be reckless. Enabling agents to take write actions should not be a fearful decision if the guardrails in place include strong governance and recoverability. Our process ensures that when a team identifies a need for an agent, there is a direct route from the initial request through technical and security vetting into a monitored production environment.

We’ve seen the need for this firsthand during our own internal AI deployments. As we rolled out more tools, each with its own set of terms and regulations, we hit a point of chaos. There was no holistic way to establish safeguards. By using an agent cloud framework, we established full observability and remediation and automatically enforced security at the agent level.

For example, when we expanded our use of Claude Code in internal test environments, we discovered a class of security issues that did not map cleanly to our existing controls. To control that behavior, we defined a policy boundary barring the transfer of data from the agent environment to external code repositories, forums and other public-facing platforms.
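In practice, a boundary like that reduces to an egress check before any agent-initiated transfer. The following is a hypothetical sketch of such a control; the blocklist and function are illustrative, not the actual implementation:

```python
# Hypothetical egress guard: block agent-environment data from public destinations.
# The blocklist entries are examples of "external code repositories, forums and
# other public-facing platforms," not a real production policy.
BLOCKED_DESTINATIONS = {"github.com", "gitlab.com", "stackoverflow.com", "pastebin.com"}

def egress_allowed(destination_host: str) -> bool:
    """Deny transfers from the agent sandbox to blocked hosts or their subdomains."""
    host = destination_host.lower().strip(".")
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DESTINATIONS)

allowed = egress_allowed("internal.example.corp")  # permitted: not a public code host
blocked = egress_allowed("gist.github.com")        # denied: subdomain of a blocked host
```

Checking subdomains as well as exact hosts matters: an agent pushing to `gist.github.com` leaks data just as surely as one pushing to `github.com`.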

The recovery time problem

The operational stakes for these failures are rising. According to the Rubrik Zero Labs report, nearly nine in ten leaders expressed concern about meeting recovery objectives as agent-driven threats increase. In addition, 88% say they cannot roll back agent actions without system disruption. When agent failures compound security or data integrity issues, recovery becomes impossible without a framework.

In practice, detection usually starts with the consumer. For example, we use a “PTO Agent” that scans calendars and cross-references them with our HR system to ensure time-off requests are aligned. I recently received a Slack alert from this agent noting OOO time in April and asking to log it, even though I had already cleared it. While a minor “hallucination,” it tested our process: the issue flows to the IT help desk, which automatically notifies the AI delivery team and the business owner. Currently, our team triages these errors manually to fix the bug and redeploy, but our roadmap involves automating this triage with a human-in-the-loop component.

AI agents: from innovation to operations

Organizations that formalize AI governance attribute 27% of their total AI efficiency gains to those guardrails. Many AI governance failures come down to two things organizations skip in the rush to deploy:

  1. Treat agents as first-class identities. Most “rogue” behavior is a permissions failure. If an agent isn’t integrated into your identity provider with strict least-privilege access and a clear audit trail, it shouldn’t be on your network. We must treat agents like employees: They need a “manager” in the system and an identity that can be instantly revoked.
  2. Demand architectural reversibility. Legacy environments rely on “undo” buttons and version control. AI agents operate in live production where the “undo” is often invisible. Before an agent moves past the pilot stage, your architectural review must answer: If this agent makes an unauthorized change, how do we surgically reverse it without taking the business offline? Agent reversibility requires intent-driven, context-rich AI governance engines to maintain oversight.
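Both rules can be sketched together. The example below is a hypothetical illustration (the `AgentIdentity` and `Ledger` types are invented for this sketch): the agent acts only through an identity with explicit least-privilege grants and a named manager, every action lands in an audit trail, and each change registers a compensating action so it can be surgically reversed.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentIdentity:
    name: str
    manager: str                    # the human accountable for this agent
    grants: set = field(default_factory=set)
    revoked: bool = False           # identity can be instantly revoked

@dataclass
class Ledger:
    audit: list = field(default_factory=list)
    undo_stack: list = field(default_factory=list)  # compensating actions

    def act(self, agent: AgentIdentity, action: str,
            do: Callable, undo: Callable):
        # Rule 1: least privilege, with every decision audit-logged.
        if agent.revoked or action not in agent.grants:
            self.audit.append((agent.name, action, "DENIED"))
            raise PermissionError(f"{agent.name} lacks grant: {action}")
        result = do()
        self.audit.append((agent.name, action, "OK"))
        # Rule 2: reversibility is recorded before the change is accepted.
        self.undo_stack.append(undo)
        return result

    def rollback_last(self):
        # Surgically reverse the most recent change without a full restore.
        self.undo_stack.pop()()

# Usage: a scoped agent updates a record; the change is reversible.
db = {"status": "open"}
agent = AgentIdentity("pto-agent", manager="it-ops", grants={"records:update"})
ledger = Ledger()
ledger.act(agent, "records:update",
           do=lambda: db.update(status="closed"),
           undo=lambda: db.update(status="open"))
ledger.rollback_last()   # db["status"] is back to "open"
```

The design choice worth noting is that the undo callable is captured at write time, not reconstructed after an incident, which is what makes the reversal surgical rather than a system-wide restore.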

Organizations must have the right strategy for secure agent operations. Build the model gradually. Begin with IT-led oversight for critical functions and expand as you gain experience. The organizations that establish operational accountability now will scale AI effectively. Those that continue with scattered, ungoverned deployments will keep playing the “who’s responsible?” game every time something breaks.

This article is published as part of the Foundry Expert Contributor Network.

CISA’s AI SBOM guidance pushes software supply-chain oversight into new territory

The US Cybersecurity and Infrastructure Security Agency (CISA) and its G7 cyber agency partners have released a list of minimum elements for an AI software bill of materials, a move that could help CISOs assess the security and provenance of AI systems entering enterprise environments.

The guidance extends traditional SBOM concepts into AI by calling for documentation of models, datasets, software components, providers, licenses, and other dependencies. The supplemental minimum elements are not exhaustive or mandatory, CISA said, but reflect a consensus among G7 experts and are expected to expand as AI technology evolves.

For security leaders, the document puts AI risk more firmly inside enterprise supply-chain oversight. That could make AI SBOMs part of the same vendor-risk conversations that already surround software composition, cloud services, and third-party technology platforms.

But one important difference is that AI SBOMs require visibility beyond software composition, because AI risk is shaped by models, data, infrastructure, and system behavior.

“AI systems add new layers of opacity: model lineage, training and inference data, fine-tuning history, prompts, vector databases, third-party foundation models, APIs, orchestration logic, and runtime behavior,” said Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services.

AI software is also different because it is probabilistic, with outputs shaped by data provenance as well as code, according to Keith Prabhu, founder and CEO of Confidis.

“AI software inherently encompasses more than just software,” Prabhu said. “In addition to the software components, it would also need to track models, training data, prompts and system instructions, model weights and checkpoints, and GPU dependencies.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, framed the shift more broadly.

“The question is no longer only, ‘what code is inside this product?’ The question is, ‘what code, model, data, infrastructure, control, and vendor decision shapes this system’s behavior?’” Gogia said.
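The elements those experts describe suggest a record shaped roughly like the sketch below. The field names are illustrative assumptions, not CISA’s actual schema:

```python
# Illustrative AI SBOM entry covering models, datasets, software components,
# providers, and licenses. Field names and values are hypothetical examples,
# not the G7 minimum-elements schema.
ai_sbom_entry = {
    "component": "support-chat-service",
    "version": "2.4.1",
    "supplier": "Example Vendor Inc.",
    "license": "proprietary",
    "software_dependencies": ["fastapi==0.110", "openssl-3.0"],
    "models": [{
        "name": "example-foundation-model",
        "provider": "Example AI Lab",     # third-party foundation model dependency
        "fine_tuned": True,
    }],
    "datasets": [{
        "name": "support-tickets-2024",
        "provenance": "internal",         # training-data lineage
        "license": "internal-use-only",
    }],
}

def missing_fields(entry: dict,
                   required=("component", "supplier", "models", "datasets")) -> list:
    """Flag entries that omit minimum elements before a vendor review."""
    return [f for f in required if not entry.get(f)]
```

A check like `missing_fields` is the procurement-side use: it surfaces what a vendor left out, though, as the analysts note, it cannot prove the disclosed contents are complete or accurate.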

How to make use of it

The immediate use of the guidance may be in procurement and vendor risk management. It gives security teams a way to press vendors before AI-enabled products are allowed into production.

“Organizations should ask vendors to provide visibility into model provenance, training data sources, software and API dependencies, licensing obligations, security testing practices, update cycles, runtime monitoring controls, and shared responsibility boundaries,” Grover said.

The level of scrutiny may also depend on the type of supplier.

“For large vendors, CISOs should specifically seek transparency around third-party foundation model dependencies, geographic data flows, model update practices, and whether customer data is being retained for model training or fine-tuning,” Grover added. “For startups, the focus should be on the maturity of governance processes, dependency tracking, secure development practices, identity controls, and operational monitoring across the AI life cycle.”

The same risk-based approach should apply to how the technology will be used. For higher-risk deployments, Gogia said AI SBOMs should become part of a broader vendor evidence pack, supported by documentation on data flows, security architecture, model behavior, privacy impact, red-team findings, incident response, logging, and prompt-injection testing.

The gaps that remain

The biggest gap is that an AI SBOM may show what a vendor says is inside an AI system, but does not prove whether the system can be trusted for the way an enterprise plans to use it.

“Minimum elements create visibility,” Gogia said. “They do not create assurance. They tell the buyer what the vendor says exists. They do not, by themselves, prove that every dependency has been disclosed, every dataset is lawful, every control works, every model behaves within tolerance, or every runtime pathway is being monitored.”

The hard part will be proving that the document matches reality. Security teams may receive an AI SBOM from a vendor, but they still need to determine whether it reflects the system running in production and keeps pace with changes to the AI environment. Prabhu said even a high-quality AI SBOM will offer only partial visibility into AI risk.

Issues such as evolving AI behavior, hallucinations, changing prompt usage, and limited training data transparency can still make it difficult for security leaders to assess actual risk. As AI systems mature, AI SBOMs will also have to evolve to address those gaps, Prabhu added.

This article originally appeared in CSO.

New US CIO appointments, May 2026

Movers & Shakers is where you can keep up with new CIO appointments and gain valuable insight into the job market and CIO hiring trends. As every company becomes a technology company, CEOs and corporate boards are seeking multi-dimensional CIOs and IT leaders with superior skills in technology, communications, business strategy, and digital innovation. The role is more challenging than ever before, but even more exciting and rewarding! If you have CIO job news to share, please email me!

Chuck Jones joins Northrop Grumman as CIDO

Chuck Jones, Northrop Grumman

Northrop Grumman is a global aerospace, defense, and security company. Jones was previously at Raytheon as VP, digital operations. Before that, he held roles as VP, IT, integrated supply chain at Honeywell, and VP, supply chain digital technology at GE Power. He has a BBA from the University of Cincinnati Carl H. Lindner School of Business.

McDonald’s names Mustafa Husain CIO

Mustafa Husain, McDonald’s

The American multinational chain restaurant has over 36,000 restaurants in more than 100 countries. Husain has led many key tech initiatives for McDonald’s, most recently serving as VP, restaurant technology engineering. In that role, he led innovation and engineering for the company’s global restaurant platform. He was previously with Amazon as a senior manager of product management for Amazon Scout. He has a BASc and MEng from the University of Toronto, and an MBA from Western University’s Ivey Business School.

Turner Construction appoints Dawn Paquette as CIO

Dawn Paquette, Turner Construction

Turner is a North American-based, international construction services company and a builder in diverse market segments. Paquette joins Turner from GE Aerospace where she was CIO supporting multiple business units. While there, she led global digital teams and oversaw tech systems across advanced manufacturing and aviation environments. Before that, she held leadership roles at Anthem Blue Cross Blue Shield, and she holds a BS from Miami University and an MS from Mountain State University.

Autodesk names Mike Kelly CIO

Mike Kelly, Autodesk

Autodesk is a global design software and services company whose products help professionals design, visualize, simulate, and analyze projects. Most recently, Kelly was operating partner and CIO at Andreessen Horowitz where he built data, AI, and cybersecurity capabilities. He was also CIO at Red Hat and McKesson, driving large-scale modernization initiatives. He has a BBA from Southern Methodist University and an MBA from UC Berkeley’s Haas School of Business.

Memorial Hermann Health System appoints Desiree Gandrup-Dupre CIO

Desiree Gandrup-Dupre, Memorial Hermann Health System

This non-profit, values-driven, community-owned health system owns and operates 14 hospitals and has joint ventures with three other hospital facilities. Gandrup-Dupre joins from Kaiser Permanente, where she was SVP of care delivery and technology services, leading large-scale tech transformation initiatives that modernized systems at scale, and strengthened integration across care settings. She holds a BPS from Syracuse University.

Dycom names Regina Salazar as CIDO

Regina Salazar, Dycom

Dycom is a provider of specialty contracting services to the telecommunications infrastructure and utility industries throughout the US. Most recently, Salazar served as SVP and CDIO at Novelis, Inc., where she led a global team and spearheaded an AI-powered transformation that delivered significant business impact and operations improvement. Before joining Novelis, she was VP IT and CIO, North American Region at Whirlpool Corporation. She holds a BS from UNICAMP in Brazil, and an MS from Central Michigan University.

Todd Treonze joins J.Jill as SVP and CIO

Todd Treonze, J.Jill

J.Jill is a national lifestyle brand that provides apparel, footwear, and accessories through over 200 stores nationwide and an ecommerce platform. Treonze previously held positions as VP, IT corporate systems and brand integration at Catalyst Brands, and as SVP and CIO at Sparc Group and Brooks Brothers. He holds a BS from Lehigh University.

V2X names Mike Uster CIO

Mike Uster, V2X

V2X’s team of about 16,000 professionals works across defense, national security, civilian, and international markets to bring people, technology, and operations together to support critical missions around the world. Most recently, Uster served as CIO, CTO, and SVP at ManTech, where he led enterprise-wide digital transformation initiatives, cybersecurity modernization, and advanced technology adoption across global operations. Earlier in his career, Uster held roles supporting government and national security missions at Northrop Grumman, Lockheed Martin Skunk Works, RAND Corporation, and TRW Inc. He holds a BA from Biola University.

Horizon Services welcomes John Souther as CIO

John Souther, Horizon Services

Horizon Services is a consumer services company based in the Northeast, specializing in heating, cooling, plumbing, and electrical solutions for both commercial and residential properties. Souther most recently served as CIO, HVAC Americas for Carrier Corporation. He holds BA and MBA degrees from Harvard.

Craig Richardville appointed CDIO for UF Health

Craig Richardville, University of Florida Health

University of Florida Health is an academic health center encompassing hospitals, physician practices, colleges, centers, institutes, programs and services across northeast and north-central Florida. Most recently, Richardville served as deputy CIO at UF Health, where he played a key role strengthening operational performance, advancing Epic modernization efforts, and deepening collaboration across the system. He holds both a BBA and MBA from the University of Toledo.

Pep Boys welcomes Nik Umrani as CIO

Nik Umrani, Pep Boys

With almost 800 locations across the US and Puerto Rico, Pep Boys specializes in automotive services by providing maintenance and repair solutions. Umrani joins Pep Boys from NSM Insurance Group/Novacore, where he served as global CIO and led a full-scale enterprise transformation supporting the company’s growth. He’s also held senior leadership roles at Comcast, ADT, and Verizon Communications. Umrani holds a BE from the University of Mumbai, and MS and MBA degrees from the University of Alabama at Birmingham.

Peter Scavuzzo elevated to CITO at CBIZ

Pete Scavuzzo, CBIZ

CBIZ is a professional services advisor to middle-market businesses nationwide, providing industry knowledge and expertise in accounting, tax, benefits, insurance, and tech. Scavuzzo joined in 2024 after CBIZ’s acquisition of Marcum LLP, where he was CIDO and CEO of Marcum Technology, leading innovation initiatives. He has BS and MS degrees from Polytechnic University.

ePlus promotes David Mellor to CIO

David Mellor, ePlus

ePlus offers transformative technology solutions, including AI, security, cloud and data center, networking and collaboration, as well as managed, consultative, and professional services. Mellor joined in 2018 and most recently served as VP, enterprise IT services and infrastructure. Previously, he held leadership roles at Landauer, Inc. including VP, strategic development.

Julie Irish named CIO for Alteryx

Julie Irish, Alteryx

Alteryx is an AI-ready data and analytics company that enables organizations to automate data preparation, blending, and analytics without coding. Irish most recently was CIO at Couchbase, where she led IT, data, business technology, and security. Prior to that, she held various senior tech leadership roles at New Relic and Harvard Business Publishing. She holds a BS from the University of Virginia.

AVI-SPL welcomes Doug King as CIO

Doug King, AVI-SPL

AVI-SPL is a global company that provides AV, UC, and experiential tech solutions. King was previously CIO for ePlus and held prior roles as SVP, administration and CIO for Landauer, Inc. and VP, sales and marketing for Kirby Building Systems. He holds BA and MBA degrees from Northwestern University.

Chris Stori joins Verkada as CIO

Chris Stori, Verkada

Verkada unifies video security, access control, air quality sensors, alarms, intercoms, and workplace tools through a cloud-based software platform. Stori served most recently as CEO for Bright Machines. Prior to that, he held leadership roles at Cisco including SVP and GM of networking experiences overseeing enterprise networking, IoT and Meraki. Earlier, he was a consultant at McKinsey & Company, advising on international market expansion. Stori holds a BS and MS from the University of Illinois, and an MBA from Northwestern University.

Exostar promotes Amy Hogan to CIO

Amy Hogan, Exostar

Exostar offers a software platform that supports exclusive communities within highly regulated industries where organizations securely collaborate, share information, and operate compliantly. Hogan most recently served as VP of enterprise systems and solutions, leading enterprise transformation initiatives that strengthened scalability, governance, and operational discipline across business systems, DevOps platforms, corporate IT, and enterprise analytics. She holds a BS from the University of Dayton.

New CIO appointments, April 2026

Albertsons Companies appoints Brian Rice to board of directors

Francesco Tinto joins Kimberly-Clark as CIO

University of Pennsylvania names Joshua Beeman CIO

Jim Connolly joins Legacy Food Group as CIO

AGCO appoints Jena Holtberg-Benge as CDIO

Ryan Biren promoted to CIO for THOR Industries

Genworth welcomes Morris Taylor as CIO

Barninder Khurana joins Patriot Growth as CIO

Prashant Lamba announced as CIO for RGP

New CIO appointments, March 2026

USAA names Dan Griffiths as CIO

Dori Henderson joins Brown & Brown as CITO

Bob Hardester named CIO for Advantage Solutions

Skadden adds Vince DiMascio as CDIO

Chris Hickok appointed CIO for Messer Construction Co.

Stormont Vail Health appoints Aaron Wootton as CIO

TowneBank welcomes Varun Chandhok as CIO

Todd Paladini joins Freddy’s Frozen Custard & Steakburgers as CIO

Optiv promotes Doug Goehl to CIO

Brad Rohrer joins the University of Iowa as CIO

Morongo appoints Tony Gawne as CIO

New CIO appointments, February 2026

Lucius DiPhillips joins Adobe as CIO

Delta Air Lines appoints Amala Duggirala as CDTO

Ahmad Al-Dahle named CTO for Airbnb

NCR Atleos announces Rohan Pal as CIO

Arpit Davé joins BioMarin as CDIO

University of Arizona appoints Elliott Cheu as CIO

Dave Fazio named CIO for Smith Douglas Homes

Hassan Janjua joins the City of Newport News, VA as CIO

RealTruck welcomes Eric Firer as Global CIO

Thomas Marlow joins Bayhealth as CDIO

The Oncology Institute announces Rakesh Panda as CIO

Jay Tkachuk joins Redwood Credit Union as CIO

New CIO appointments, January 2026

Nationwide appoints Michael Carrel as CTO

Anand Varadarajan named CTO for Starbucks

Goodyear welcomes Raman Mehta as CIO

Jim Fowler joins Lumen as CTPO

DXC Technology appoints Russell Jukes as CDIO

Dana Haggas named CIO for Emory University

Associa appoints Michelle Johnson as chief information and transformation officer

Laura Fultz announced as CDIO for Emory Healthcare

1-800-FLOWERS.COM, Inc. taps Alex Zelikovsky as CIO

Westfield welcomes Lloyd Scholz as CIO

Christopher Mackie promoted to CIO at McGuireWoods

Kendall Knight joins Intermodal Tank Transport as CIO

New CIO appointments, December 2025

Intel appoints Cindy Stoddard as CIO

John Hancock names Kartik Sakthivel CIO

Ameet Shetty joins RaceTrac as CIO

Guidehouse taps Ron White as CIO

AmeriLife names Sulabh Srivastava CIO

Tim Farris joins Clancy & Theys Construction Company as CIO

Ronald McDonald House Charities welcomes Jarrod Bell as CIO

Devang Patel joins Devereux as CIO

MIB promotes Daniel Gortze to CIO

How CIOs use AI agents to accelerate revenue growth

A wave of AI agents has recently emerged for sales and revenue teams, including Highspot’s Deal Agent, which accelerates pipeline generation and conversion, and Qualified’s Piper for Demandbase, an AI sales development representative (SDR) agent. Salesforce’s Agentforce and many others are rallying behind this use case as well, so much so that Gartner estimates that by the end of the year, 40% of enterprise applications will feature task-specific AI agents. A Workato-sponsored 2025 survey by Harvard Business Review paints a similar picture: 86% of 600 technology decision-makers plan to increase investment in agentic AI over the next two years, with a growing proportion aimed at empowering sales.

“Sellers are preparing faster, showing up with sharper points of view, being more strategic, and spending more meaningful time with customers,” says Kellie Romack, CDIO at ServiceNow. “We’ve already seen amazing early results with sellers cutting prep time from hours to minutes, almost a 95% improvement.”

Executives report that AI agents are accelerating sales teams by conducting prospect and customer research, reducing manual toil across the sales workflow. Yet, while the excitement is evident, it’ll take discipline to know where precisely to deploy AI agents, let alone how to operationalize, secure, and scale multiple agents throughout an entire sales organization.

“AI agents deliver the most value in sales workflows that are well-defined, rule-based, and involve high volumes of repetitive activity,” says Dan Shmitt, CIO at Salesforce. “These are areas where scale, consistency, and speed matter, but ultimately, where teams lack the capacity to engage at scale.”

Conducting customer research

AI agents are poised to accelerate sales teams in a few ways. One area is giving sales engineers better context. “AI agents can help make sales teams the most relevant communicators in front of their prospects or clients,” says Tiago Azevedo, CIO at AI development platform OutSystems.

Or take the use of AI agents in sales at fleet management company Samsara, for instance. CIO Stephen Franchetti, previously CIO at Slack, shares that AI has always been fundamental to Samsara’s platform to help automobile fleet operators optimize routes or check vehicle health. But more recently, they’ve applied AI knowledge agents internally throughout the company.

Samsara has created an internal model fine-tuned on Samsara data. This Samsara GPT, as he calls it, is trained on the company’s product knowledge base and specific customer data, and is connected to Salesforce and other systems, helping sales executives quickly become account-specific experts, answer customer questions faster, and accelerate onboarding.

“It’s the most successful rollout we’ve done,” says Franchetti. “It’s universally embraced by our sales organization and we’ve received amazingly positive feedback.” As a result, Samsara account development representatives (ADRs) are experiencing 16% better attainment using this internal GPT.

At ServiceNow, there are AI agents empowering sellers with highly relevant context. “We’re using an AI-powered coaching experience and sales hub to make our sellers more effective at preparing for meetings and navigating deals,” says ServiceNow’s Romack. “Our AI Sales Coach, built on Anthropic’s Claude, pulls together account data, research, and product intelligence into actionable guidance.”

Accelerating lead prospecting

That knowledge is also improving outbound strategies, helping ADRs synthesize prospect data and draft more personalized outreach. For example, Samsara’s pilot group has noticed a 300% increase in callbacks to emails using this strategy, says Franchetti.

In other organizations, internal experiments are still in pilot mode but showing strong signals. For example, Kate Prouty, CIO at Akamai, shares that the cloud, security, and content delivery network company has been piloting an internal AI-powered sales assistant it calls SaiLS Bot within its sales development representative (SDR) team. “SDR teams used SaiLS to accelerate prospect research with quick company understanding and impact analysis,” she says. This helps them spin up account-tailored go-to-market plans and surface cross-sell opportunities.

By using AI agents, SDRs at Akamai are able to quickly understand target companies, existing solutions, and potential cybersecurity impacts, dramatically shortening their prospecting cycles. “Quantitatively, in the first nine months of using SaiLS Bot, the sales team saved the equivalent of three full-time employees in labor,” says Prouty.

At ServiceNow, agents help sellers prospect new leads, conduct post-sales follow-up, and query the status of deals in the pipeline. For instance, a seller might ask about opportunities in a particular territory within a given probability-to-close range and deal stage, getting highly targeted information back.

Some leads naturally come from inbound sources, too. But human labor has historically struggled to qualify this interest at scale. “Our recent research shows only 25% of inbound sales leads require human outreach or follow-up, leaving roughly 75% of leads historically receiving no engagement at all,” says Salesforce’s Shmitt. “That gap represents a structural limitation of traditional sales models.”

At Salesforce, one remedy has been deploying its website agent, which answers product questions and qualifies inbound interest. Salesforce then uses a sales agent internally to inform sellers with quick access to account information, deal history, pricing, and executive briefing materials, helping shift initial interest into active conversations.

Enriching pipeline and follow-up

Sales teams are experiencing high pipeline growth using AI agents. “We’ve deployed AI agents across our sales operations at Workato,” says Carter Busse, CIO at the automation and integration platform. For instance, they recently generated $2.7 million worth of new sales opportunities using an internal agent that analyzes calls in Gong, an AI operating system for revenue teams, to determine what made closed deals successful and better tailor future outreach.

In total, Workato has built 28 sales agents for various autonomous sales processes, including handling opportunity enrichment, quote generation, approval routing, and meeting follow-ups. Agents also automatically update CRM fields with new data from client interactions, improving their data quality and reliability.

Busse grounds the results in efficiency savings. “Deals with agent-enriched data progress through pipeline stages up to 20% faster,” Busse adds. “We’ve also seen 40% faster quote turnaround, and five to seven hours per week back to sellers to focus on customer conversations.”

Salesforce’s Shmitt also sees big potential in using agents for repetitive workflows related to follow-up communication, expanding what human sellers can realistically respond to. “In sales, this typically includes processes like answering product questions, managing follow-up, re-engaging stalled interest, and determining when to route a lead to a human seller,” he says. The company deploys its tech internally, using its engagement agent to automate some of these lead engagement activities. “The agent delivers 24/7 personalized outreach, product Q&A retrieval, objection handling, stalled-lead re-engagement, meeting booking, and lead information collection, with defined escalation paths to humans,” he says.

OutSystems is similarly seeing gains from deploying agents to automate tedious tasks and accelerate revenue-facing workflows. “Agents are improving pipeline accuracy and reducing manual administrative work that slows deal cycles,” says Azevedo, who estimates they’ve saved 1,700 hours in manual administrative work using their contact agent. “By arming our team with the right proof points at the right moment, we validate value faster and shorten deal cycles.”

Generating sales enablement materials

AI agents can also generate customer journey references and case studies. For example, OutSystems is dogfooding its AI agent building tools to streamline sales operations, most notably using Agent Workbench. “We’ve built a multi-agent system that arms our sales team with key insights on prospects and their buying signals, relevant case studies, and points for successful pitches,” says Azevedo.

“We’ve enrolled all our sales team members in Deal Mate, one of our customer story agents, which has suggested 2,859 relevant stories to the sales team since October,” he adds. They’ve also created hundreds of automated sales decks using a CMS assistant, which has delivered significant time savings as well.

Overall, AI agents are giving sellers more firsthand, cross-industry insights into how customers are using their platform or services, along with business improvements they’re gaining. CIOs report that this automated research is contributing to productivity savings and improving funnel conversions.

Also, financial analysis can be expedited using agents. At Samsara, MCP servers are supercharging financial analysis that would’ve traditionally required considerable manual labor. “Technically, it’s much less of a lift,” says Franchetti. “Our financial language gets to our community in a much more natural way.” For this, they’ve used Workato One to take recipes and workflows, and expose them as MCP servers for their agentic processes.

Governance and controls required

Although early usage signs look promising for innovating sales pipelines, CIOs must implement governance controls for agentic AI in sales workflows, especially when agents have access to customer data or the ability to conduct autonomous actions.

“The guardrails matter just as much as the technology itself,” says Shmitt. To reduce risks, CIOs should clearly define roles for agents, limit permissions, and set human approvals for high-value actions. “Many organizations introduce agents into workflows where they support decisions rather than deploy fully autonomous agents,” he adds.

Access control, especially authentication and authorization, is a common priority for CIOs. “Our top concern is ensuring agents working on behalf of sales employees only access authorized data,” says Workato’s Busse. “Anything that modifies customer records or sends external communications requires approval.”

For Azevedo, it comes down to balancing innovation with safeguarding infrastructure and data. “Agentic AI requires organizations to invest on both ends of that mission, to ensure customer data and deal integrity remain intact while also automating core business processes throughout the enterprise.” To do so, he recommends unifying disparate data sources, and streamlining how agents discover and create that data.

He adds it’ll also require transparency into agentic behaviors. “Full observability into how agents make decisions is critical for CIOs to support or propel their organization into an age of intelligent hyper-automation based on trust.”

ServiceNow’s Romack adds that it’ll also take CIO leadership. “For users, we’re also here to guide and coach,” she says. “A great example is ensuring our sellers get best practices to create prompts like the right account and deal context, specific metrics, breaking down complex requests, and review and validation.”

In effect, Romack sees the CIO as playing a unifying role to build integrated experiences and curb possible tool sprawl. “The CIO should be the great unifier, bringing together sales, IT, legal, finance, and security to create powerful and responsible experiences,” she says. “That balance is what unlocks value across the organization.”

But it’s not just humans that need leadership and unification. Agents need training on institutional knowledge and policies as well. “We treat agents like employees,” says Busse. “They’re onboarded with curated knowledge about the business, trained on data governance policies, and monitored continuously.”

Plus, CIOs have a financial responsibility to consider. “AI agents should be treated as long-term systems, not short-term experiments,” says Akamai’s Prouty. As such, CIOs should collaborate with IT leadership to manage costs for AI agent rollouts. “Without oversight, costs can spin out of control for AI tools that aren’t delivering value to teams.”

Best practices for AI agents in sales

Before diving headfirst into agent-assisted revenue workflows, sales teams will need foundational capabilities in place. “Many organizations still lack a clear roadmap for how to start, scale, and define success,” says Shmitt. Preparing things like data, governance, and operating structures are key to responsible adoption, he says.

Success will lie in a strong data foundation consistently updated to avoid stale or static resources. “Enterprises that thrive will be those prioritizing a centralized foundation and a strong knowledge base of content and data, and collaborating with business champions across their agentic journey,” says Azevedo. The latter will necessitate a cultural shift, he says, which requires viewing agents as extensions of the workforce with clear ownership and operational structure. As such, he recommends assigning select team members to specific agents, monitoring performance, and flagging issues as necessary.

Next, start small with valuable use cases tuned to seller workflows. “Pick one or two high‑impact use cases where the value is obvious, like sales prep or deal coaching,” says Romack. To do so, she recommends working closely with sellers to understand their pain points to create experiences that function inside their existing workflows. “If they’re swiveling between systems with AI that doesn’t help them land their next meeting or close their next deal, it won’t stick.”

So start where conditions are already strong, adds Shmitt. “Agents deliver the most value in sales environments with clear documentation, structured data, and well-defined workflows,” he says. Testing agents in these areas will help validate accuracy and value, which can inform adoption.

Other CIOs agree organizations shouldn’t automate everything at once. Instead, Busse recommends starting small with high-pain, low-risk workflows, like an agent that proactively manages license optimization. “Pick something where you can measure impact quickly,” he says.

Past the experimentation stage

Executives are attempting to drive ROI from their AI investments, but returns have been murky. A report from PwC at Davos found that 56% of CEOs said they’ve seen no financial benefit from their AI investments, and only 12% reported both cost savings and revenue growth.

On the flip side, agentic AI in sales is showing more promising returns. While it’s still early days, executives report impressive gains from deploying AI agents within revenue workflows. They help sellers automate repetitive work across the sales lifecycle, from qualifying leads, refining messaging, and researching accounts, to gathering relevant customer insights, auto-updating CRM systems, and more.

“All these use cases and investments we’re making deliver ROI and key metrics from a business perspective,” says Samsara’s Franchetti. “We’ve moved past experimentation. Now it’s more about where the value is and how do we apply AI to meet that value.”

And rather than replacing sales teams wholesale, AI agents are complementing humans and allowing them to emphasize soft skills. “The efficiency gains alone justify the investment,” says Busse. “Reps are spending more time on actual customer conversations and less on administrative work.”

But enterprises don’t yet report closing deals entirely with AI agents. Instead, as Romack says, these tools elevate human potential, with AI agents doing the heavy lifting on context, synthesis, and next steps, while sellers bring the strategy, relationships, and outcomes.

“The value is immediate,” she adds. “You can jump right in and feel the results in your first meeting of the day. Over time, that compounds into better deal execution, like fewer stalled deals, stronger mutual plans, and more consistent value.”

How Stanford Healthcare prescribes AI to streamline the clinician and patient experience

Before an organization launches any AI initiative, considering how disruptive it can be, it’s essential to prioritize governance, as well as make sure sufficient training and change management strategies are implemented. “I think it’s a combination of having all these things in place, otherwise it creates fear and anxiety,” says Aditya Bhasin, Stanford Healthcare’s VP of software design and development. “Otherwise, how do you overcome these things and get the organization to the place it needs to go?”

So with this structure firmly in place, the Bay Area’s Stanford Healthcare, which runs about 300 facilities including two full-service hospitals, has rolled out an AI solution initially conceived to relieve overwhelmed clinicians by, for instance, automating draft responses for patients inquiring about billing.

The project started with 10 billing reps and saved 17 hours by handling 1,000 messages using 25 smart templates. Now enterprise-wide with 60% utilization, the AI also drafts test results and accelerates software development.

“We started this as a pilot for our billing reps, so when patients asked questions about where they were in their payment plans, this targeted that particular use case, and we saw substantial success,” he says. “Then we followed through on our patient advisory forum to see the type of responses it got and the timeliness of those responses. Patients were excited they got faster responses, and billing reps were excited, too, about getting technology to help them curate responses and answer them correctly. Then we rolled it out across the organization so now, all our billing reps have access to this technology.”

Not only are Bhasin and his team transforming the business from operational, educational, and research perspectives, they’re also transforming how they approach software development.

“When we roll out these tools, we want physicians to spend more time with patients rather than documenting stuff, so now they just talk to patients and the documentation happens automatically for them. Just writing a routine and having it automatically create a lot of test cases around it is a huge impact.”

What it comes down to, says Bhasin, is it brings a bit of joy back into the work by helping to get things done faster while expanding the art of the possible.

Bhasin also details the strong security, trust, and next-gen perspective needed for any AI implementation.

On an impetus for AI: Stanford Medicine is a unique place. Our three missions are education, research, and clinical care, and AI is transforming every aspect of those missions. Our journey with AI started a couple of quarters after ChatGPT was announced, and the organization decided to double down across the three missions, using that technology to transform the core business. One of the first initiatives we undertook was to help our clinicians with burnout. Post Covid, our portal engagements and patients using the digital channel increased exponentially, so one of the first things that happened was we had physicians getting overwhelmed with messages coming from patients.

When this technology came about to use large language models, one of the first things we rolled out was to help physicians. We started creating draft responses, and in a regulated industry, that required a lot of soul searching and thought process. But whenever a patient messages our clinicians, we create a draft response, which helped reduce a physician’s cognitive load. That was very successful. Then we noticed other opportunities, like once people got their care, another friction point when you talk about patient experience is billing.

We’ve all experienced the complexity of that and it’s uniquely challenging for patients, especially when you have proxies. Billing reps require a lot of understanding of your unique insurance, and all that is a lot of work. So we looked at how to apply this new capability to that niche area, and started with Gemba rounding. One thing we observed was, given all the nuances related to billing, we created 25 different templates for the reps. That adds a lot of load to the work involved so we automated that entire process.

On better access to information: At an organization like Stanford Healthcare, there are millions of test results per year. When you have a large patient population and thousands of physicians ordering images as a standard part of the workflow, patients immediately get the lab results in their portal now as part of our legal requirements. But if you ever try to read a radiology result or a blood panel, they come with a level of complexity, so you’re waiting for your physician to really decipher that and give you insight. When you have millions of these coming in, and as these labs get more complex, it adds a lot of workload on your physician. So if you’ve got a family practice or a specialist, and you order a bunch of these labs to decide on next steps, you have to read all of them and respond to patients.

We use the technology to create draft results, and understand how we can help create responses to help patients. At this point, the moment a lab result comes in, we also create a draft result for physicians. A lot of these complex systems pump information into our EHR, and then we use this homegrown framework to create responses for physicians. That’s been successful in terms of physician adoption. There’s always a human in the loop, and we never force our physicians to use this. So it’s the comfort level, complexity, and trust we have to build. The relationship between the physician and the patient is sacrosanct. We’re always trying to help them help their patients.

On strengthening skills: We’ve been very forward looking to embrace AI, and one thing we did a couple of years ago was make a secure version of AI called SecureGPT, which was available to virtually everyone in the organization. It’s been hugely successful and people use it for everything from administrative work to clinical projects. It provided valuable learning, and the organization feels comfortable and supported in having this tool universally available. Out of it, we also worked on training physician scientists. A huge part of AI is going into the next generation of curriculum and how to educate and create interactive sessions for our medical students. So we took aspects and learning to create mandatory training for everybody in the technology group, which is close to 1,000 people. Then we rolled it out to everybody in the organization. Now you’re talking tens of thousands of people from facilities and operational staff, to nurses and physicians.

On keeping it all in check: For most technologists, tech is the exciting part. But trust is paramount in a highly regulated environment like healthcare, as well as how we ensure we don’t expose our organization to risks while maintaining the right level of efficacy between physicians and patients. So with the advent of AI, we’ve got governance bodies at the C level, so everything clinical goes through that. We also have a process by which we validate every solution we come up with. We call it the FURM process, which stands for fair, useful, reliable models, because with AI, we look to see our interventions are fair, and you want to make sure solutions are equitable across all aspects. But because the rate of change of technology is so fast, the biggest challenge for most people now is how do you bring your organizations along.

How do you roll out transformation in standard workflows, and how do you get business leaders to simultaneously reimagine how work can be done when you have powerful tools. It can really disrupt workflows. So as much as we’re trying to keep this interwoven into existing workflows, we find out how to tell people these capabilities are available. And these are early rollouts, which are enterprise wide. But results have been positive, so it’s creating a feedback effect where we’re taking on more projects to transform.

The death of identity as we know it

A CISO walked out of the RSA conference last month and asked an honest question. “When does it make sense to create agents, sub-agents and swarms of agents versus digital twins?”

He wasn’t looking for a sales pitch. He had just sat through days of keynotes, breakouts and vendor pitches where AI got more airtime than anything else on the agenda, and he walked out with less clarity than when he walked in.

That’s the thing about this moment. Every vendor has an AI story. Every session touches on agents. Very few are offering a working model for how to govern any of it once it’s inside your business.

Similar questions are surfacing in almost every conversation I have. Agents, swarms and digital twins are landing in customer experience, treasury management and executive decision support. That’s the CIO’s world. It’s the CFO’s world too, and the CEO’s. When AI entities act, decide and speak on your organization’s behalf, someone must answer for who they are and who controls them.

A taxonomy: Operational vs. perspective complexity

It’s easy to use agents, swarms and digital twins as if they’re different words for the same thing. They aren’t. Each demands a different governance model and lumping them together is a governance mistake waiting to happen.

At the top of the frame, AI entities either solve operational complexity (how do we get this done?) or perspective complexity (how would our most experienced leader think about this?). Inside operational complexity, three distinct things are getting conflated:

  • Synthetic agents are trained on the aggregated expertise of many practitioners. Think of a model trained on the combined knowledge of 100 pediatricians, validated by a pediatrician. It represents a domain, not a person. The expert grounding is there. Individual accountability is not.
  • AI workers are task-specific single agents given foundational capability and turned loose to figure out the job. They’re often ephemeral, spinning up to execute a workflow and going away when it finishes. The person directing the worker may not be an expert in what the worker is doing. Attribution gets murky fast.
  • Swarms are N instances of the above interacting. A swarm inside a single level is one kind of problem. A swarm that mixes synthetic agents, AI workers and digital twins across trust levels is a different problem entirely, because a high-trust entity can spawn a low-trust one, and what comes back up doesn’t get reclassified to its origin.

Digital twins sit on the perspective-complexity side. A digital twin isn’t a chatbot or a prompt persona. It’s a verified, governed representation of a specific human’s expertise or an organization’s unique institutional knowledge. The individual puts their judgment on the line. Every output traces back to an authorized source. Where AI workers are designed to act, a digital twin is designed to represent — which is why the governance model for one can’t be borrowed from the other.

You can’t manage a digital twin like a service account. You can’t manage an AI worker like an employee. And you can’t let cross-level swarms run without a registry that tracks what spawned what.

The dark side of the taxonomy: Governed vs. feral

Once you’ve got the taxonomy, a second axis shows up quickly. Governed versus feral. Authorized digital twins sit in the governed-perspective quadrant. Adversarial swarms sit in the feral-operational quadrant.

In January, a group of researchers led by Daniel Schroeder and Jonas Kunst published a policy forum in Science magazine on how malicious AI swarms can threaten democracy. The paper describes a technique they call LLM grooming, where swarms flood the web with fabricated content designed to be ingested by future AI training runs. Their warning is that AI swarms can rig the epistemic substrate on which future AI tools depend.

That’s a data integrity problem hiding inside a disinformation problem. If your organization relies on AI for pricing, market intelligence, competitive analysis or strategic planning, the content your models train on tomorrow is being shaped today. The upstream data feeding your downstream decisions is under active manipulation, and most enterprises have no visibility into any of it.

What makes the story more interesting is that the same researchers also see the other side. In a CXOTalk interview, one of the authors was asked whether AI swarms could ever be used for good. Schroeder affirmed, “Yes. They can fact check. They can collaborate. They can collaborate and just build digital twins of humans in order to process information in a way this particular human would understand.”

That’s the tension in one sentence. The same capability that can manufacture consensus can also preserve expertise. The difference comes down to whether the intelligence is governed or feral. Verified Intelligence becomes necessary because the threat and the solution share the same root.

Identity has become a question of authorship

If anyone can spin up a high-fidelity digital version of your CEO, your brand voice or your strategic reasoning, authentication has to answer a different set of questions than it used to. Access stops being the point. Authorship takes over.

Five questions now define the control plane, and they’re governance questions:

  • Who created this entity?
  • Who trained it?
  • Who authorized it?
  • Who can revoke it?
  • Who is it economically aligned to?
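The five questions above map naturally onto a provenance record attached to every entity at creation time. A minimal sketch, with all field names assumed for illustration:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    """One record per AI entity, answering the five control-plane questions."""
    entity_id: str
    created_by: str      # who created this entity?
    trained_by: str      # who trained it?
    authorized_by: str   # who authorized it?
    revocable_by: str    # who can revoke it?
    aligned_to: str      # who is it economically aligned to?
    created_at: str

record = ProvenanceRecord(
    entity_id="sales-coach-01",
    created_by="it-platform-team",
    trained_by="sales-enablement",
    authorized_by="ciso-office",
    revocable_by="ciso-office",
    aligned_to="acme-corp",
    created_at=datetime.now(timezone.utc).isoformat(),
)

# An unanswered question is a governance gap, not a configuration detail.
assert all(asdict(record).values()), "every provenance field must be set"
```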

Digital twin forking isn’t a fringe risk. It’s inevitable. Unauthorized swarms acting in your organization’s likeness will be a normal threat vector by 2027. (The timeline will feel fast until it feels obvious.) The companies that win will track provenance the way finance tracks capital.

On April 1st, a colleague shared her “Retirement Certificate” from ReplacedByClawd, which lets anyone spin up a digital version of a named person in minutes. The tone is played for laughs. The capability underneath is serious business. Anyone with a browser can fork a likeness, train it on public content and set it loose with no tie back to the real human it mimics. Unfortunately, this was not an April Fool’s joke.

We need authorized versions of our digital twins, and we need them before the unauthorized ones become the norm. A twin your organization actually owns. A twin whose training data, scope and boundaries can be attested. A twin that can be revoked when a leader changes roles or leaves.

Once the humor wears off, the cognition layer becomes a social engineering playground. A convincing digital version of your CFO approving a wire. A cloned voice of a senior engineer pushing a late-night code review. Hackers are headed for this layer. Most security programs are still locked on the session.

The good news is that the framework is starting to take shape. On April 17th, the Coalition for Secure AI (CoSAI) published Agentic Identity and Access Management, a foundational reference that treats agents as first-class identities with their own lifecycle, delegation model and accountability. The paper introduces an agent registry as the system of record, scope attenuation at every hop in a delegation chain, and a “prove control on demand” standard for logging and lineage. It’s the clearest signal yet that the industry is moving past session-layer thinking and closer to the cognitive governance this moment requires.
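Scope attenuation at every hop can be illustrated as simple set intersection: each delegation in the chain can only narrow the scopes it received, never widen them. A hedged sketch of the idea, not CoSAI’s actual reference implementation:

```python
def delegate(parent_scopes: frozenset[str], requested: frozenset[str]) -> frozenset[str]:
    """Attenuate scope at a delegation hop: a child receives at most what
    its parent holds, so privileges can only shrink along the chain."""
    return parent_scopes & requested

# Hypothetical scope names for illustration.
root = frozenset({"crm:read", "crm:write", "email:send"})
hop1 = delegate(root, frozenset({"crm:read", "crm:write"}))
hop2 = delegate(hop1, frozenset({"crm:write", "email:send"}))  # email:send denied

print(sorted(hop2))  # ['crm:write'] — only what survived every hop
```

However many hops the chain has, an agent can never end up holding a scope its ancestors lacked.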

From identity perimeter to cognitive governance

The real shift happens at the control plane itself. Governance has to extend to the cognitive layer. To what an AI entity is authorized to know, say, decide and spawn.

On a recent a16z podcast, Box CEO Aaron Levie and former Microsoft executive Steven Sinofsky talked about what happens when agents become the primary users of enterprise software. Sinofsky made a point that should anchor every CIO’s next 18 months of planning. Enterprises will live in a read-only consumption layer for years before they allow agents to write, act or transact with full autonomy.

That’s a feature, not a bug. And it’s exactly where governed digital twins fit. They answer questions. They prepare context. They surface governance guidance. They rehearse decisions before the executive team commits, and they stress-test strategy before the market stress-tests the brand. They preserve institutional judgment when a senior leader retires or changes roles. This is the agentic enterprise maturing from experimentation into production, without handing the keys to a feral swarm.

Aysha Khan, CIO and CISO at Treasure Data, captured the human side of this shift when she told me recently that “By encoding legacy expertise into governed AI, we do not make ourselves irrelevant. We free ourselves from the maintenance of our past and tap into the possibilities of who we can become next.”

That framing matters. Cognitive governance scales a leader’s judgment while protecting their identity. The people get elevated. The work gets amplified.

The executive imperative

If your AI entities carry your judgment, your voice and your authority, identity governance stops being something the IT team owns in isolation. It becomes a leadership discipline that shapes what the organization can become. Treat AI lineage with the same discipline you bring to capital allocation — visibility, accountability and the ability to trace every decision back to a legitimate source.

Every AI entity you deploy carries a lineage. The companies that can trace that lineage will govern it. The ones that can’t will learn what they’ve lost after the fact.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

“Weak supply chain security will make selling in the EU difficult”: Black Duck stresses CRA readiness

At a press briefing during his visit to Korea, Mackey said, “Korea, alongside Japan and Taiwan, is one of Asia’s technology innovation nations, and Black Duck is growing fast in Korea as well. I came to Korea to share Black Duck’s compliance experience and supply chain security strategy so that customers can keep innovating while also strengthening security.”

The CRA is an EU cybersecurity regulation applied uniformly to products that contain software elements. It covers every company supplying such products to the European market, regardless of where the company is based, and Korean manufacturers are no exception. The CRA also explicitly extends responsibility beyond manufacturers to importers and distributors: if a manufacturer fails to meet its obligations, liability flows down to them. In other words, the entire import and distribution channel bringing Korean products into Europe falls directly within the regulation’s scope.

맥키 총괄은 “한국 고객의 질문은 대부분 SBOM과 공급망 관리에 집중되어 있으며, 이는 일본·대만 등 아시아 기술 혁신 센터 지역과 매우 일관된 경향”이라고 말했다. 그는 “현실적으로 유럽의 규제 작동 방식은 아시아의 비즈니스 방식과 다르기 때문에, 유럽에서 규제가 어떻게 기능하는지와 한국에서 실제로 무엇을 해야 하는지 사이의 연결 고리를 이해하지 못하는 경우가 많다”고 진단했다. 이어 “블랙덕은 한국 시장에서 빠르게 성장하고 있으며 CRA는 매우 큰 사업 기회이자 블랙덕 스스로도 반드시 준수해야 할 규제”라며 “블랙덕이 자체적으로 컴플라이언스를 수행한 경험을 템플릿 삼아 고객과 파트너에게 공유하고 있다”고 덧붙였다.

CRA의 핵심 요구사항은 ▲알려진 서드파티 취약점이 없는 상태로 출하 ▲기본값으로서의 보안(secure by default) ▲신규 취약점 24시간 내 보고 ▲모든 의사결정의 문서화 ▲오픈소스 강력 거버넌스 ▲적합성 선언 등이다. 맥키 총괄은 이러한 의무 전반을 관통하는 철학으로 ‘사이버보안은 제품 품질의 일부’라는 명제를 제시했다. CRA가 보안 사고를 별도의 기술 이슈가 아니라 품질관리 체계 안에서 다뤄야 할 사안으로 규정하고 있으며, 이 때문에 누가 어떤 근거로 어떤 결정을 내렸는지를 테스트 결과와 함께 모두 문서화하도록 요구한다는 설명이다.

CRA 비준수 시 제재는 두 갈래로 작동한다. 글로벌 매출(턴오버)의 일정 비율에 해당하는 과징금이 부과되며, 더 큰 위험은 유럽 시장에서의 제품 퇴출이다. 맥키 총괄은 “한국 제조사가 CRA를 준수하지 않기로 결정한다면 유럽 집행위원회는 해당 제품의 유럽 내 판매를 금지할 수 있다”고 경고했다.

블랙덕은 이러한 대응을 위해 SCA(oftware Composition Analysis, 소프트웨어 구성 요소 분석), SBOM(Software Bill of Materials, 소프트웨어 구성 요소 명세서) 관리, VDR(Vulnerability Disclosure Report, 취약점 공개 보고서), VEX(Vulnerability Exploitability eXchange, 취약점 악용 가능성 문서) 등을 포함한 공급망 보안 체계를 제공하고 있다고 설명했다. 특히 신규 취약점이 공개되면 24시간 내 대응이 요구되는 만큼, 지속적인 취약점 모니터링 체계가 필수라고 강조했다.
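
That 24-hour clock only works if the component inventory is matched against advisories continuously rather than at release time. A minimal sketch of that matching loop, with invented component names and a stand-in advisory feed (the CVE identifier is a placeholder, not a real advisory):

```python
# Illustrative only: continuously match SBOM components against a
# vulnerability feed. Component names and advisories are made up.
sbom = [
    {"name": "openssl", "version": "3.0.7"},
    {"name": "zlib", "version": "1.2.13"},
]
advisories = {  # stand-in for a CVE/VEX feed keyed by (name, version)
    ("openssl", "3.0.7"): "CVE-2023-XXXX",
}

def new_findings(sbom, advisories):
    """Components whose exact version matches a published advisory."""
    return [
        (c["name"], c["version"], advisories[(c["name"], c["version"])])
        for c in sbom
        if (c["name"], c["version"]) in advisories
    ]

print(new_findings(sbom, advisories))  # each hit starts the 24-hour clock
```

A real pipeline would rerun this match on every feed update, not every build, since the reporting window starts when the vulnerability is published.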

At the briefing, McKee also underscored the importance of SBOMs and supply chain management. “In recent years many companies have focused on generating and managing SBOMs, but the complexity of modern software is hard to address with a purely technical approach,” he said. “A single piece of software today connects thousands of suppliers and developers, and transparency and continuous monitoring across that supply chain are what matter.”

That complexity is why McKee describes the modern supply chain not as a linear “chain” but as a tangled “mesh.” In the automotive industry, for example, security depends not only on the vehicle itself but on factory robots, mobile applications, telecom networks, and cloud services, so securing a vehicle requires validating the entire ecosystem.

Asked about the rapid advance of security technology across the AI industry, including Anthropic’s Mythos and OpenClaw, McKee said, “The code quality of AI coding tools is improving quickly, but issues remain. For agent-based security services, the core challenge is how accurately they understand complex context.” He added, “Anthropic has not disclosed enough about how Mythos works in detail or its actual detection results, so it is hard to judge at this point how many of the threats it finds are genuinely serious.”

Black Duck also said it offers its own AI security tool, Signal. “AI can be an excellent code reviewer and is good at finding business logic problems and structural or architectural issues,” McKee said, “but it is not yet at the level of fully replacing existing application security tools.” He added, “AI is changing very fast, and six months from now entirely different capabilities may appear. What matters is continuously understanding AI and tracking the direction of its evolution.”

“The CRA is not just another regulation; it sets a global baseline for software supply chain risk management,” McKee said. “With enforcement imminent in September, any company that has not yet built a CRA strategy should start today.”
jihyun.lee@foundryco.com

SAP’s AI promises last year? Most are still rolling out

SAP made bold promises about AI at Sapphire 2025: Knowledge Graph, Joule Studio, and AI Agent Hub would ship by the end of the year. Those tools are now technically available, but adoption has lagged, and SAP is already announcing version 2.0.

“Joule Studio adoption has been minimal compared to what we’d like,” said Manoj Swaminathan, SAP’s chief product officer for Business Suite, in a briefing ahead of this year’s Sapphire. The tool “was limited to content-based experiences,” he said. “Anytime more complex agents were involved, it had limited capabilities.”

The issue, according to SAP’s chief AI officer Jonathan von Rüden, was that SAP had favored ease of use over power in its original architecture. “People wanted to see more pro-code flexibility,” he said in an interview at Sapphire 2026. “We had gone with a low-code approach. You could give it extension points and tools, but you couldn’t touch the core of it. Now you can build a custom agent, connect it to your own GitHub.”

Customers also came with “big plans” but needed hard rules and approval gates that the original Joule Studio didn’t support natively. “What people want is agentic flows with clear gates and workflows and subagents,” von Rüden said. “Old Joule didn’t provide that. Now it’s all baked together.”

What actually shipped

Joule and AI Agent Hub are generally available, though the latter is now getting a “massive revamp” with version 2.0. The Knowledge Graph is live and has expanded beyond its original scope. Initially used for building Joule skills, it now feeds context directly to AI agents so they can “figure out how to call something dynamically,” von Rüden said.

Joule Studio, however, is still in early customer adoption; general availability is expected in the third quarter — a year behind the original target. Joule Work, the new engagement layer announced this week, isn’t expected until the second half of this year.

What’s different in 2.0

The revamped Joule Studio addresses the gaps that held back adoption. Beyond the pro-code flexibility, developers can now build with popular agent frameworks like LangGraph and AutoGen, and agents will have a native understanding of SAP’s proprietary code and data models that generic tools can’t replicate.

It’s an evolution, not a reset, according to von Rüden. “The first runs were geared toward automation,” he said. “Now agents need to bring optimization and intelligence as well.” Customers, including Ericsson, Mercado Libre, and Siemens, are already using Joule agents in production.

Meanwhile, SAP is rethinking how it gets AI tools into customers’ hands. Joule Desktop, launching this week, lets individual users build automations without going through IT — a bet that grassroots adoption will move faster than centralized rollouts.

Why architecture matters more than ever in AI-driven software development

For most of my technology career spanning more than two decades, software architecture was primarily seen as a technical discipline — concerned with system design, scalability and integration patterns. Architecture reviews happened after strategy decisions were made, and success was often measured by whether systems operated as per the functional requirements and performed reliably under load.

Architecture operated quietly in the background. It guided structure, while governance lived in policy documents and delivery followed predictable engineering processes.

That model is no longer sufficient. Software development is entering a phase that many enterprises did not fully anticipate. Today, the most important architectural questions are no longer technical alone. They are organizational, operational and increasingly regulatory. As AI becomes embedded across the software development lifecycle, architecture is evolving from system design practice into something far more consequential — the mechanism through which enterprises maintain control, trust and accountability in automated environments. This shift is more visible in regulated industries, where innovation must move quickly but failure is measured not only in downtime, but in regulatory exposure and loss of institutional trust.

From what I’m seeing across enterprise programs today, architecture matters more than ever because software development itself is fundamentally changing, and a clear pattern is emerging: architecture has become the foundation of enterprise control and trust.

The quiet transformation of the software development lifecycle

Unlike past technology revolutions, the AI transformation of software development is not arriving through a single disruptive moment. It is unfolding quietly inside everyday workflows.

Engineers are spending less time writing software from scratch and more time supervising systems that generate, configure and evolve software automatically. Development has moved from construction toward orchestration as teams adopt AI assistants to generate code. Testing frameworks are becoming more automated, and deployment pipelines are making optimization decisions independently.

Software development with AI-assisted tools.

Aman Sardana

We are also beginning to see AI agents move beyond assisting developers toward replacing parts of the traditional Integrated Development Environment itself. Tools powered by large language models — such as Anthropic’s Claude — are increasingly capable of reasoning across entire repositories, proposing architectural changes, executing multi-step development tasks and interacting directly with development workflows rather than functioning merely as coding autocomplete tools.

This evolution fundamentally alters the SDLC (Software Development Life Cycle). Development is no longer a sequence of controlled human decisions; it is becoming a collaboration between people, platforms and intelligent systems.

Many organizations still describe this evolution as productivity improvement. I believe that framing misses the deeper implication: the real transformation is not speed — it is control.

This evolution aligns with the industry’s broader shift toward platform engineering and autonomous delivery models discussed in Thoughtworks Insights, which identifies AI-assisted development as a structural change in how software is produced rather than simply a productivity enhancement.

AI-assisted development introduces a new operating model

AI-assisted software development represents more than automation. In my opinion, it introduces a new operating model where development workflows themselves become intelligent systems. The SDLC transitions from linear execution to continuous adaptation. But autonomy without structure creates risk.

Architecture becomes the mechanism that ensures autonomy operates safely within enterprise intent. The question facing CIOs is no longer how fast we can deliver software. It is how do we trust systems that increasingly build themselves?

In regulated industries, this question carries profound implications. Autonomous development without architectural guardrails can introduce compliance gaps, security risks and operational instability at machine speed.

The leadership challenge: Trust over velocity

For years, technology leadership focused on accelerating delivery. Agile, DevOps and cloud adoption are all optimized for speed. AI makes all three easier to achieve.

When development accelerates faster than governance processes, organizations face a dangerous gap. Software may reach production before enterprises fully understand how it behaves, how decisions were derived or whether architectural standards were followed. I believe that the issue is not that AI produces poor software. The real issue is that enterprise oversight mechanisms were never designed for autonomous development velocity.

The importance of trust in technology systems is now reflected across global initiatives focused on responsible digital transformation. Collaborative efforts such as the Global Trust Challenge highlight the growing recognition that AI innovation must be paired with measurable trust frameworks spanning governance, ethics, security and accountability.

With AI-assisted software development, I am increasingly seeing that the leadership challenge is no longer delivery velocity but gaining trust in what is being delivered. Several aspects of software development are becoming growing concerns for senior leadership, especially in regulated industries.

  • Protection of sensitive data
  • Continuous compliance assurance
  • Explainability of automated decisions
  • Operational resilience amid autonomous change
  • Clear accountability when AI-generated systems fail

In regulated industries, trust is inseparable from technology operations. Regulatory compliance, data protection, operational resilience and auditability cannot be retrofitted after software is created. AI compresses development timelines so dramatically that traditional governance checkpoints become ineffective. Manual reviews cannot scale to match machine-generated output.

 The central question for technology leaders is shifting from “How fast can we deliver?” to “Can we trust what is being delivered?”

Architecture as the enterprise control system

Historically, architecture functioned as a blueprint — describing how systems should be built. In AI-driven software development, architecture is evolving from providing blueprints to being the control system. I am increasingly seeing that the role of architecture is shifting to define what AI systems can access, how decisions are validated and how risk is constrained before problems emerge.

Instead of reviewing every implementation, architecture establishes boundaries within which innovation can safely occur. Rather than slowing innovation, architectural governance enables safe autonomy. Developers and AI systems can move faster precisely because constraints are already designed into the environment.

This shift marks the emergence of what I see as the architecture-led enterprise — organizations where architecture aligns innovation, risk management and strategy simultaneously.

Trust will become the real competitive advantage

Over the next decade, AI will make software creation cheaper, faster and more accessible than ever before. Feature velocity will no longer differentiate organizations; what will is how well they maintain trust. Customers, regulators and partners will favor enterprises that can demonstrate:

  • Predictable system behavior
  • Transparent decision processes
  • Secure handling of data
  • Operational resilience
  • Responsible AI adoption

One of the most significant changes I observe is that architects are no longer designing only systems — they are designing decision environments.

In AI-assisted development, architects increasingly define:

  • Approved architectural patterns that guide AI-generated solutions
  • Platform constraints that prevent unsafe implementations
  • Reference models that embed regulatory expectations
  • Engineering workflows that enforce organizational standards automatically

In my view, the goal shouldn’t be to limit developers from using AI tools. The goal should be to ensure that, regardless of who — or what — produces the software, outcomes remain aligned with enterprise intent.

Architecture becomes the invisible structure shaping thousands of development decisions every day.

The changing role of the CIO and chief architect

AI is redefining technology leadership itself.

Historically, CIOs and chief architects focused on modernization initiatives, platform adoption and delivery performance. Today, leadership conversations increasingly revolve around institutional trust.

Executives are increasingly asking:

  • Can we trust AI-generated code operating in production?
  • Do we understand how automated decisions impact customers?
  • Are we scaling innovation faster than our ability to govern it?
  • Is accountability preserved when development becomes partially autonomous?

These questions elevate architecture into strategic leadership territory. The modern CIO is not merely overseeing technology delivery but stewarding an ecosystem of intelligent systems. Architects, in turn, are evolving from project advisors into organizational designers who shape how technology decisions happen across the enterprise.

Architecture is becoming the connective tissue linking innovation, risk management and business strategy. Enterprises with mature architecture practices are discovering they can adopt AI faster because foundational decisions around security, resilience, integration and governance are already institutionalized. Organizations lacking architectural discipline often slow down despite advanced tooling, constrained by uncertainty, risk concerns or operational fragility.

What differentiates organizations now is how safely they can scale innovation.

Final thoughts

The competitive advantage in AI-driven development is shifting away from feature velocity toward architectural maturity.

Humans, machines and automated processes are all contributing to software delivery. Architecture ensures they operate within shared principles. AI may accelerate how software is built, but architecture determines whether that software can be trusted at scale.

Much of the industry conversation around AI focuses on models, tools and productivity gains. Those elements matter, but they are not the defining challenge for enterprise leaders. The real challenge is governance and trust.

I believe that the organizations that succeed in AI-assisted development will not simply adopt better models or faster tools. They will rethink architecture as a strategic leadership function, one that aligns technology execution with enterprise governance, risk management and long-term strategy.

For CIOs navigating AI-driven transformation, architecture is no longer optional. It is what makes innovation sustainable.

This article is published as part of the Foundry Expert Contributor Network.

‘No forced upgrades’: Red Hat unveils RHEL service with indefinite support

Because migration in enterprise environments is structurally complex, expensive, and in some settings simply constrained, many software and cloud vendors offer extended maintenance, technical support, and security updates. Those services, however, usually come with conditions attached or a defined end date.

Red Hat is trying to change that practice with the Red Hat Enterprise Linux (RHEL) Long-Life Add-On, announced at its annual Red Hat Summit conference. The optional support service has no predetermined end date and renews annually, giving enterprises ongoing access to critical security patches, bug fixes, and technical support regardless of which RHEL version they run.

The service extends a premium package built on the 14-year support policy for major RHEL versions that the open-source software company announced in April.

“Most enterprise software vendors use end-of-life dates as a lever to force migration,” said Shashi Bellamkonda, principal research director at consulting firm Info-Tech Research Group. “But Red Hat is reading the situation differently. With infrastructure costs rising and IT organizations overloaded, telling CIOs they can keep their current environment indefinitely with full support is not just a pricing play; it is a clear competitive differentiator.”

Operational support measured in decades

“For a variety of reasons, some customers cannot upgrade or move to another platform, or are not yet ready to,” Gunnar Hellekson, vice president and general manager of Red Hat’s RHEL business, said in a virtual briefing. “With the RHEL Long-Life Add-On, customers can stay safely on their current environment with the consistency and autonomy they need, while still managing risk and regulatory compliance.”

According to Red Hat, the service extends infrastructure support beyond the existing 14 years, offering a decades-long operational path for environments that far outlast standard or extended software lifecycles. It is particularly useful in sectors such as telecommunications, healthcare, and aerospace, where hardware and regulatory lifecycles run for decades.

That lets enterprises in those industries and beyond decouple infrastructure operations from the calendar, modernizing on their own business timetable rather than a vendor-imposed end date. Customers retain access to security patches for vulnerabilities Red Hat classifies as critical, along with urgent bug fixes and around-the-clock technical support for troubleshooting and operational guidance.

“Customers effectively get a near-unlimited lifecycle,” Hellekson said. “The goal is to give them ultimate control over their infrastructure schedule.”

He also noted that even though many companies run fast development cycles, core systems that cannot change at the same pace, or cannot change at all, remain widespread worldwide. Some enterprises, he said, worry about being forced into costly, time-consuming upgrades while their workloads are running stably on a given server.

The service is also a response to shortages of specialized hardware, such as the GPUs needed for artificial intelligence (AI) workloads.

“Even amid hardware shortages, the long-term support option means enterprises are no longer boxed in by those constraints,” Hellekson said. “They can keep running their systems with full support until they can secure new hardware.”

The RHEL Long-Life Add-On applies to all RHEL versions and requires an Extended Lifecycle Premium subscription. It is scheduled for release in the third quarter of this year, and annual renewal lets IT organizations continually reassess their requirements.

An infrastructure ‘anchor’ for the AI agent era

The RHEL Long-Life Add-On is an unusual move in a fast-moving software landscape, but not an entirely new concept.

Enterprise software company Oracle, for example, offers tiered “Lifetime Support” policies for Oracle Database, Oracle Fusion Middleware, and Oracle applications, though that support often carries extra cost or is limited to a set period after a product’s general availability (GA). Oracle also provides “Sustaining Support,” which includes online technical assistance, existing patches, and upgrade rights (use of the latest version without additional licenses) “for as long as you use your Oracle software.”

SAP, by contrast, plans to end mainstream maintenance for ECC (ERP Central Component), its on-premises ERP system, on December 31, 2027. The platform remains a core system for many large enterprises, but SAP is moving customers to its next-generation platform, S/4HANA.

Microsoft (MS) offers Extended Security Updates (ESU) for customers that must keep using legacy products past end of support, but it characterizes the program as a last resort and extends it only to 2028 for certain customers.

Beyond offering more flexibility and consistency than those rivals, Red Hat’s new add-on may also help enterprises adopt agentic AI capabilities without rebuilding their existing stacks.

It can serve as “a potential anchor that lets agents and capabilities evolve quickly while the underlying execution environment stays stable for decades,” said Devin Dickerson, principal analyst at market research firm Forrester.

“That approach matters especially for telecom, defense, industrial, and public-sector customers,” he added.

“Red Hat is building a bridge across the messy realities customers actually face: hybrid environments, private clouds, regulated industries, and long-lived infrastructure that is hard to re-architect around someone else’s agent control plane,” Dickerson said.
dl-ciokorea@foundryco.com

AI is ready to take over Python programming, but not much else

Tests of how well 19 large language models (LLMs) complete complicated multi-step tasks have shown that they are both error-prone and, in many cases, unreliable.

The findings are contained in a preprint paper, LLMs Corrupt Your Documents When You Delegate, written by Microsoft researchers Philippe Laban, Tobias Schnabel and Jennifer Neville, based on a benchmark they created called DELEGATE-52 that allowed them to simulate workflows that might be part of a knowledge worker’s tasks. The paper is currently under review.

They said that the benchmark contains 310 work environments across 52 professional domains including coding, crystallography, genealogy and music sheet notation. Each environment consists of real documents totaling around 15K tokens in length, and five to 10 complex editing tasks that a user might ask an LLM to perform.

And, they stated in the paper’s abstract: “Our analysis shows that current LLMs are unreliable delegates: they introduce sparse but severe errors that silently corrupt documents, compounding over long interaction.”

Those mistakes are significant, they said. “The findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing an average 25% of document content over 20 delegated interactions, and an average degradation across all models of 50%.”
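
Assuming, purely for intuition, that content loss compounds multiplicatively and uniformly per edit (which the paper does not claim), the reported totals imply a small but steady per-interaction leak:

```python
# Back-of-envelope: if losses compound multiplicatively, what per-edit
# retention rate produces the reported totals over 20 interactions?
def per_edit_retention(total_retained: float, n_edits: int) -> float:
    return total_retained ** (1 / n_edits)

frontier = per_edit_retention(0.75, 20)  # frontier models keep 75% overall
average = per_edit_retention(0.50, 20)   # all models average 50% retained

print(f"frontier: ~{(1 - frontier):.1%} lost per edit")  # ~1.4%
print(f"average:  ~{(1 - average):.1%} lost per edit")   # ~3.4%
```

A loss of barely one percent per edit is easy to miss in any single review, which is why the paper describes the corruption as silent.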

Benchmark exercise receives a thumbs up

Brian Jackson, principal research director at Info-Tech Research Group, found the findings very interesting. “Putting a list of LLMs to the test across different work domains yields a lot of useful insights,” he said. “I think this type of benchmark exercise could be helpful to enterprise developers who are looking to leverage agentic AI to automate specific workflows and understand the limits of what can be achieved.”

However, he said, “what we shouldn’t conclude from this is that, because these foundation models caused document degradation after 20 edits, they can’t be used to automate work in a certain field. It just means they can’t do all of the work as they are currently constructed.”

But, Jackson stated, “in an enterprise environment where having an accurate output is crucial, you wouldn’t take that approach. You would design the automation flow with stronger guardrails in place to prevent errors. This could be done by using multiple agents that play different roles, such as one that makes the edits and another that checks for errors and makes corrections.”
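
A minimal sketch of the editor/checker split Jackson describes, with stand-in functions where the model calls would go; only the guardrail structure is the point, not the toy checks:

```python
def edit_document(document: str, instruction: str) -> str:
    # Stand-in for the editing agent; a real LLM call goes here.
    return document + "\n" + instruction

def find_errors(original: str, edited: str) -> list:
    # Stand-in checker: a deterministic guard that the original text
    # survived. A real checker might be a second agent plus diff and
    # schema validation.
    return [] if original in edited else ["original content was dropped"]

def guarded_edit(document: str, instruction: str, max_attempts: int = 3) -> str:
    """Accept an edit only when the checker finds no errors."""
    for _ in range(max_attempts):
        edited = edit_document(document, instruction)
        if not find_errors(document, edited):
            return edited
    return document  # refuse rather than ship a corrupted artifact

print(guarded_edit("Q1 revenue: $4.2M", "Note: figures audited."))
```

The refusal path matters as much as the retry loop: when verification keeps failing, returning the untouched document is the safe default.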

Sanchit Vir Gogia, chief analyst at Greyhound Research, said, “the Microsoft paper should be read as a serious warning about delegated AI, not as a claim that enterprise AI has failed. That distinction matters. The paper is still a preprint, so it deserves careful handling, but its central question is exactly the one CIOs should be asking: can AI preserve the integrity of complex work over repeated delegation?”

The study, he said, is stronger than what he described as “the usual AI benchmark theatre,” because it tests work products, not just looking at clever one-off answers. “It uses reversible editing tasks, domain-specific evaluators, and a round-trip method to see whether a document returns intact after repeated edits. In too many cases, it does not.”

That is the point, explained Gogia. “This is not merely about hallucinations. It is about artefact integrity.”

AI is ‘not yet trustworthy enough’

He added that the headline finding is “uncomfortable: even the strongest models corrupt about a quarter of document content by the end of long workflows, while average degradation across all tested models reaches roughly 50%. The paper also finds that performance varies sharply by domain. Python is the only domain where most models are ‘ready,’ and the best model reaches that threshold in only 11 of 52 domains.”

AI is not failing because it cannot write, said Gogia, it is failing because it cannot yet preserve.

The study, he pointed out, “is especially useful because it shows how errors accumulate. Bigger documents worsen outcomes. Longer interaction worsens outcomes. Distractor files worsen outcomes. Short tests flatter the system, while longer workflows expose it. That maps rather neatly to the enterprise world, where work is messy, files are stale, context is noisy and the most important documents are rarely the simplest ones.”

The honest conclusion, he said, “is not that AI should be kept out of enterprise workflows. It is that delegated AI is not yet trustworthy enough to be left alone with consequential artefacts.”

When AI edits an important document such as a contract, a ledger, a policy, a codebase, a board paper, or a compliance record, Gogia warned, the enterprise still owns the damage.

Mitigation approaches

In order to prevent that damage, Jackson suggested, enterprises can do additional training and fine-tuning of models to be better adapted to their specific workflows: “These foundation models are very good at doing a lot of different tasks, but less good at doing one specific task very well. So, enterprises that want to achieve that may need to improve the models themselves by training on their own data.”

For example, “[the Microsoft paper] points out one multi-agent setup that led to more degradation instead of less, so the method to detect degradation must be well-designed to be effective,” he said. “Another approach that some enterprise platforms have introduced is a way to deterministically verify the output for accuracy using mathematical verification. So, knowing what domains prove more difficult for a single LLM to automate is useful, as developers can plan to add more verification steps to the process.”

He said, “depending on the model, for example, if it’s totally open source or if it’s proprietary, you can have more flexibility in terms of how much you can customize it. So, an enterprise developer might look at these results, pick the LLM best at automating their desired domain, and then send it in for additional training to master the process.”

People do not disappear

According to Gogia, the paper also shows something more precise than ‘AI still needs people.’ “It shows that AI changes the human layer from production to supervision, validation, and accountability. That is a rather different operating model from the one being sold in many boardroom conversations.”

People, he said, “do not disappear. Their work moves. This is the uncomfortable part for enterprises chasing headcount reduction. The people best placed to catch AI errors are often the same people organizations are hoping to replace, reduce, or redeploy. Remove too much domain expertise from the workflow, and the enterprise also removes the people who know when the AI has quietly damaged the work.”

Expertise becomes more valuable, not less, said Gogia: “The paper reinforces this because stronger models do not merely delete content. They often corrupt it. Weaker models are easier to catch when they visibly drop material. Frontier models are more awkward because the content remains present but becomes wrong, distorted, or subtly altered. That requires knowledgeable review, not casual inspection.”

GitLab: ‘Developer tool costs up as much as 100x’ as AI transforms pricing

GitLab CEO Bill Staples said the monthly cost of enterprise developer platform services has climbed from tens of dollars per seat to hundreds over the past year, and is headed toward thousands, calling it a structural change in how AI-powered software development tools are priced.

In an open letter to customers, investors, and employees titled “GitLab Act 2,” Staples wrote that AI agents are “creating merge requests in parallel, running pipelines around the clock, and generating commits at a pace no human team has ever achieved,” adding that this shift is the key driver of rising costs.

According to the letter, GitLab introduced consumption pricing for AI agent work earlier this year and plans to let customers blend consumption-based and subscription pricing going forward.

The announcement comes as software companies restructure their businesses around autonomous AI systems and shift toward usage-based pricing models.

Rival GitHub earlier moved its AI coding tool Copilot to usage-based billing, a response to surging infrastructure demand from AI-driven coding. Microsoft (MS), Meta, and Oracle have likewise announced reorganizations tied to expanded AI investment strategies.

Why costs are rising

“This is a structural shift, and the root cause is growing compute consumption,” said Nitish Tyagi, senior analyst at market research firm Gartner. “Most AI coding agent vendors are moving to consumption-based pricing, and the trend is no longer confined to startups.”

Gartner forecasts that by 2028 the cost of AI coding will exceed the average developer’s salary, driven by growing large language model (LLM) token consumption and the spread of usage-based licensing. “This is not a temporary adjustment but a structural repricing across the board,” Tyagi said.

Gartner’s research also finds that about 29% of enterprises currently spend $200 to $500 per developer per month on AI tokens, though Tyagi cautioned against assuming that level will hold. “Developers are quickly moving from casual users to core users to power users,” he said. “Once AI agents are deeply embedded in everyday work such as code generation, testing, refactoring, and documentation, spending of $2,000 or more per month is entirely plausible.”
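
To put those figures in enterprise terms, a rough and purely illustrative projection (flat monthly rates approximating the tiers quoted above; real consumption pricing would vary month to month):

```python
# Illustrative per-developer AI token spend at three usage tiers,
# loosely based on the Gartner figures quoted above (assumed flat rates).
TIERS = {"casual": 200, "core": 500, "power": 2_000}  # USD / developer / month

def annual_cost(developers: int, tier: str) -> int:
    return developers * TIERS[tier] * 12

for tier in TIERS:
    print(f"{tier:>6}: ${annual_cost(5_000, tier):,} / year for 5,000 developers")
```

At 5,000 developers, the spread between the casual and power tiers is roughly $12 million versus $120 million a year, which is why metering, monitoring, and governance dominate the buyer-side discussion below.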

What changes for buyers

Sanchit Vir Gogia, chief analyst at consulting firm Greyhound Research, said the shift redefines the role of seat-based pricing rather than eliminating it.

“Seat-based pricing is not disappearing from enterprise developer platforms; it is becoming less important,” Gogia said. “In an environment where AI agents substitute for, assist, and sometimes run ahead of human developers, seat count is no longer the core unit that explains value, cost, and risk.”

For the next two years, a hybrid of subscription and consumption pricing looks like the realistic model. “Subscriptions will persist because vendors need a minimum revenue base and customers want a predictable baseline cost,” Gogia said. “Machine work, by contrast, has highly variable costs, so consumption pricing is bound to expand.”

“For CIOs, it will end up feeling like the familiar cloud cost cycle: commit, consume, monitor, govern, and argue about the bill,” he added.

He also stressed that for an enterprise with 5,000 developers, moving from seat-based to consumption-based pricing changes not just the cost but how it is managed. “Seat-based contracts were a relatively simple structure, negotiating discounts and renewals against headcount,” Gogia said. “Consumption pricing breaks that familiarity and turns software development spend into a metered cost that moves in real time.”

Restructuring

GitLab is not stopping at pricing changes.

The company plans to remove up to three layers of management in some organizations and reorganize its research and development (R&D) arm into roughly 60 small teams. It is also embedding AI agents across approval steps, workflow routing, and operational processes.

“The company grew into a structure that fit a previous era, but it no longer fits the current environment,” Staples said.

GitLab is also cutting staff as part of the restructuring, a plan it disclosed Monday in an 8-K filing with the US Securities and Exchange Commission (SEC). It did not say how many employees would be affected; the final scale will be disclosed at its June 2 earnings call. GitLab currently employs about 2,500 people.

The restructuring also includes reducing the company’s country-level operational footprint by as much as 30%.

A voluntary departure window for employees who want to leave closes May 18. GitLab expects to finalize its new organizational structure by June 1 and will follow local labor law processes in the affected regions.

GitLab said the restructuring is also tied to a rebuild of its platform, including redesigning Git for machine scale and reworking CI/CD into an orchestration runtime for AI agents.

“This change goes beyond a single company’s announcement,” Gogia said. “Platforms are increasingly evolving into software production systems. Before scaling adoption, CIOs need to draw clear cost, governance, and operational boundaries and tie them together.”
dl-ciokorea@foundryco.com

SAP’s biggest AI bet yet: Agents that execute, not just assist

At Sapphire 2026, SAP unveiled what it calls the “Autonomous Enterprise”: a sweeping vision in which AI agents don’t just assist workers, but execute business processes themselves. “We’re building nothing less than a new SAP,” CEO Christian Klein told attendees in Orlando, Florida. The company, he said, is “becoming a business AI company.”

The centerpiece is the SAP Autonomous Suite, which deploys more than 50 domain-specific SAP Joule AI assistants across finance, supply chain, procurement, HR, and customer engagement. Those assistants orchestrate a subset of 200-plus specialized agents to execute tasks end-to-end, from compressing the financial close to automating supply chain rebalancing.

Klein emphasized that enterprise AI demands precision. “If AI runs payroll, financial close, or supply chain planning, 80% accuracy is not good enough,” he said.

A new platform and a new interface

Underpinning the suite is SAP’s new Business AI Platform, which unifies SAP’s Business Technology Platform, Business Data Cloud, and AI capabilities into a single governed environment. At its core is what SAP calls “company memory,” a context graph that feeds policies, procedures, Slack conversations, and email approval chains to agents so they know what to do, and, critically, what not to do.

“When there’s an exception, it’s added to company memory and all agents adapt instantly,” said Muhammad Alam, executive board member responsible for product engineering.
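
Purely as an illustration of that shared-memory idea (nothing here reflects SAP’s actual implementation), the claim amounts to every agent reading from one mutable store of recorded exceptions before it acts:

```python
# Illustrative sketch of a shared "company memory": a store of exceptions
# that every agent consults before acting. All names are invented.
company_memory = []  # shared across all agents

def record_exception(rule: str) -> None:
    """Adding a rule once changes every agent's behavior immediately."""
    company_memory.append(rule)

class Agent:
    def plan(self, task: str) -> str:
        # Every agent reads the same memory, so no per-agent retraining
        # or redeployment is needed when an exception is recorded.
        constraints = "; ".join(company_memory) or "none"
        return f"{task} (constraints: {constraints})"

record_exception("POs over $50k need CFO sign-off")
print(Agent().plan("approve purchase order"))
```

The design choice the quote implies is that adaptation happens through shared context rather than per-agent updates, which is what makes the “adapt instantly” claim plausible.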

SAP also introduced Joule Work, which fundamentally changes how users interact with SAP software. Instead of navigating applications and entering data across screens, users describe a desired outcome, and Joule orchestrates the workflows, data, and agents to get it done.

For developers, SAP launched Joule Studio 2.0, available free through year-end, which lets them build agents with Python, Claude Code, or Cursor, and deploy to a managed runtime. The AI Agent Hub, arriving in the third quarter at no additional charge, provides a single place to discover, manage, and govern agents across SAP and non-SAP systems.

Partners and proof points

SAP brought key partners on stage, and on screen, to underscore its AI ambitions. In video appearances, Anthropic President Daniela Amodei said that Claude models power Joule agents across finance, procurement, and supply chain, and Nvidia CEO Jensen Huang discussed open agent protocols that allow AI to act safely within enterprises.

JPMorganChase CFO Jeremy Barnum said that the bank is upgrading its general ledger to SAP’s unified platform and is exploring agentic capabilities for treasury management. “You can’t realize the full potential of AI in a legacy environment,” he said.

A number of customers are already in production. For example, according to Rob Fisher, KPMG’s global head of advisory, the company has deployed Joule across 270,000 users, with 3,000 consultants using 20 agents, and the company is targeting $120 million in reduced contract leakage.

In addition, Ericsson reported 90,000 hours saved through the use of personalized AI recommendations by its 85,000 employees. Bayer is using cash-collection assistants; Novartis has deployed high-volume sourcing agents; and H&M has demonstrated a store-intelligence system that delivers real-time performance data and AI-driven recommendations to store managers.

Mind the gap

Still, adoption trails ambition. Maribel Lopez, founder of Lopez Research, said enterprises aren’t deploying what’s already available. “SAP customers are very cautious because the SAP workloads are at the heart of running the business,” she said.

Mickey North Rizza, group vice president of enterprise software at IDC, is more optimistic. “Currently, 73% of AI agents and assistants are used frequently, and bring 30 to 90 minutes a day in savings,” she said. “SAP’s AI vision is a north star for their clients to move successfully into the AI world.”

SAP’s Alam added that customers have grown impatient with the company’s AI promises and have been taking him to task. Referring to the financial close assistant, one demanded: “Is it really there? If it’s three months out, I’ll go build it myself.” That’s creating a new sense of urgency, Alam said.

Trust, but verify

SAP has made governance a central focus, Alam noted. The company has built SOX auditor compatibility into its framework to ensure audit readiness at the agent level, and every action is logged and traceable.

But Jonathan von Rüeden, SAP’s chief AI officer, acknowledged that customers have different comfort levels with autonomy, depending on the process. “In a financial close process, the CFO is going to want to have a look when books are being closed,” he said in an interview. “But people are more comfortable with autonomous accruals.”

SAP is also prioritizing interoperability. Agents built in Joule Studio will support the A2A protocol to connect with third-party agents, and SAP’s orchestration layer will govern non-SAP agents at no additional cost.

The road to autonomy

To accelerate adoption, SAP has updated its offerings. RISE with SAP customers will get three Joule assistants activated in their first year, while GROW with SAP customers will get access to the full portfolio of agents upon onboarding. Agent-led transformation tooling can reduce migration efforts by approximately 35%, according to SAP.

“[But] right now, customers don’t need thousands of agents; they need to get agentic AI up and running with a set of secure, governed agents that help them do specific use cases,” Lopez said. “Customers need to ask what the vision is, tie it back into their needs, and then set up the journey.”

SAP customers say migration is eating their budgets—and AI is next in line

SAP has unveiled its most ambitious AI vision yet at Sapphire 2026 this week: more than 50 Joule Assistants, 200 specialized agents, and a new “Autonomous Enterprise” framework. But for many SAP customers, who are still navigating costly migrations to SAP’s S/4HANA cloud ERP, the more pressing question isn’t what AI can do. It’s whether they can get there from here.

Budget constraints are now the top challenge facing SAP customers, cited by 61% of respondents in the Americas’ SAP Users’ Group (ASUG) 2026 Pulse of the SAP Customer survey, 7 percentage points higher than last year. And the culprit isn’t macroeconomic pressure, according to ASUG. It’s the migrations themselves.

“From our research, it’s more so that S/4HANA projects are creating the budget pressures,” said ASUG’s research director Marissa Gilbert. “And we’ll see AI impacting them next.”

Stuck in pilot mode

Separate ASUG research on AI adoption, conducted in February 2026 in collaboration with Microsoft and Intel, revealed just how far most organizations are from SAP’s autonomous vision. The survey of 142 ASUG members found that while 41% are actively piloting AI and 39% are building foundational knowledge, only 24% have moved to active deployment. Just 10% have achieved enterprise-scale rollout.

“AI adoption is advancing, but turning pilots into enterprise deployment remains the key challenge,” the ASUG 2026 SAP AI Research stated. “The largest drop-off occurs when moving from pilot activity into operational rollout.”

The barriers are familiar: security, privacy, and governance concerns (32%), budget constraints (27%), and lack of internal AI skills (27%). What organizations say they need to move forward is equally telling: clear governance frameworks (40%), alignment with business processes (39%), and compelling use cases (39%).

Even among organizations that are evaluating AI’s business impact, few do so rigorously. While 56% say they assess AI ROI, only 18% use formal frameworks; the rest do so informally or on a case-by-case basis. And there’s a significant gap between ambition and accountability: 61% of organizations aim for AI to drive cost reduction, but only 31% consistently measure whether it’s happening.

As one survey respondent put it: “There is not enough focus on delivering value via AI rather than just using AI. Investment is currently not justifying the outcomes.”

A matter of perspective

Not everyone is worried. Mickey North Rizza, group vice president of enterprise software at IDC, paints a more optimistic picture. “More than half of organizations have AI agents embedded in core workflows; 20.5% are scaling to agents gradually,” she said. “The point is, companies are moving and moving fast. SAP’s AI vision is a north star for their clients to move successfully into the AI world.”

But Maribel Lopez, founder of Lopez Research, sees it differently. AI capabilities may be available, she said, but adoption lags far behind. “SAP customers are very cautious because the SAP workloads are at the heart of running the business,” she said.

Chase Christensen, segment CIO at St. Petersburg, Florida-based manufacturing services company Jabil, offered a more tempered view. “It feels like SAP is moving faster than in the past,” he said. “It’s still bumpy. But I’m seeing a path where this doesn’t require multi-year transformations to get value out of innovation.”

Christensen was candid about what SAP still needs to deliver: support. “Give us the support rather than have us deal with a mix of systems integrators to figure it out,” he said.

Migration progress, at a cost

ASUG data shows that 56% of customers are either live on S/4HANA or have begun their migration, up from 45% in 2024. The share of customers planning to wait more than two years to make the move dropped from 22% in 2023 to just 9% in 2025. And S/4HANA as a top challenge has “stabilized compared to last year,” Gilbert said.

But that momentum comes at a cost. Reports of budget pressure jumped seven points year-over-year. Data challenges are intensifying; the percentage of respondents struggling to gain actionable insights from data rose eight points year-over-year to 35%. And nearly half (48%) cited integration as a challenge.

Hybrid environments also appear to be here to stay. While on-premises deployments are declining sharply, 28% of customers remain in hybrid scenarios. “Based on our historical data, I think we can expect hybrid scenarios to remain,” Gilbert said.

SAP’s answer: decentralize

Jonathan von Rüeden, SAP’s chief AI officer, acknowledged the adoption gap at Sapphire. “We may have over-indexed as an industry on centralization,” he said.

His pitch: give AI tools directly to business users through the new Joule Desktop, launching this week, and let adoption flow back to IT organically. “If you get it in people’s hands, that needs to flow back into IT eventually,” von Rüeden said. “Now is the time to do that.”

That approach may find a receptive audience. The ASUG research shows that Microsoft dominates current AI tool adoption (72% of respondents use Copilot), but SAP is positioned for growth; more than half of respondents said they plan to use the embedded AI features in SAP applications and Joule agents.

However, whether budget-constrained IT organizations can absorb another wave of change remains an open question.

Employee AI fatigue vs. executive expectations: the CIOs caught in between

In 2024, cloud software company BlackLine rolled out an AI agent called “Buckie,” a knowledge base that let employees easily ask HR and IT questions. Within a year, however, the company was forced to move on. “The technology evolved too quickly,” says CIO Sumit Johar. By June of the following year, BlackLine had migrated to Google Gemini Enterprise, and employees have since built nearly 300 AI agents of their own.

This rapid wave of AI adoption is complicating the CIO’s job. On the employee side, a steady stream of new tools and processes is fueling “AI fatigue”: burnout driven by new workflows and time savings that fail to materialize. Meanwhile, boards press CEOs for AI adoption and results, leaving CIOs squeezed between leadership and the front lines. Doug Gilbert, CIO of global IT consulting firm Sutherland, says AI deployment failure rates currently run as high as 90%. His view: moving deliberately and getting it right ultimately delivers results faster than rushing.

Why employees are tiring of AI

Riley Stricklin, founder and CSO of AI integration firm Cadre AI, says AI fatigue is spreading not because employees oppose AI, but because they are overwhelmed by new tools, new expectations, and constant change. Early AI adoption takes time, temporarily increasing workloads before the promised time savings arrive. And just as employees begin to get comfortable with a new technology, something newer appears and changes everything again. “That’s what creates AI fatigue. The pace of change is too fast for employees,” Johar says.

AI fatigue is most likely to set in when AI is bolted onto existing processes, Gilbert says, for example when employees are asked to copy and paste data from one program into ChatGPT or another LLM. And when AI isn’t properly integrated with company data, LLMs can hallucinate and produce low-quality output. Both wear down employees who had expected efficiency gains.

By contrast, CIOs who rethink entire workflows and build AI into the work itself, rather than wedging it into existing systems, are succeeding. If AI is integrated seamlessly, employees may not even notice it is there, and the efficiency gains follow.

Pressure from every direction

Gilbert says top-down pressure is one reason organizations end up taking an ad hoc approach. Boards and CEOs see other companies’ AI success stories and decide their company needs the same, and the demand rolls down to the CIO, who then feels pressure to deploy solutions quickly.

Investors, too, expect AI to slash costs and press leadership for immediate proof of ROI. But “people don’t always understand that you have to spend money before you can save it,” Johar says. McKinsey research found that only 39% of participating companies reported any AI-related impact on earnings, a sign that most AI programs have yet to produce meaningful financial results.

The pressure doesn’t only come from above. Requests from other departments to adopt AI tools, like the ones described at the outset, are up 25%. IT has to evaluate each incoming request one by one, and fatigue on the IT side is serious as well. And because AI itself changes so fast, a technology may already be outdated by the time it has been evaluated and a purchasing decision made. It also falls to the CIO and their team to bring governance and structure to the nearly 300 AI agents employees have built themselves, ensuring compliance with data privacy and security policies.

How to talk about AI: the narrative that wins employees over

Stricklin says that for CIOs to succeed with AI, it is important to set concrete business goals, such as improving revenue, cutting costs, or shortening processing times, before any deployment. But to get employees to embrace AI, he notes, a different approach works better than stressing the business benefits.

What does AI’s value look like from the employee’s point of view? Getting work done more effectively, building new skills, and the like. Johar recommends talking about AI in those terms, which makes employees “much more willing to invest their time.” Gilbert likewise stresses shifting the narrative: AI is not a headcount-reduction tool but something that works alongside employees, with humans staying involved to fine-tune models and continuously improve the accuracy of AI output.

A study commissioned by Google Workspace found that executives were 15% more likely than rank-and-file employees to say AI had a significantly positive impact on their company. Closing that perception gap is a key part of the CIO’s role.

Stricklin also advises against trying to roll out AI across the entire organization at once: pick two or three priority areas to focus on over the next six months and involve employees in working out the best way forward. “If you try to solve everything at the same time, you create more confusion than results.” Deciding where not to use AI, he says, is just as important.

“AI is not the answer to every business problem. If you judge that it isn’t needed, don’t be afraid to push back on the CEO or the board,” Gilbert says. “Sometimes AI isn’t the answer.”

GitLab CEO sees developer tool bill increasing 100-fold

GitLab CEO Bill Staples says enterprises’ monthly bill for developer platform services has risen from tens of dollars per seat to hundreds over the last year, and is headed toward the thousands, signaling a structural change in how they will be billed for AI-enabled software development tools.

The increase in cost reflects the volume of work AI agents generate inside development pipelines, Staples wrote in an open letter to customers, investors, and employees titled “GitLab Act 2.” “Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did,” he wrote.

GitLab introduced consumption pricing for agent work earlier this year and will now allow customers to mix consumption and subscription pricing, the letter said.

The announcement comes as software vendors increasingly reposition themselves around autonomous AI systems and usage-based pricing.

Earlier this year, rival GitHub moved Copilot toward usage-based billing as AI-assisted coding workloads increased infrastructure demands. Large technology vendors including Microsoft, Meta, and Oracle have also announced restructuring efforts tied to broader AI investment strategies.

Why the bill is rising

Nitish Tyagi, senior principal analyst at Gartner, said the shift is structural and the underlying driver is compute consumption. “Almost all AI coding agent vendors are moving toward a consumption-based pricing model. This shift is no longer limited to startups,” Tyagi said. Gartner has predicted that by 2028, AI coding costs will overtake the average developer’s salary, driven by rising LLM token consumption and the spread of consumption-based licensing. “This signals a structural pricing reset rather than a temporary adjustment,” Tyagi added.

Tyagi said Gartner research shows 29% of organizations today report AI token costs of $200 to $500 per developer per month, but cautioned that CIOs should not assume this will remain the norm. “Developers are rapidly shifting from light users to mainstream and power users. Power users can easily consume more than $2,000 per month, particularly when AI agents are embedded into daily workflows such as code generation, testing, refactoring, and documentation,” he said.
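To show how per-developer token spend can climb from the $200–$500 band into power-user territory, here is a back-of-the-envelope cost model. All request volumes, token counts, and per-token prices below are illustrative assumptions for the sketch, not Gartner figures or any vendor’s actual rates.

```python
# Illustrative model of monthly AI token spend for one developer.
# Every number here is a hypothetical assumption, not real pricing.

def monthly_token_cost(
    requests_per_day: int,
    input_tokens_per_request: int,
    output_tokens_per_request: int,
    price_in_per_mtok: float,   # $ per million input tokens (assumed)
    price_out_per_mtok: float,  # $ per million output tokens (assumed)
    workdays: int = 21,
) -> float:
    """Return estimated monthly cost in dollars for one developer."""
    tokens_in = requests_per_day * input_tokens_per_request * workdays
    tokens_out = requests_per_day * output_tokens_per_request * workdays
    return (tokens_in / 1e6) * price_in_per_mtok + (tokens_out / 1e6) * price_out_per_mtok

# A "light" user vs. a "power" user with agents running all day; power
# users consume far more input tokens because agents carry large contexts.
light = monthly_token_cost(100, 25_000, 2_000, 3.0, 15.0)
power = monthly_token_cost(500, 50_000, 3_000, 3.0, 15.0)
print(f"light user: ${light:,.0f}/month")   # lands in the $200-500 band
print(f"power user: ${power:,.0f}/month")   # exceeds $2,000
```

The driver is visible in the arithmetic: quintupling request volume and doubling context size multiplies input-token spend roughly tenfold, which is why "light to power user" drift dominates the bill.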

What changes for buyers

Sanchit Vir Gogia, chief analyst at Greyhound Research, said the move does not eliminate per-seat pricing but redefines its role. “Per-seat pricing for enterprise developer platforms is not dead. It is being demoted,” Gogia said. “The seat is no longer the unit that explains value, cost or risk once AI agents begin producing work on behalf of, beside and sometimes ahead of human developers.”

According to Gogia, hybrid commercial architecture, rather than the death of subscriptions, is the realistic outlook over the next two years. “Subscriptions will remain because vendors need economic floors and buyers need predictable baselines. Consumption will expand because machine work has variable costs,” he said. “The compromise will be familiar to CIOs who have already lived through cloud economics: commit, consume, monitor, govern, and then argue about the bill.”
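The commit-and-consume pattern Gogia describes can be sketched as a simple billing function: a per-seat subscription floor, a pre-paid consumption allowance, and a metered overage charge for agent work beyond the commitment. All rates and quantities here are hypothetical, not GitLab’s or any vendor’s actual pricing.

```python
# Minimal sketch of a hybrid subscription-plus-consumption bill,
# in the "commit, consume" shape familiar from cloud contracts.
# All rates and quantities are hypothetical.

def hybrid_monthly_bill(
    seats: int,
    seat_price: float,      # fixed per-seat subscription (the floor)
    committed_units: int,   # pre-paid consumption units included
    consumed_units: int,    # metered agent work actually consumed
    overage_price: float,   # $ per unit beyond the commitment
) -> float:
    subscription = seats * seat_price
    overage = max(0, consumed_units - committed_units) * overage_price
    return subscription + overage

# A quiet month stays at the floor; an agent-heavy month blows past it.
quiet = hybrid_monthly_bill(5000, 30.0, 100_000, 80_000, 0.05)
busy = hybrid_monthly_bill(5000, 30.0, 100_000, 900_000, 0.05)
print(quiet, busy)
```

The asymmetry is the point: the seat count caps the downside for the vendor, while the meter exposes the buyer to the variable cost of machine work, which is why consumption turns spend into something that must be monitored continuously rather than approved once at renewal.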

Gogia said a 5,000-developer enterprise moving from per-seat to consumption pricing is changing not just the cost line but the rhythm of governance. “A per-seat contract gives procurement a tidy fiction: count the humans, negotiate the discount, approve the renewal and move on. Consumption pricing breaks that comfort. It turns software development spend into a living meter,” he said.

Restructuring

GitLab is not just changing its pricing model.

The company plans to remove up to three layers of management in some functions, reorganize research and development into roughly 60 smaller teams, and integrate AI agents into approvals, workflow routing, and operational processes.

“We grew into a shape that was right for the last era and isn’t right for this one,” Staples wrote in the letter.

GitLab will cut jobs as part of the restructuring, the company said in an 8-K filing with the US Securities and Exchange Commission on Monday. GitLab did not disclose the number of employees affected and said the final scope would be shared on its June 2 earnings call. The company employs approximately 2,500 people.

GitLab said in the letter the restructuring includes a reduction of its country footprint by up to 30%.

A voluntary separation window for employees who do not wish to continue with the company closes on May 18. GitLab said it would finalize the new structure by June 1, and that local labor law processes would be followed in affected jurisdictions.

The restructuring is tied to a platform rebuild, GitLab said, including re-engineering Git for machine scale and reworking CI/CD into an orchestration runtime for AI agents.

For CIOs, Gogia said, the reframing of developer platforms is bigger than any single vendor’s announcement. “They are becoming software production systems. That requires a different buying model. It requires cost boundaries, governance boundaries and operational boundaries, all tied together before adoption scales,” he said.

This article first appeared on InfoWorld.

The global challenge for CIOs

As if CIOs didn’t already have enough to contend with amid surging demand for AI across the enterprise, today’s volatile geopolitical climate is forcing them to adopt a more global mindset in everything from their IT supply chains and regulation to distributed infrastructure and the workforce.

Lately the news has been full of the Iran war’s impact on technology: drone strikes damaging AWS data centers in Bahrain and the United Arab Emirates, along with rising cyberattacks and the threat of semiconductor shortages. “This is not simply a war of missiles and militias, but a war that spans the networks, supply chains, and systems that power the modern world,” writes S. Yah Kalash, senior researcher at the Centre for International Governance Innovation. “Its most lasting consequence will not be territorial change, but the acceleration of a fractured, contested, and deeply politicized global technology order.”

Although IT departments have received modest budget increases to fund AI initiatives over the past year, C-suite economic concerns about rising energy costs, persistent inflation, and a weakening global economy will directly affect technology organizations, according to a recent Forrester report. “CIOs will once again have to double down on managing IT spend through prioritization, selective cost cutting, strict vendor management, and other techniques,” the report states. “Skepticism about AI ROI will grow, forcing both CIOs and CISOs to defend their investments.”

As in any period of global instability, it is vital that IT leaders keep fostering open communication with the rest of the executive team to set achievable goals and manage expectations effectively. “Stay closely connected with the rest of senior leadership and understand how the volatility is affecting the business,” says Mark Moccia, vice president and research director at Forrester. “This may lead to near-term cuts to the technology budget due to [P&L] pressures.”

“A perfect storm of challenges” for IT leaders

IT leaders say they are undaunted by the challenge of geopolitical tensions. Tadas Tamošaitis, CTO of Vilnius, Lithuania-based FL Technics, an independent global provider of aircraft maintenance, repair, and overhaul (MRO) services, knows them well. The aviation industry operates at the crossroads of some of the world’s most demanding regulatory, geopolitical, and supply chain pressures, and it falls to the IT team to keep everything running in real time, he says. “Airlines expect aircraft back in service on tight schedules, so any disruption to our digital systems has immediate, measurable consequences on the ground,” Tamošaitis explains.

Right now the global MRO industry is weathering what he calls “a perfect storm of challenges,” as post-pandemic demand spikes strain already fragile aerospace supply chains. Lead times for critical parts are hitting record lengths, Tamošaitis says.

“Geopolitical tensions, from tighter export controls on dual-use technologies to airspace restrictions affecting where and how we move aircraft and parts, are constantly redrawing the logistics map,” he says.

At the same time, global regulators and a growing number of national aviation authorities are changing their digital and data requirements in ways that don’t always align, Tamošaitis adds. “For an MRO company operating across multiple jurisdictions, this creates real complexity: the same data about an aircraft’s configuration may need to be stored, accessed, and shared differently depending on which authority has oversight,” he says.

On top of all that is the constant threat of cyberattacks. “MRO companies hold highly sensitive engineering data, aircraft configurations, maintenance histories, logistics systems, which makes them attractive targets,” Tamošaitis reflects. “A successful cyberattack doesn’t just disrupt IT systems; it can ground aircraft. We have to build systems that are isolated and resilient enough to withstand attacks while remaining connected enough for 24/7 global coordination.”

FL Technics recently completed its acquisition of Job Air Technic, which Tamošaitis says added a major systems integration dimension: a complex undertaking that required consolidating the legacy IT infrastructure, data stores, and operating procedures of the acquired entities without interrupting around-the-clock maintenance operations. “It’s one thing to plan an IT integration on paper; it’s quite another to execute it when an aircraft rolling into the hangar at 2 a.m. can’t wait for a data migration to finish,” he says.

Global challenges reshape IT stacks and strategies

For Moe Rosenfeld, CIO of New York-based eCopier Solutions, supply chain problems are what keep him up at night. “The hardware side of document management runs through a global manufacturing chain, and geopolitical tension shows up in lead times, component availability, and vendor stability in ways that weren’t on my radar five years ago,” he explains. “You have to think about your tech stack the way a logistics manager now thinks about shipping routes.”

IT leaders used to evaluate technology mainly on features and cost, he notes. “Now I ask questions I never used to ask, like where is this vendor headquartered, where are their servers physically located, what happens to my customers’ data if there’s a trade disruption or a sanctions situation affecting that vendor’s home country.”

Global supply chains are forcing a fundamental shift in how IT architectures are designed, agrees Victoria Ma, director of digital services and innovation for the US and Canada at supply chain consultancy Miebach, which operates on four continents. “Instead of optimizing a single, centralized model, organizations now have to support a network spanning multiple regions, regulatory environments, and operating models,” she says. “That introduces significant complexity in systems and data. Planning and execution systems must integrate not only across internal functions, but with external partners, suppliers, manufacturers, and logistics providers, who often run different platforms and standards in different parts of the world.”

Identifying AI use cases that work across regions

Navigating AI adoption in a globally distributed operating environment is another critical pain point. Remi Alli, CIO of Black Wallet, the parent company of token ecosystem Kiros, says agent sprawl, the chaos of hundreds of autonomous AI tools appearing across business units with no coherent plan, is a worldwide headache. “The biggest obstacle is maintaining unified, secure infrastructure while [facing] regional regulations, such as the new AI transparency laws. That makes a one-size-fits-all approach impossible and forces us to move from centralized global sourcing to more complex regional strategies,” Alli says.

Black Wallet is addressing this by creating “AI Councils” that act as gatekeepers to evaluate use cases, ensure data compliance, and prevent fragmented architectures. “We’re also replacing annual planning with quarterly tech-business co-creation workshops to make sure our AI projects actually improve ROI rather than just experiment,” Alli says.

AI and the global regulatory landscape are “turning an already complicated situation into something truly chaotic,” Rosenfeld says. “The EU AI Act is underway, the US is still defining its federal approach, individual states are acting on their own, and other countries are doing all of the above at different speeds and with different philosophies.”

For anyone managing cross-border data flows, tools with embedded AI now carry a regulatory weight that didn’t exist two years ago, he says. “And the hard part is that the rules aren’t fully written yet. You’re making infrastructure decisions today against a compliance target that is still moving. It’s not a comfortable place to be, but it’s where we are.”

Complicating things further, while boards and executives are pushing hard to accelerate AI strategies, organizations are struggling to identify use cases that hold up across regions with differing data quality, regulatory requirements, and levels of operational maturity, according to Ma.

At the same time, the vendor landscape is crowded with “AI-powered” platforms that often don’t translate well to global supply chains, Ma observes. “A solution that works in one region may not scale because of differences in data availability, infrastructure, or compliance constraints,” she says. “That puts IT leaders in a tough position: they have to filter vendor claims while making sure the technologies they select can perform consistently across a fragmented global landscape.”

The challenge is less about adopting AI quickly than about building a scalable, regionally adaptable foundation that delivers measurable value across the network, Ma says.

Tackling compliance

Staying compliant across borders is another major challenge. As far as Rosenfeld is concerned, no one is talking enough about how fast the scope of compliance has expanded. “A few years ago, you thought about HIPAA, maybe some state-level privacy rules,” he says. “Now I look at clients with distributed workforces and ask whether their document workflows touch EU data subjects, because if they do, GDPR comes into play whether they invited it or not.”

And that’s before keeping up with AI regulations still being written in real time across different jurisdictions, Rosenfeld adds.

Elijah Fernández, co-founder and CTO of virtual behavioral health platform Cerevity Health, agrees, saying his remit has shifted from managing physical networks to protecting a highly dispersed clinical workforce and wrestling with fragmented data regulation. “In health tech, data sovereignty and cross-border regulations are constantly moving targets. When your workforce is highly dispersed, traditional perimeter defense fails completely. You’re no longer protecting a corporate office building,” Fernández says. “You’re protecting hundreds of individual endpoints operating on different local networks, often subject to overlapping or conflicting regional privacy laws.”

For FL Technics’ Tamošaitis, the mantra is keeping flexibility within structure. “We’re deploying unified ERP and logistics platforms that operate across all regions while respecting local regulatory silos: data residency rules, export controls, and jurisdiction-specific compliance requirements.”

Where cloud is permitted, the company leverages hybrid and multicloud architectures for resilience and scalability. Its leaders maintain local infrastructure in countries whose regulations require it, such as EU data centers for GDPR and US-based systems for FAA requirements, Tamošaitis says. “It adds complexity, but it’s non-negotiable,” he admits.

To stay compliant across borders, Fernández says they had to impose “absolute standardization” at the infrastructure level. “For example, to ensure that our audit logs and clinical records remain forensically valid across geographies, we configured our entire electronic health record system to run strictly on Pacific time,” he notes. “No matter where a provider logs in from, the infrastructure standardizes chronological data into a single source of truth. We’ve also fully adopted a zero-trust architecture, assuming every local network our staff uses is inherently compromised.”
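The normalization step Fernández describes is straightforward with Python’s standard zoneinfo module. This sketch (the event shapes and zones are illustrative assumptions, not Cerevity’s actual system) converts login timestamps recorded in any region into a single canonical Pacific-time trail, so event ordering stays unambiguous across geographies:

```python
# Sketch: normalize audit-log timestamps from many local timezones into
# one canonical zone. Zones and field shapes are illustrative only.
from datetime import datetime
from zoneinfo import ZoneInfo

CANONICAL = ZoneInfo("America/Los_Angeles")  # single source of truth

def normalize(event_time: datetime, source_tz: str) -> datetime:
    """Attach the source timezone to a naive timestamp, then convert
    it to the canonical zone."""
    aware = event_time.replace(tzinfo=ZoneInfo(source_tz))
    return aware.astimezone(CANONICAL)

# Two logins at the "same" wall-clock time in different regions sort
# correctly once both are expressed in the canonical zone:
ny = normalize(datetime(2026, 5, 12, 9, 0), "America/New_York")
berlin = normalize(datetime(2026, 5, 12, 9, 0), "Europe/Berlin")
print(ny.isoformat())      # 2026-05-12T06:00:00-07:00
print(berlin.isoformat())  # 2026-05-12T00:00:00-07:00
```

Note that `replace(tzinfo=...)` is safe with zoneinfo objects (unlike the older pytz library, which required `localize`), and `astimezone` handles daylight-saving offsets per the IANA tz database.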

Compliance is also driving FL Technics to invest heavily in training its IT teams, and in strengthening cybersecurity for critical infrastructure environments and digital integration in operationally intense, engineering-heavy contexts. “Technical tools matter, but you need people who understand the domain well enough to make the right calls under pressure,” Tamošaitis says.

And for companies like FL Technics facing cross-border acquisitions, Tamošaitis says IT must do its due diligence up front and plan for the long haul. “It’s essential to have an integration roadmap planned in advance. […] Budget for 12 to 18 months of parallel systems. The cost of maintaining that redundancy is far lower than the cost of operational disruption if you rush the transition,” he notes.

Diseñe pensando en la flexibilidad y tenga en cuenta a su personal

Al preguntarles cómo pueden los responsables de TI afrontar las realidades globales actuales, los líderes y expertos en TI ofrecieron los siguientes consejos.

Modularity is key. Tamošaitis advises IT leaders to design for modularity from the start, given that regulations vary, geopolitical realities shift, and new markets bring new requirements. "IT architectures that can be reconfigured quickly are a genuine competitive advantage. Rigid, monolithic systems become a liability the moment the external environment changes," he says.

Reassure the board on resilience. Maintaining resilience in times of geopolitical uncertainty is a top concern for boards. Especially where critical infrastructure is involved, boards need to know that IT can keep operations running through a severe cyberattack, Tamošaitis says. "That conversation has to happen at the highest level, and budgets need to reflect what's at stake, not just benchmarks against competitors."

Streamline decision-making and reduce concentration risk. Forrester's Moccia says it's a good idea to have a ready-to-use, always-on prioritization process that lets you quickly deprioritize anything "below the line" that can wait. Moccia also recommends reducing concentration risk across vendors and platforms, and continuing to look for areas of possible consolidation and contracts that don't need to be renewed.

Get ahead of the global compliance challenge. Rosenfeld advises global organizations to stop treating global compliance as a legal department problem that occasionally touches IT. "It belongs to IT: the data flows, the access controls, the residency requirements, and so on," Rosenfeld stresses. "These are technical decisions with legal consequences, and if you wait for legal to tell you what to build, you're already behind. Get ahead of it, map where your data actually goes, and own that conversation."

Invest in talent. On the people side, invest in specialized talent and don't underestimate retention, Tamošaitis says. "Geopolitical friction and regulatory complexity require IT professionals with deep domain knowledge; people who understand aviation regulations, export controls, or data sovereignty, not just the technology. That talent is already scarce and will only become scarcer. Build training programs, career paths, and the kind of environment those people want to stay in."

Leadership matters. While you reassure your boards, keep your employees informed as well. Moccia says leaders should remain transparent and visible to the entire IT organization, sharing senior leadership's perspective on how volatility affects the business and what leadership is doing to minimize it.

Don't neglect personal well-being. Moccia also advises IT leaders not to forget self-care. "Any senior executive's job is intense and relentless. Volatility is just another variant of the change and chaos in their day-to-day, so keeping personal stress levels in check is more important than ever, whatever form that takes for the CIO personally."

ServiceNow’s AI control tower offers hazy view of spend

IT budgeting has gotten a lot trickier as vendors begin to adjust their pricing to include variable charges for agentic AI usage in addition to seat-based licensing fees.

A case in point is the licensing model for ServiceNow, which has introduced usage-based pricing for its AI components. As part of its AI transformation, the company last week announced Action Fabric, part of its AI Control Tower, which meters agent usage in "assists" and charges for it on top of customers' subscription fees.

Customers receive a set number of assists with their subscription and can choose to pay for more if they need them, said ServiceNow COO Amit Zavery.

The number of assists used in any given agentic interaction varies, however, making it difficult for users to predict their usage, and therefore their bills.

This model is no surprise to industry watchers. “As AI agents begin to move into the heart of the enterprise, we’re moving beyond the age of SaaS into a far more granular era where every last detail of a given workload is monitored, metered, and monetized,” said Carmi Levy, independent technology analyst.

This may not sit well with organizations. “For CIOs and other senior leaders making the strategic decisions and approving the budget, the prospect of paying additional usage-based fees on top of existing subscriptions, and then having to closely monitor usage and spend as AI adoption spreads across the organization, could be an additional irritant as organizations as a whole look for ways to trim spending,” he said. 

ServiceNow’s model is similar to those introduced by SAP and Salesforce, said Info-Tech Research Group advisory fellow Scott Bickley: a base license with entitlement to some level of usage, then consumption-based charging at the data layer and at the AI layer. “With ServiceNow, they’re going to now break out levels of AI capabilities, and each tier will encompass a certain level of functionality, and they each have different levels of capabilities attached to them, and I’m sure different pricing attached to them.”

This will not make CFOs happy, he said; they don’t like variability in pricing, nor unpredictable expenses in the monthly budget. And there’s the potential for things to go expensively wrong if, for example, an autonomous agent goes into a retry loop when its first attempt to perform a task fails, gobbling assist credits with each iteration.
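The retry-loop risk Bickley describes is easy to reason about with a toy budget guard. Everything below is hypothetical; "assists" here is just a counter, not ServiceNow's actual metering API, but the sketch shows why an uncapped retry policy and usage-based pricing are a dangerous combination, and what a hard cap looks like.

```python
# Hypothetical cost guard for an autonomous agent. One "assist" is charged
# per attempt; both retries and total spend are capped.
class AssistBudgetExceeded(Exception):
    pass

def run_with_budget(task, max_assists: int, max_retries: int) -> int:
    """Attempt a task, charging one assist per attempt, with hard caps."""
    spent = 0
    for attempt in range(max_retries + 1):
        if spent >= max_assists:
            raise AssistBudgetExceeded(f"spent {spent} assists")
        spent += 1  # each attempt consumes one metered assist
        if task(attempt):
            return spent  # success: total assists consumed
    raise AssistBudgetExceeded(f"gave up after {spent} assists")

# A task that fails twice before succeeding costs three assists.
flaky = lambda attempt: attempt >= 2
print(run_with_budget(flaky, max_assists=10, max_retries=5))  # 3
```

Without the caps, a persistently failing task would burn one credit per iteration indefinitely, which is exactly the scenario CFOs fear.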

Adoption of the service, he said, will depend on “the ability to decipher the pricing model and match it to business use cases, so that you can accurately predict usage based on what you want to accomplish as an outcome.” If enterprises can’t predict costs and maintain budget versus value over time, they won’t stick with it, he said.

This means that customers now need to pay particular attention to their contracts and find that “line of demarcation where the meter starts running,” Bickley said, noting that historically, ServiceNow’s licensing metrics have lacked clarity on what falls under eligible objects or how things are metered.

“If you can’t manage and predict and build the capabilities to see through what these consumption metrics are going to cost you, then you need to have very flexible contract commercial terms that can help buffer the unanticipated upticks in usage,” he said. “But you really need to have a combination of FinOps maturity, the ability to build clarity in your use cases for AI, the ability to then estimate consumption against those use cases, and a very detailed understanding of the license or subscription metrics.”
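Bickley's point about estimating consumption against use cases comes down to simple arithmetic. All figures, rates, and field names below are invented for illustration, not ServiceNow's actual pricing; the exercise is to translate each AI use case into expected assists per month and compare the total against the subscription's included allowance.

```python
# Illustrative FinOps estimate; every number here is a hypothetical input,
# not a real ServiceNow entitlement or rate.
use_cases = {
    "ticket triage":      {"runs_per_month": 4000, "assists_per_run": 2},
    "change summaries":   {"runs_per_month": 600,  "assists_per_run": 5},
    "knowledge drafting": {"runs_per_month": 300,  "assists_per_run": 8},
}
included_assists = 10_000        # assumed subscription entitlement
price_per_extra_assist = 0.05    # assumed overage rate, in dollars

total = sum(u["runs_per_month"] * u["assists_per_run"]
            for u in use_cases.values())
overage = max(0, total - included_assists)
print(f"forecast: {total} assists/month, "
      f"overage cost ${overage * price_per_extra_assist:.2f}")
```

However rough, a per-use-case forecast like this is what turns an open-ended meter into a line item a CFO can budget against.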

Now it has become imperative for customers to take the time to extract those details from ServiceNow, appending them to the contract in a way that’s understandable, he advised. “And if you can’t get them to that point, then you should not be signing the contract.”

Bickley said he feels some sympathy for ServiceNow: “They’re dealing with a moving target themselves. They have to pay for this consumption to the LLM providers as well, but they need to do better in terms of simplifying it for their customers.”
