
Column | Competitiveness beyond technology: what makes an exceptional FDE different

May 7, 2026, 03:25

Despite unprecedented levels of AI investment, most enterprises have hit an "integration wall." The technology works well in isolation, and proofs of concept (PoCs) deliver impressive results.

But the moment they try to apply AI in production, where it touches real customers, affects revenue, and creates real risk, enterprises hesitate. And for good reason: AI systems are inherently non-deterministic.

Unlike conventional software, which behaves predictably, large language models (LLMs) can produce unexpected output. They may deliver wrong information with confidence, "hallucinate" facts that do not exist, or respond in a tone that clashes with the brand. For risk-sensitive enterprises, this uncertainty becomes a barrier that no amount of technical sophistication can overcome.

This pattern shows up across industries. Looking back on my experience helping enterprises adopt AI, I have repeatedly seen organizations build impressive AI demos and then fail to get past the integration stage. The technology was ready and the business case was sound, but the organization's risk tolerance could not keep up, and no one knew how to bridge the gap between what AI can do in an experimental setting and what is permissible in production. This is where you arrive at the conclusion that the real problem is not the technology but the people who apply it.

A few months ago, I joined Andela, an IT talent platform. From this vantage point, the capability enterprises need becomes clearer: the forward deployed engineer, or FDE. The term was coined by data analytics company Palantir to describe the customer-facing technical staff essential to deploying its platform inside government agencies and enterprises. More recently, leading AI labs, hyperscalers, and even startups have adopted the model. OpenAI, for example, deploys experienced FDEs to accelerate platform adoption among high-value customers.

There is a point CIOs must understand, however. So far, this capability has been used mostly to drive the growth strategies of AI platform companies. To get past the integration wall and put AI into real operations, enterprises need to build and develop FDE capability in-house.

What makes an FDE

The defining trait of an FDE is the ability to connect technical solutions to business outcomes in ways conventional engineers cannot. An FDE is not simply a developer who builds systems; they are closer to a "translator" operating at the intersection of engineering, architecture, and business strategy.

They act as expedition leaders, guiding the organization through the uncharted territory of generative AI. In particular, they understand clearly that deploying AI to production is not merely a technical problem but a risk management problem. Earning the organization's trust through appropriate guardrails, monitoring, and risk-control strategies is therefore essential.

In 15 years at Google Cloud and Andela, I have met only a handful of people with all of these capabilities. What sets them apart is not a single skill but the combination of four core competencies.

First is problem-solving ability and judgment. AI output is typically 80 to 90% accurate, but the remaining 10 to 20% can contain errors that are all the more dangerous: results that look plausible but are wrong, or that are so needlessly complex they are impractical to apply.

A strong FDE has the contextual understanding to spot these errors. They quickly identify low-quality AI output and recommendations that ignore critical business constraints. Most importantly, they can design systems that control these risks: output validation, human-in-the-loop processes, and deterministic fallback responses that kick in when the model is uncertain. This capability is the decisive difference between a merely impressive demo and an operational system that executives will actually approve for deployment.
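The guarded pattern described here can be sketched in a few lines. Everything in this sketch is illustrative rather than any specific product's API: `ModelReply`, its confidence score, the threshold, and the banned-term list are stand-ins for whatever validation signals a real system would use.

```python
from dataclasses import dataclass

# Illustrative values; real thresholds and fallbacks depend on the use case.
CONFIDENCE_FLOOR = 0.7
FALLBACK_ANSWER = "I can't answer that reliably; routing to a human agent."

@dataclass
class ModelReply:
    text: str
    confidence: float  # assumed to come from the model or a separate verifier

def validate(reply: ModelReply, banned_terms: list[str]) -> bool:
    """Output validation: reject low-confidence or policy-violating replies."""
    if reply.confidence < CONFIDENCE_FLOOR:
        return False
    return not any(term in reply.text.lower() for term in banned_terms)

def guarded_answer(reply: ModelReply, banned_terms: list[str]) -> str:
    """Return the model's text only if it passes validation; otherwise
    fall back to a deterministic response (or escalate to a human)."""
    if validate(reply, banned_terms):
        return reply.text
    return FALLBACK_ANSWER
```

The point is not the specific checks but the shape: every model output passes through a gate, and the system always has a deterministic answer when the gate closes.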

Second is solution engineering and design. An FDE must translate business requirements into technical architecture while balancing real-world trade-offs across cost, performance, latency, and scalability. For some use cases, a small language model with low inference cost will outperform the latest large model, and the FDE must be able to justify that choice in economic terms, not just in terms of technical polish.

Above all, they prioritize simplicity. The fastest way over the integration wall usually starts with a minimum viable product (MVP) with appropriate guardrails that solves 80% of the problem. A solution that can realistically be managed beats a complex system that tries to cover every edge case and ends up creating uncontrollable risk.

Third is customer and stakeholder management. The FDE serves as the primary technical interface with the business, explaining how the technology works to executives with little AI experience. What those executives actually care about, though, is not the technology itself but risk, timelines, and business impact.

This is where the FDE earns the organization's trust and lays the groundwork for scaling AI into production. The FDE translates the non-deterministic nature of AI into a risk framework executives can understand: what the blast radius is if something goes wrong, what monitoring is in place, and what the rollback plan looks like. By making AI's uncertainty visible and manageable, this process plays the pivotal role of getting risk-sensitive decision-makers to accept it.

Fourth is strategic alignment. An FDE ties AI implementations directly to measurable business outcomes, advising on which opportunities can deliver real results and which are technically interesting but carry too much risk for the value they return.

They also consider not just initial adoption but operating costs and long-term maintenance. When this business-first perspective is combined with the ability to assess risk objectively, the FDE becomes something more than a merely excellent software engineer.

People with all four of these competencies share a common profile. Most started their careers in technical roles such as software development and likely have a computer science education. They then built expertise in a particular industry and developed the flexibility and curiosity to keep learning in a fast-changing environment. Because this combination is so rare, they tend to be concentrated at large technology companies and command high compensation.

The CIO's dilemma

If FDEs are such a scarce resource, what options does a CIO have?

One option is to wait for the talent market to catch up, but that takes considerable time. Meanwhile, with every month an AI project sits stalled at the integration wall, the gap widens between companies creating real value and those still showing demos to the board. AI's non-deterministic nature is not going away; if anything, as models grow more capable, the potential for unpredictable behavior may increase. In the end, the companies that succeed will not be those waiting for the technology to become risk-free, but those with the in-house capability to put AI into production responsibly and confidently.

The alternative is to develop FDEs internally. This is harder than hiring, but it is the only scalable answer. Fortunately, FDE skills can be developed systematically, given the right talent pool and focused, structured training. Andela has built a program that converts experienced engineers into FDEs, and has accumulated an effective methodology along the way.

Building an FDE talent pool

Start by selecting the right candidates. Not every great engineer can become an FDE. Look for seasoned software engineers whose curiosity extends beyond the technical domain: people with solid development fundamentals plus experience in data science and cloud architecture. Industry knowledge in particular accelerates the transition. Someone with experience in healthcare regulation or financial risk frameworks will grow far faster than someone learning the domain from scratch.

The technical curriculum has three stages. The foundational stage builds a baseline understanding of AI and machine learning: LLM concepts, prompt design techniques, Python proficiency, how tokens work, and basic agent architectures. These are table stakes.

The intermediate stage covers hands-on tooling, mapped to the "three roles" an FDE plays:

  • First is RAG (retrieval-augmented generation): connecting enterprise data to models accurately and reliably.
  • Second is agentic AI: designing multi-step reasoning and workflows with appropriate controls and verification steps.
  • Third is production readiness: deploying solutions with monitoring, guardrails, and incident-response processes in place.

These skills are acquired by building and deploying systems that account for real production risk.
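The RAG role above can be illustrated with a minimal, self-contained sketch. The keyword-overlap scorer and the tiny `DOCS` corpus are toy stand-ins: a real system would use vector embeddings for retrieval and pass the assembled prompt to an LLM.

```python
# Minimal sketch of the RAG pattern: retrieve the most relevant documents,
# then ground the model's answer in them.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Shift swaps must be approved by a manager.",
    "All invoices are due within 30 days of issue.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words.
    A real retriever would compare embedding vectors instead."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When are invoices due?", DOCS)
```

Constraining the model to retrieved enterprise data is what makes this role central to accuracy and reliability: the model answers from the context, not from memory.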

The advanced stage covers deeper topics such as model internals and fine-tuning. This translates into the ability to solve problems when standard approaches fail: responding to novel situations in real time rather than simply following a playbook. It also requires enough expertise to explain to stakeholders such as the CISO why a particular approach is safe.

Non-technical skills matter as much as technical ones. An FDE must be able to reframe technology-centric conversations around business problems and risk mitigation. High-stakes stakeholder management is also essential, covering sensitive issues such as scope changes, schedule slips, and the uncertainty of non-deterministic systems. Above all comes judgment: making sound decisions under uncertainty and earning the confidence of executives who must accept a new class of technology risk.

It is also important to set realistic expectations for both the organization and the candidates. Even with a rigorous program, not everyone will make the transition. But even a small number of FDEs can dramatically accelerate the journey over the integration wall. In practice, a single FDE embedded in a business unit can outperform many conventional engineers working in isolation without business context, because the FDE understands that the real problem is not the technology.

Where the AI era will be decided

Companies that build FDE capability can get past the integration wall. They turn impressive demos into operational systems that create real value, and use those wins to expand the organization's trust step by step.

Companies that do not are likely to stall, with little to show for their AI investments, while competitors willing to take on more risk capture the market.

When I joined Andela, I believed AI would not fully replace human capability. I still believe that. But humans must evolve too, and the FDE is a model of what that evolution looks like: deep technical understanding, business acumen, risk management skill, and the flexibility to keep adapting to constant change.

CIOs who invest in this capability now will do more than keep pace with the technology; they will be the ones who finally realize the enterprise AI value that has so far proved elusive.
dl-ciokorea@foundryco.com


How UKG puts AI to work for frontline employees

May 6, 2026, 07:00

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, which account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets and our long history with the frontline workforce position us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowd sourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user would own the business context, and the developer, who owns the technology, would bring that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder, but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.




‘AI is more efficient’ is not enough reason to lay off staff, says Chinese court

May 4, 2026, 12:03

Enterprises cannot terminate employees solely to replace them with artificial intelligence, a court in China has ruled, complicating how enterprises seek to justify automation-driven layoffs.

The case involved an employee whose role was partly automated, leading to a significant pay cut and their eventual dismissal after they refused reassignment, the court document said.

“The termination grounds cited by the company did not fall under negative circumstances such as business downsizing or operational difficulties, nor did they meet the legal condition that made it ‘impossible to continue the employment contract,’” according to a Bloomberg News translation of the court’s statement about the case.

The Hangzhou Intermediate People’s Court found that AI adoption does not constitute a “major change in objective circumstances” required under Chinese labor law to end an employment contract, and that the employer’s justification failed to meet the legal threshold for termination.

The decision casts a legal light on a question many enterprises have so far treated as operational: whether AI-driven efficiency gains can directly translate into workforce reductions without additional obligations.

AI as a business case, not a legal case

At the center of the ruling is a shift in how courts may interpret AI adoption.

Rather than treating automation as an external disruption, the court framed it as a management decision, one that does not automatically transfer the burden of change onto employees.

“The court has narrowed an increasingly popular corporate shortcut—treating a voluntary automation choice as if it were an unforeseeable external shock that automatically justifies dismissal,” said Greyhound Research chief analyst Sanchit Vir Gogia.

That distinction could prove significant. If AI is seen as a strategic choice rather than an uncontrollable event, employers may need to demonstrate due process — consultation, retraining, and reasonable redeployment — before eliminating roles, he said.

The Hangzhou ruling does not set legal precedent outside China, but its logic is likely to travel.

“The outcomes will not bind courts in markets like India, the UK, or the US,” Gogia said. “But worker-side counsel now have a sharper way to frame the argument—that AI adoption is a choice, and companies must show their work before passing the cost to employees.”

In India, for example, existing labor frameworks already require notice and compensation in cases of cut-backs linked to technological change. The Chinese ruling may reinforce interpretations that such restructuring must follow established statutory processes.

In Europe and the UK, where consultation requirements and limits on automated decision-making already exist, the evidentiary burden on employers could also increase.

A governance issue, not just a cost lever

For CIOs, the ruling underscores a broader shift: AI-led transformation is moving beyond a technology and efficiency discussion into governance.

“This ruling could set precedent as a more logical framework for global markets to challenge AI-driven layoffs,” said Neil Shah, VP for research and partner at Counterpoint Research.

He added that enterprises may need to rethink how workforce strategies align with AI adoption.

“The role of human resources will have to pivot from hiring and firing to more focus on reassigning, training and reskilling in the AI era,” Shah said.

That shift comes as enterprises continue to accelerate AI investments, often alongside workforce restructuring. Recent moves by companies such as Oracle, Microsoft, and Tata Consultancy Services reflect a broader trend of aligning headcount with AI-led operating models, developments that CIO audiences have been tracking closely.

The emerging risk: documentation and narrative gaps

Beyond legal doctrine, the ruling highlights a more immediate enterprise risk—consistency.

Gogia said the burden created by such cases is less about economics and more about governance, particularly around how companies document workforce decisions tied to AI adoption.

In practice, that means employers may need to clearly demonstrate why a role was eliminated, what alternatives were considered, and whether efforts such as redeployment or retraining were meaningfully explored before termination. Just as importantly, companies may need to ensure that their internal reasoning aligns with external messaging.

That alignment could become a point of scrutiny. Organizations that emphasize AI-driven productivity gains in investor or public communications while attributing layoffs internally to restructuring or skill mismatch may find those narratives challenged if disputes arise.

Shah said that while companies may attempt to soften or obscure AI’s role in workforce changes, regulatory attention is likely to increase.

“There will always be loopholes… but regulators will have to stay ahead to factor in the economic impact on workers and ensure robust severance,” he said.

What changes and what doesn’t

The ruling is unlikely to slow enterprise AI adoption. Investment in automation, generative AI, and intelligent workflows continues to accelerate across industries.

What may change is how those decisions are executed and communicated.

Enterprises could become more cautious about explicitly linking layoffs to AI, instead framing workforce changes in terms of restructuring or capability shifts. But that approach carries its own risks if documentation and public messaging diverge.

For CIOs, the implication is clear: AI initiatives can no longer be evaluated solely on efficiency gains. They now intersect directly with legal exposure, workforce strategy, and corporate governance.

As Gogia put it, the next phase of disputes will not focus on whether AI has economic value, but on whether companies can demonstrate that their process for adopting it and managing its workforce impact was fair.


You can’t train your way out of the AI skills gap

April 30, 2026, 09:00

Most enterprises I talk to say they have an AI skills gap. That sounds plausible right up until you look at what companies are doing. They are spending millions on copilots, launching AI academies, hiring chief AI officers and rolling out internal training at scale. Yet for all that activity, most organizations still do not move faster, decide better or operate in fundamentally new ways. That is the real tension at the center of enterprise AI right now: Companies think they have a skills problem, but what they really have is a work design problem. I have seen this pattern repeatedly. The organizations that get real value from AI are usually not the ones that train the fastest. They are the ones who redesign work sooner.

The AI skills gap is real, but it is not the whole story. In many enterprises, the bigger failure is that AI is being layered onto jobs, workflows and operating models built for a pre-AI world. People are learning new tools, then being sent back into the same meetings, approvals, handoffs and reporting structures that made work slow in the first place. Training may improve local productivity. It does not automatically redesign how the business runs.

That is why so many AI programs feel busy without becoming transformative. The organization can point to courses completed, licenses deployed and pilots launched, but the underlying system still behaves the same way. Decision latency stays high. Bottlenecks remain intact. Managers absorb more complexity, not less. Employees become faster in small ways while the enterprise remains slow in all the ways that matter.

This gap is showing up clearly in the research. Deloitte’s 2026 State of AI in the Enterprise report says insufficient worker skills are the biggest barrier to integrating AI into existing workflows. Yet the most common organizational response is education and reskilling, not role or workflow redesign. In fact, Deloitte explicitly notes that companies are much more focused on AI fluency than on re-architecting how work is done. The same tension appears in Wharton’s 2025 AI Adoption Report: executive sponsorship is rising, chief AI officer roles are now present in 6 out of 10 enterprises, and capability building is still falling short of ambition. The signal is hard to miss. Enterprises know AI matters. Many are investing. But they are still treating adoption as a learning problem when it is really an operating model problem.

Training creates users. Redesign creates advantage

Training matters. Every enterprise needs a baseline level of AI fluency. People need to understand where AI is strong, where it is weak and how to use it responsibly. They need to know how to challenge outputs, apply judgment and separate acceleration from automation. None of that is optional anymore.

But training alone does not create an enterprise advantage. At best, it creates pockets of local efficiency.

An individual contributor may draft faster. A manager may synthesize information faster. An analyst may produce a first pass in less time. Those gains are real, but they do not automatically translate into better operating performance. In many organizations, the efficiency never reaches the P&L. It gets trapped inside legacy workflows, approval layers, meeting culture and fragmented decision rights.

That is the real issue. AI may already be improving work at the individual level, while the enterprise itself remains structurally unchanged. The Writer enterprise AI adoption survey found that executives see AI super-users as at least five times more productive than their peers, yet only 29% of organizations report significant ROI from generative AI. The contrast is telling. The constraint is no longer whether employees can use AI. The constraint is whether the organization is designed to convert those gains into faster decisions, shorter cycle times, higher throughput and better business outcomes.

This is where many AI initiatives quietly stall. Leaders can point to adoption metrics, training completion rates and growing license counts. Employees can honestly say they are using the tools. But the business still does not feel materially more responsive. Revenue does not move faster. Product cycles do not compress enough. Decision latency remains high. Management complexity increases instead of falling.

That is why I believe the wrong question is, “How do we train our people on AI?”

The better question is, “Which work should humans continue to own, which work should AI accelerate and which workflows should be redesigned entirely now that AI exists?”

That is the shift that matters. It moves the conversation from individual capability to institutional performance. It moves AI from a training initiative to an operating model decision.

And that is where CIOs must lead. The organizations creating advantages with AI are not simply teaching employees new tools. They are redesigning roles, workflows and management systems so that individual productivity gains become enterprise-level outcomes. Companies do not fall behind because AI arrived. They fall behind because they kept the same work design after it did.

The real AI shift is separating judgment from execution

The most important AI transformations I have seen do not start with tools. They start with a harder leadership discipline: Separating judgment work from execution work.

Once AI can reliably handle portions of execution, the role itself must be reconsidered. Not eliminated. Reconsidered. The question is no longer just how to make people faster inside the job as it exists. The question is whether the job was designed correctly in the first place.

That is where the real work begins. Leaders must deconstruct work below the level of titles and org charts. This is a much harder challenge than deploying a copilot. It forces decisions about spans of control, management layers, performance expectations and career paths. It changes what excellence looks like across the enterprise. And it changes what companies should reward.

If AI takes on more drafting, synthesis, retrieval and coordination, then the value of human work moves up the stack. The ability to frame the problem, define quality and make accountable decisions becomes more valuable than manually producing every intermediate step.

This is also why so many employees feel uneasy even when leaders talk about AI in optimistic terms. They are being told to use new tools, but not what the organization will still need uniquely from them. They are hearing about productivity, but not about role evolution. Training without redesign does not feel like empowerment. It feels like a shot across the bow of their career.

That is why the most important workforce conversation in AI is not about tool usage. It is about role clarity. People need to understand where human judgment still creates value, where AI should accelerate execution and how their path to relevance and mastery changes as a result.

That is not an HR side discussion. It is one of the central leadership tasks of the AI era.

CIOs must lead the redesign, not just the rollout

This is where CIOs have a larger role than many companies still recognize.

AI adoption is often framed as a cross-functional initiative, and of course it is. But when AI moves from experimentation into execution, the CIO is often the only executive with a clear view of the full system: Workflows, dependencies, security, data architecture, control points, operational friction and how work moves across the enterprise. That perspective matters because the next phase of AI is not about individual productivity. It is about institutional redesign.

That means CIOs cannot limit their role to rollout, enablement and tool selection. They must help redesign how the business operates.

In my view, that starts with three questions:

  1. Where is AI simply making existing work cheaper or faster, and where could it allow the business to operate differently? Those are not the same thing.
  2. Which roles need to be rebuilt? Not renamed. Rebuilt. If analysts spend less time gathering information, what should the organization expect more of in return? If leadership cannot answer that, it is not redesigning work. It is just hoping individual productivity turns into enterprise value on its own.
  3. What new management disciplines does AI require? As AI becomes part of execution, leaders need clearer standards for validation, accountability and quality control. AI can compress execution, but it can also multiply errors at scale. That raises the premium on operating discipline, not lowers it.

This is why I think the skills-gap narrative can be misleading. It invites leaders to believe the problem is mostly educational, as if enough courses, certifications and training hours will somehow carry the organization into the future. They will not. They are necessary, but they are nowhere near sufficient.

The companies that pull ahead will treat AI as a redesign moment. They will rethink work at the level of tasks, decisions, teams and operating models. They will create roles with more judgment and less administrative drag. They will redesign career paths, so people are not just trained on AI, but advanced through the responsible use of it. And they will measure success not only through adoption, but through decision velocity, throughput, exception rates and business outcomes.

Most of all, they will stop asking employees to bolt AI onto broken systems.

That is the real opportunity in front of CIOs. Not just to deploy the tools. Not just to sponsor training. It is to help redesign the enterprise around a new division of labor between humans and machines.

The AI skills gap is real. But education alone will not close it.

Only better work design will.

This article is published as part of the Foundry Expert Contributor Network.


You’ve got the F1 car. Now where’s your driver?

30 April 2026, 06:00

For years, the AI conversation has centered on tools — what can this technology do, and how fast can we deploy it? That chapter is closing. Nearly every organization now has access to powerful AI, and executives across industries say the bigger constraint is finding people who know how to turn those tools into outcomes. 

Think of it this way: An F1 car is fast, impressive and loaded with potential. But without a skilled driver behind the wheel, it’s just an expensive machine sitting on the track. You need someone who understands the course, who knows when to brake and when to accelerate and who can navigate risk at high speed. 

The playing field has leveled. Talent is the new edge. 

In KPMG’s Q4 2025 AI Quarterly Pulse Survey, leaders identified AI prompt engineers (71%), AI performance analysts (59%), and AI trainers/data curators as the most anticipated emerging roles for 2026 — a clear signal that the real bottleneck is talent, not technology. 

That gap is only widening. Access to AI tools is no longer a differentiator. Neither is implementation. What separates the leaders from the rest is something harder to scale — the expertise, curiosity and hands-on engagement of the people putting those tools to work every day. 

Consider a senior corporate tax planning adviser who started out skeptical about AI. A year ago, her team would manually reconcile large data sets at quarter end, spending late nights cleaning and checking numbers. Today, she and a small group of “AI hobbyists” on her team have built and refined prompts, models and workflows that automate much of that effort. Her role is less about grinding through spreadsheets and more about reviewing anomalies flagged by AI, probing the “why” behind the patterns and translating those insights into client-ready strategies. She’s still doing tax — but the value she creates now comes from judgment and interpretation, not manual effort. 

The organizations pulling ahead aren’t the ones with the biggest tech budgets — they’re the ones whose people have become hobbyists like her. They’re tinkering. They’re documenting what works and what doesn’t. Building repeatable patterns, sharing prompts and staying relentlessly curious. 

This matters because the technology isn’t standing still. Models are improving at a pace that makes static skills obsolete almost immediately. The professionals who treat AI fluency as a one-time training exercise will fall behind. The ones who approach it as a lifelong practice — fingers on keys, constantly experimenting — will compound their advantage over time. They’ll be the ones who can quickly evaluate new models, identify where they fit and redesign processes to capture value before competitors do. 

Leaders are starting to price that curiosity into the market: 76% say they would pay up to 10% more for candidates with strong AI skills, and 22% would pay 11-15% more. 

Now get out of the chat window

Those premiums don’t reward people who dabble — they reward people who go deep. And going deep means moving beyond surface-level use. It’s not enough to ask a chatbot questions and review its answers. The real value creation starts when you break out of that box — when you connect AI to your data, automate the insights that used to take days and build systems that flag anomalies before they become problems. It’s when AI moves from being a novelty in someone’s browser to being embedded in workflows, controls and decision-making. 

That’s where platform thinking comes in. The next wave of value won’t come from isolated tools; it will come from integrated systems that bring together data, AI capabilities, governance and human expertise in one place. When organizations unify their data, models and workflows on a common platform, they make it easier for people to experiment, share what works and scale good ideas quickly and safely across the enterprise. At KPMG, we’re building that kind of environment through Digital Gateway — but the principle applies universally. The platform is only as good as the people on it. 

We see the same shift inside major corporations. An early-career analyst at a large financial services firm used to spend most of his time copying data between systems, preparing slide decks and running one-off analyses requested by senior leaders. After his organization rolled out an AI-enabled platform, his day looks very different. He now configures AI agents to monitor risk indicators across portfolios, designs prompt templates that business users can reuse and works with compliance to ensure outputs meet regulatory expectations. He hasn’t left the world of analysis — he’s moved up the value chain, from producing numbers to orchestrating the system that produces and explains them. 

The bottom line 

If you’re wondering where to start, the answer is simple: Get behind the wheel. Encourage your teams to interact with every model, every tool, every capability they can access. Give them permission to experiment within guardrails and reward the behaviors that lead to better client service, sharper insights and smarter risk management. Foster a culture where curiosity isn’t a nice-to-have — it’s a performance expectation. 

Because at the end of the day, it’s not the car that wins the race. It’s the driver. And in the AI era, talent — curious, empowered and in the driver’s seat — is the only sustainable advantage. 

This article is published as part of the Foundry Expert Contributor Network.


Why I, the CEO, am personally building our AI strategy

29 April 2026, 07:00

Last year, I did something that raised a few eyebrows on my leadership team: I made myself the de facto product manager for our AI strategy. Not because we don’t have talented product people — we do. But because I’m convinced that AI is one of those rare inflection points where the CEO needs to be hands-on, not just “aligned.”

I’ve been experimenting with AI tools since ChatGPT landed in late 2022. Unlike other tech hype cycles — I remember an investor asking me about our cryptocurrency strategy back in 2021, and my answer was “none” — this one felt different from the start. Over the 2024 holidays, I went deep: prototyping with Cursor, brainstorming product ideas with ChatGPT, using voice mode to dictate rough drafts while away from my desk. What I took away was a conviction that AI was no longer a “keep an eye on it” item. It was clear then that AI was a right-now, every-department, every-person transformation.

Through 2025, AI became a core part of all my workflows. Now, AI is an organizational accelerator across the board at NetBox Labs. Personally, I’m shipping more product and technology than I have in over a decade — not because I’m working harder, but because AI tooling has fundamentally changed what one person can do.

And transformations like that can’t be managed by committee.

Why AI is too important to delegate

Here’s what I’ve learned watching strategy play out across startups and enterprises: the things that get delegated early tend to get domesticated early. They get scoped into a neat workstream, assigned a quarterly OKR and slowly lose their disruptive potential. AI doesn’t fit in a box right now and forcing it into one is a mistake.

But this goes beyond product management. I’m not just PM’ing our AI strategy — I’m directly building products and features, driving internal experimentation and pushing AI into every corner of how we operate. AI tooling and capabilities are evolving too fast, in too many dimensions, to delegate. The landscape shifts week to week: new models, new capabilities, new paradigms for how humans and AI work together.

As a CEO, you need to develop your own feel for the impact of AI — what it can and can’t do today, where it’s headed, how it changes the work — and bring those learnings across teams. You can’t get that from a briefing doc.

I sit at the intersection of product, engineering, go-to-market and operations. I see the connections that individual teams can’t — where an AI experiment in marketing could reshape how we think about our product onboarding, or where an engineering prototype could unlock a whole new customer conversation. When I’m personally involved, building with these tools myself, those dots get connected faster.

This isn’t about ego or micromanagement. It’s about pattern recognition at the speed the moment demands. The companies that will win are the ones whose leadership treats AI not as a departmental tool but as a strategic capability.

Speed now matters more than perfection

If there’s one lesson the last two years have drilled into me, it’s this: the cycle time for strategy has compressed from years to months, or even weeks. The AI tools available today are meaningfully better than what existed six months ago. The tools six months from now will make today’s tools look primitive. Waiting for the “right” moment to build a comprehensive AI strategy is a guaranteed way to fall behind.

At NetBox Labs, we’ve adopted a bias toward shipping. Since 2024, we’ve shipped many AI features — some of which have gained rapid adoption, like NetBox Copilot or the NetBox MCP server, and others that haven’t resonated or were overtaken by the evolving AI landscape. That’s fine. The point is to stake the ground, learn and iterate — not sit in a conference room perfecting a roadmap that’ll be obsolete by the time you execute it. Recently, one of our customers asked us for better “best practices” content for working with NetBox’s APIs. In less than a day, we shipped a set of skills for AI agents that has seen quick interest and adoption. That kind of cycle time — customer request to shipped product in hours — is what AI enables.

This same principle applies internally. I tell my team: don’t wait until you’ve mastered a tool to start using it. Use it now, stumble, figure out what works and share what you learn. There are no “rules” for how AI fits into workflows yet. We’re all learning as we go, and the patterns will look different by this time next year.

Key takeaways: How to encourage company-wide AI adoption

If you’re a CEO or senior leader thinking about how to drive AI adoption across your organization, here’s what I’ve found works.

  • Make it personal, starting at the top. I don’t just endorse AI use — I show my work. When I build a feature using Claude Code or prototype a product idea by feeding requirements into Claude, I talk about it openly. I even presented my use cases at our recent company offsite. Leaders using AI tools visibly and vocally give everyone else permission to do the same.
  • Reframe the culture around AI. One thing I noticed early on is that people feel sheepish about using AI, like it’s cheating. We had to actively dismantle that mindset and frankly, are still working on dispelling it. Using a calculator isn’t cheating at math. Our job is to get stuff done, and teams that make effective use of these tools will outperform teams that don’t. I want to hear how people are using AI and how it’s helped them succeed — not whispered confessions, but demos and stories shared in team workshops.
  • Lower the barrier to experimentation. If someone on your team finds a tool that could accelerate their work, don’t make them write a business case. Let them spend a few dollars and a few hours trying it. You can’t be wasteful and you can’t get distracted, but don’t be shy about trying new things. The cost of a missed opportunity dwarfs the cost of a failed experiment.
  • Show people where to start. “I’m just not sure how to get started” is the most common thing I hear. So I share concrete examples from my own workflow: using agents to research and brainstorm to validate product feasibility, building detailed PRDs with Claude Code and then feeding them back in to generate implementation plans, continuing further to directly build and ship features, even analyzing data by connecting Claude Cowork with our CRM, Linear and call transcripts. These aren’t exotic use cases. They’re everyday work, done faster.
  • Carve out time and space for people to experiment. At our recent company offsite, we dedicated an entire afternoon to learning from one another in person. This helped everyone learn from “experts” within the company who could help them get started or uplevel their use case. Prioritizing this experimentation time demonstrated to the team that this truly is a company priority — not a fad.
  • Connect it to individual growth. This is something I care about deeply beyond the business case. Most of us will have careers that extend well beyond our current roles. Being AI-native is going to be a defining professional skillset. I want NetBox Labs to be a place where everyone develops that skillset — not just because it helps our company, but because it matters for their careers.

The real risk is inaction

Companies are already splitting into two camps: those that fully embrace AI and weave it into the fabric of their business, and those that fall behind. The advantages in speed, innovation and operational efficiency are creating a gap that only widens with time.

So yes, I’m building our AI strategy — hands on the keyboard, not just in the boardroom. Not forever — eventually this will be so deeply embedded in how we operate that it won’t need a dedicated champion. But right now, in this window, it needs a CEO who’s willing to prototype on a Saturday, demo an imperfect feature on a Monday and keep pushing the whole organization to move faster than feels comfortable.

If you’re a CEO in 2026, that’s the job.

This article is published as part of the Foundry Expert Contributor Network.


Shadow AI is already inside your organization. Now what?

27 April 2026, 07:00

For most CIOs, AI adoption is no longer a question of if. It is a question of how fast. While many organizations are actively rolling out approved tools and building roadmaps, a second reality is unfolding in parallel. AI is already being used across the enterprise without formal approval, without governance and often without visibility.

This is the rise of shadow AI. Unlike previous waves of shadow IT, this is not just about unsanctioned tools. It is about employees using AI to influence decisions, generate content and interact with sensitive data in ways that extend beyond traditional controls. The risk is not simply that it exists. The risk is that it exists without oversight.

Why shadow AI is spreading faster than you think

In most organizations, the growth of shadow AI is not driven by negligence. It is driven by a set of understandable and often rational decisions.

One factor is short-term cost avoidance. Employees can access powerful AI tools for little or no cost, delivering immediate productivity gains. At the same time, enterprise-grade solutions require licensing, integration and security investments. In the absence of a clear mandate, many organizations tolerate the tradeoff.

Cultural dynamics also play a significant role. Leaders are hesitant to introduce restrictions that could be perceived as limiting innovation or frustrating high-performing employees. In a competitive talent market, access to modern tools is often seen as part of the employee experience, not just a technology decision.

Governance gaps further complicate the issue. In many cases, ownership of AI is unclear. Security teams focus on risk, legal teams focus on compliance, HR considers ethical implications, and IT is expected to enable productivity. Without clear accountability, decisions stall while usage continues to grow.

The pace of change adds another layer of complexity. The AI landscape is evolving so quickly that leaders are understandably cautious about investing in governance models or platforms that may be outdated within months. This often leads to analysis paralysis, where organizations delay action while waiting for standards to mature.

At the same time, the benefits of AI are becoming increasingly visible. Employees are working faster, automating repetitive tasks and in some cases delivering higher-quality outputs. This creates what I would describe as a productivity paradox. Leaders recognize the risks but are reluctant to slow down tools that are clearly improving performance.

Finally, practical constraints cannot be ignored. IT teams are already stretched across multiple priorities, and building an AI governance model requires both time and investment. Funding for governance tools, monitoring capabilities and cross-functional oversight is not always readily available, especially when the return is framed as risk mitigation rather than revenue generation.

The result is a growing disconnect between how AI is actually being used and how it is formally managed.

The real risk is not usage. It’s invisibility.

It is important to be clear. Shadow AI is not inherently negative. In many cases, it reflects a workforce that is curious, resourceful and motivated to improve how work gets done.

The challenge is not the presence of AI. The challenge is the lack of visibility into how it is being used.

When AI operates outside of governance, several risks emerge. Sensitive data may be entered into external models without proper safeguards. Outputs may be inaccurate or biased, yet still influence decisions. Intellectual property may inadvertently be shared. Over time, these risks compound, especially as usage scales.

Industry research reinforces this concern. According to IBM, ungoverned AI systems are more likely to be breached and more costly when they are. Similarly, frameworks such as the National Institute of Standards and Technology AI Risk Management Framework emphasize the importance of governance, transparency and accountability as foundational elements of responsible AI adoption.

At the same time, the idea of banning AI is increasingly unrealistic. Employees will continue to experiment with tools that make them more effective. The question for leaders is not whether AI will be used. It is whether its use will be visible, guided and aligned to enterprise priorities.

From shadow to strategy

The goal is not to eliminate shadow AI. The goal is to bring it into the light and channel it productively.

This begins with acknowledging that employees are often ahead of formal policies. Rather than responding with strict controls, organizations should focus on creating safe and supported pathways for adoption. Providing approved, enterprise-grade tools gives employees an alternative to external platforms. Clear guidelines help define acceptable use without creating confusion. Education builds awareness of both the benefits and the risks.

Monitoring also plays a role, but it must be implemented thoughtfully. The objective is not to create a culture of surveillance. It is to understand usage patterns, identify risks early and guide behavior in a way that builds trust rather than fear.

Organizations that take this approach are better positioned to move quickly without losing control. They are shaping how AI is used, rather than reacting after issues emerge.

What CIOs should do next

Addressing shadow AI does not require a perfect or fully mature strategy from day one. It requires momentum and clarity.

Start by making the invisible visible. Even a lightweight assessment can provide valuable insight into where AI is being used and for what purposes. This does not need to be complex. The goal is to understand reality before defining policy.
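As a concrete illustration of what such a lightweight assessment might look like, the sketch below tallies how often known AI-tool domains appear in a web-proxy log export. The domain list, the `user url` log format and the `ai_usage_snapshot` helper are all assumptions for illustration, not a prescribed tool — the point is only that a first usage snapshot can be produced with very little machinery.

```python
from collections import Counter
from urllib.parse import urlparse

# Domains associated with popular AI tools (illustrative, not exhaustive).
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def ai_usage_snapshot(proxy_log_lines):
    """Summarize AI-tool traffic from 'user url' proxy-log lines.

    Returns, per tool, the number of requests and distinct users seen.
    """
    request_counts = Counter()
    users_per_tool = {}
    for line in proxy_log_lines:
        user, _, url = line.partition(" ")
        host = urlparse(url).netloc
        tool = AI_TOOL_DOMAINS.get(host)
        if tool:  # ignore all non-AI traffic
            request_counts[tool] += 1
            users_per_tool.setdefault(tool, set()).add(user)
    return {
        tool: {"requests": n, "users": len(users_per_tool[tool])}
        for tool, n in request_counts.items()
    }

# Tiny illustrative log: two users, three AI requests, one unrelated request.
log = [
    "alice https://chat.openai.com/c/123",
    "bob https://claude.ai/chat/456",
    "alice https://chat.openai.com/c/789",
    "bob https://example.com/page",
]
print(ai_usage_snapshot(log))
```

In practice the input would come from whatever proxy, CASB or DNS logs the organization already collects; the value of a snapshot like this is that it describes reality before any policy is written.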

Next, establish clear ownership. Whether governance sits within IT, security or a cross-functional team, accountability must be defined. Without it, progress will remain slow and fragmented.

It is also important to invest in enablement, not just enforcement. Employees will adopt the tools that help them work more effectively. If the organization provides secure, approved options, adoption will naturally shift in that direction.

Finally, communicate openly. Employees are far more likely to follow guidelines when they understand the reasoning behind them. Transparency builds alignment and reduces the perception that governance is simply a barrier to productivity.

If you are looking for practical frameworks, resources like the National Institute of Standards and Technology AI Risk Management Framework and World Economic Forum guidance on responsible AI adoption provide helpful starting points.

The bottom line

Shadow AI is not a future concern. It is already embedded in how work gets done across most organizations. Ignoring it does not reduce the risk. It simply makes the risk harder to see.

The organizations that succeed will not be the ones that attempt to shut it down. They will be the ones who recognize it early, bring it into the open and align it with their broader strategy.

In doing so, they will turn what appears to be a risk into a source of advantage.

This article is published as part of the Foundry Expert Contributor Network.


The 4 disciplines of delivery — and why conflating them silently breaks your teams

22 April 2026, 10:00

A mid-size bank has three core applications: a loan origination system, an account servicing platform and a branch operations system. They’ve served the domestic market for over a decade — patched, extended and held together through years of incremental fixes. They work, but just barely.

Here’s the deeper problem that nobody talks about: business leadership sees three applications. What they don’t see is that distinct product capabilities are buried inside each one, never identified as standalone products. The loan origination system doesn’t just handle lending — credit decisioning, pricing, document generation, compliance workflows and customer identity verification are all tightly coupled within it. The account servicing platform has transaction processing, fee management, dispute handling, statements and regulatory reporting tangled together. The branch system has customer onboarding, relationship management and product cross-sell logic embedded as features rather than recognized as products.

Nobody ever drew the line between these capabilities because they didn’t need to. It all worked well enough for one country.

Then leadership announces: “We’re going global.”

And immediately, every assumption baked into the original design — single currency, one regulatory regime, one set of payment rails — needs to be rethought. The existing products must keep running and improving for domestic customers while the platform evolves for new markets. Some capabilities are foundational and must come first because everything else depends on them — a multi-currency ledger, a compliance engine that supports jurisdiction-specific Know Your Customer (KYC) and Anti-Money Laundering (AML) rules. Others, like localizing the domestic customer experience or designing regulatory reporting frameworks for new markets, can be built in parallel. And some, like account origination in new markets or integration with local payment networks, are blocked until the foundational layers are ready.

The sequencing is critical. Getting it wrong means months of wasted engineering effort — teams building capabilities that can’t function because the dependencies underneath them don’t exist yet.

If this sounds familiar, it should. Replace “bank” with “insurance company,” “healthcare platform” or “logistics provider” and the pattern holds. A product built for one context, patched over time, now being asked to scale beyond what it was ever designed for. The technology differs, but the organizational challenge is identical.

This is where four distinct disciplines must show up clearly — product management, technical architecture, program management and release management. Each answers a fundamentally different question. Each requires different expertise. And in my experience across enterprise environments, this is exactly where most organizations stumble. Not because they lack talent, but because these disciplines get conflated, left undefined or silently absorbed by people who were never meant to own them.

4 disciplines, 4 different questions

The confusion starts when organizations treat these disciplines as interchangeable — or worse, assume one person or team can own all of them. They can’t. Each discipline exists because it answers a question that the others are not equipped to answer.

  1. Product management answers: What are we building, for whom and why? In our banking example, this means deciding which markets to enter first, which product capabilities to extract from the monoliths and rebuild versus extend, and what customers in Germany need that differs from customers in the US. Product management owns the vision, the principles that guide decision-making and the strategy that translates vision into priorities. Without it, every other discipline is guessing.
  2. Technical architecture answers: How do we build it, so it meets the quality attributes the business requires? This is where scalability, compliance, performance and maintainability get translated into structural decisions. In our example, the architect determines that the multi-currency ledger is foundational and must be built first, that data residency constraints require region-specific deployments and that the monoliths need to be decomposed along product boundaries rather than along application boundaries. Architecture defines what is technically feasible, what constraints exist and what sequence the technical dependencies demand.
  3. Program management answers: How do we coordinate across teams, dependencies and constraints to deliver? The program manager takes the product priorities and the architectural constraints and maps the critical path. In the banking scenario, this means identifying that the compliance engine team and the ledger team are on the critical path, that three other teams are blocked until those foundations land and that two workstreams can run in parallel. Program management orchestrates — sequencing work, managing cross-team dependencies, surfacing risks and making sure everyone is working from the same timeline.
  4. Release management answers: When and how does each capability get to production safely? This is not the same as program management. In a global expansion, release management plans phased rollouts market by market, manages regulatory approval gates that differ by jurisdiction, defines rollback strategies when a release works in one region but fails compliance in another and coordinates environment strategy across multiple deployment targets. Release management is the discipline that ensures what was built reaches customers without breaking what already works.

These four disciplines form a chain. Product management defines the what and why. Architecture translates that into the how. Program management orchestrates the when and in what order. Release management executes the delivery. Remove any link and the chain breaks.

What happens when the lines blur

Back to our bank. Leadership has declared the global expansion. The vision is clear at the highest level — go global. But product management hasn’t defined which markets come first — is it Germany because of market size, or Singapore because of regulatory ease of entry? It hasn’t clarified which product capabilities to prioritize — does the bank lead with lending, which has the highest revenue potential, or with account origination, which is a prerequisite for everything else? And nobody has addressed how the domestic experience evolves alongside the expansion — do existing US customers get a redesigned platform that shares the new global architecture, or does the domestic product stay frozen while all investment goes toward new markets?

There’s energy and ambition, but no product strategy to channel it.

This is where the blur begins.

The technical architects, facing a vacuum of product direction, start making decisions that aren’t theirs to make. Without knowing which markets come first, the team builds a fully abstracted integration layer capable of connecting to any payment network globally when the first wave of expansion only required support for two European markets. Without knowing which product capabilities to prioritize, the architects begin decomposing all three monoliths simultaneously — lending, origination and servicing — spreading the team thin across parallel efforts when a clear product priority would have focused them on one decomposition at a time. And without knowing whether the domestic experience will converge with the new global platform or remain separate, they hedge by designing a multi-tenant architecture that can support both models — doubling the complexity of every design decision when a clear product direction would have eliminated one path entirely.

These aren’t bad engineering decisions in isolation — they’re rational responses to an absence of product clarity. But they result in an over-engineered foundation that takes twice as long to deliver and delays everything downstream.

The problem deepens when product management does exist but is fragmented. The bank has several product managers — one for lending, one for account origination, one for servicing, one for payments. Each is convinced their product capability is the priority. The lending product manager pushes to go first because of revenue impact. The account origination product manager argues that lending can’t function without accounts being opened first. The servicing product manager insists that launching origination without servicing in place means onboarding customers the bank can’t support. Each of them is right within their own scope — but nobody owns the cross-product view that determines the sequence. And layered on top of this is a constraint that nobody wants to own: these product capabilities are running in production today for domestic customers. Any refactoring or decomposition must happen without breaking what already works. The interdependencies aren’t just technical — they’re live business operations with real customers on the other end.

Meanwhile, program management sees the timeline slipping before a single line of code has shipped. Lacking clear product priorities, the program manager fills the gap by making market sequencing decisions based on what they can control — team availability, existing vendor relationships, which teams have capacity. The first market becomes whichever one is easiest to staff for, not the one with the highest strategic value. The sequencing is now driven by operational convenience rather than product strategy. Well-intentioned, but the wrong lens.

On the other side, architecture gets bypassed entirely when leadership pushes for speed. “Just make it work for Germany” becomes the directive. Instead of waiting for the decomposed platform the architects were designing, the team clones the domestic codebase, hardcodes German-specific fee structures and product rules directly into the application logic and stands up a separate database instance with locale-specific columns bolted onto the original schema. It works for Germany. But when the second market launches, none of it is reusable — the fee logic can’t be swapped, the product rules can’t be reconfigured, the schema can’t be extended and the organization now has two diverging codebases to maintain instead of one platform to scale.
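The structural difference between the shortcut and the platform the architects had in mind is easy to sketch. In the toy Python below, the fee values, thresholds and market codes are invented purely for illustration; the point is only the contrast between rules baked into application logic and rules held as data:

```python
from dataclasses import dataclass

# The shortcut: one market's rules hardcoded into application logic.
# Launching a second market means editing (or cloning) this function.
def monthly_fee_hardcoded(balance_eur: float) -> float:
    return 0.0 if balance_eur >= 5_000 else 4.90  # German-specific values baked in

# The reusable alternative: rules as data, fee logic written once.
@dataclass(frozen=True)
class FeeRule:
    waiver_threshold: float  # minimum balance that waives the monthly fee
    monthly_fee: float       # fee charged below that threshold

MARKET_RULES = {
    "DE": FeeRule(waiver_threshold=5_000, monthly_fee=4.90),
    "FR": FeeRule(waiver_threshold=3_000, monthly_fee=3.50),  # a new market is a new row
}

def monthly_fee(market: str, balance: float) -> float:
    rule = MARKET_RULES[market]
    return 0.0 if balance >= rule.waiver_threshold else rule.monthly_fee
```

Adding a third market to the second version is a configuration change; adding it to the first is a code change, which is exactly why the cloned codebase in the scenario above could not be reused.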

And release management? It gets collapsed into deployment. Nobody plans for the fact that regulatory approval cycles in the EU take weeks longer than domestic release cycles. Nobody defines what happens if a release passes validation in one market but fails in another. The first production incident in a new market catches everyone off guard — not because the team was incompetent, but because the discipline that would have anticipated it was never given a seat at the table.

None of these failures are caused by a lack of talent. Every person in this scenario is skilled at their actual role. The architect is a strong architect. The program manager is an experienced coordinator. The problem is that each one is being pulled into a role they weren’t meant to play, filling gaps with their best judgment but without the expertise that role demands. An architect making product prioritization decisions will optimize for technical elegance over market fit. A program manager making market sequencing decisions will optimize for execution efficiency over strategic value. The decisions get made, but with blind spots that compound over time.

The cost of dysfunction and silence

The blur described above isn’t just an organizational inconvenience. It has measurable costs — both to the business and to the people inside it.

When product direction is absent and architects over-engineer to compensate, the cost shows up as delayed time to market. The bank’s competitors enter new markets while the engineering team is still building abstractions for markets that may never materialize. When program management sequences work by team availability rather than strategic value, the cost shows up as wasted effort — entire teams delivering capabilities that can’t be used because the dependencies they sit on top of were deprioritized. When architecture gets bypassed in the name of speed, the cost shows up months later as rework. That cloned codebase that got the first market live? It now needs to be untangled and rebuilt properly before the second market can launch. The shortcut didn’t save time. It borrowed it.

These costs compound. Every decision made in the wrong discipline creates downstream consequences that the right discipline would have anticipated. An architect would have known that the cloned codebase wouldn’t scale. A product manager would have known that the third market had more strategic value than the first. A program manager would have flagged the dependency that nobody planned for. A release manager would have built regulatory approval time into the timeline. The knowledge existed in the organization — it just wasn’t in the room when the decision was made.

Then there is the cost of silence.

In many organizations, the people who feel the absence of these disciplines most acutely are the ones least empowered to demand them. An engineer who needs product direction but doesn’t receive it will quietly make assumptions rather than escalate — because escalating feels like admitting they can’t do their job. An architect who sees foundational problems stays silent because “that’s a product decision, not mine to make.” A program manager who builds timelines around undefined scope hopes it clarifies later, because raising the ambiguity feels like slowing the team down.

Over time, this silence becomes cultural. People stop asking for what they need and start working around its absence. They fill the gaps themselves — not because they want to, but because the work must move forward. And so, the architect becomes a part-time product strategist. The program manager becomes a part-time architect. The engineer becomes a part-time everything. The gaps get filled, but by people operating outside their expertise, accumulating blind spots with every decision.

The human toll is real. The people filling these gaps are often the most capable and committed people in the organization. They stretch because they care. But over time, the constant gap-filling leads to fatigue, frustration and eventually disengagement. Some burn out. Others stop stretching and start going through the motions — learned helplessness born from repeatedly solving problems that shouldn’t have been theirs to solve. And others simply delegate and walk away, not out of apathy but out of exhaustion from carrying accountability without authority.

This isn’t a story about bad people or bad intentions. It’s a story about a structural problem that manifests as individual pain.

Getting it right: Collaboration with clear accountability

The fix isn’t adding more process. It’s removing confusion about who owns what — and then building the habit of asking for what you need rather than silently filling the gap.

It starts with product management. In our banking example, this means someone explicitly owns the answers to: which markets first and why? Which product capabilities get extracted from the monoliths and in what priority? What does the customer in Germany need that the domestic customer doesn’t? What is the minimum viable product per market? These aren’t questions that can be deferred or delegated to architecture or engineering. They are product decisions, and until they are made, every downstream discipline is working with incomplete information. When multiple product managers each own a piece of the portfolio — lending, origination, servicing, payments — someone must own the cross-product prioritization. That means resolving the interdependencies: origination before lending, servicing alongside origination, payments integrated at the right point. Without this sequencing at the product level, program management is left trying to orchestrate work that hasn’t been prioritized, and architecture is left designing for a scope that nobody has bounded.

But product management setting direction is only half the equation. That direction must be translated into work that engineering teams can build. In most organizations, this translation is the responsibility of product owners — the people who take the product strategy and break it down into refined backlogs, epics and user stories with clear acceptance criteria. When this translation doesn’t happen, engineering teams are left staring at a high-level product vision with no actionable path forward. They can’t estimate, they can’t sequence their own work, they can’t identify technical risks early enough to mitigate them. The result looks like inaction from the outside, but from the inside it’s paralysis — teams that want to deliver but don’t have enough definition to start. Product management can set the right direction and architecture can define the right structure, but if product owners aren’t refining and defining the work at the level engineering needs, the delivery pipeline stalls before it begins.

Once product direction exists and is translated into actionable work, architecture can do what it’s meant to do — translate that direction into technical decisions. If product management says, “Germany and France first, with account origination and servicing as priority capabilities,” the architect can now make bounded decisions: what foundational components must be in place, how to decompose the relevant parts of the monolith, what can be reused and what needs to be rebuilt. Architecture stops being speculative and starts being purposeful. The architect also feeds back into product — surfacing technical constraints that affect feasibility, identifying dependencies that influence sequencing and flagging where product assumptions don’t survive contact with the existing codebase. This feedback loop is critical. Product direction without architectural reality-checking leads to roadmaps that look good on slides but fall apart in execution.

Program management takes the product priorities and the architectural constraints and builds the execution plan. The program manager maps which teams are on the critical path, which workstreams can run in parallel, where cross-team handoffs create risk and what decisions are blocking progress. In our example, the program manager identifies that the ledger team and the integration layer team are sequential dependencies for account origination — and that the servicing team can begin designing their decomposition in parallel. Program management doesn’t make product decisions or architecture decisions. It orchestrates the decisions that have already been made, sequences the work and surfaces risks early enough to act on them.
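The ordering work described here is, at bottom, a dependency graph. A minimal sketch using Python's standard-library `graphlib`, with workstream names and prerequisites invented to loosely match the banking example, makes the critical path explicit:

```python
from graphlib import TopologicalSorter

# Hypothetical workstreams and their prerequisites (illustrative only).
dependencies = {
    "ledger": set(),                                         # foundational
    "integration_layer": {"ledger"},
    "account_origination": {"ledger", "integration_layer"},
    "lending": {"account_origination"},
    "servicing_design": set(),                               # can run in parallel
}

# static_order() yields a valid build sequence; any workstream whose
# prerequisites are all finished could, in principle, run concurrently.
order = list(TopologicalSorter(dependencies).static_order())
```

A real program plan layers team capacity and risk on top of this, but the graph is what turns "three teams are blocked until the foundations land" into a checkable statement rather than an intuition.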

Release management takes the execution plan and ensures safe delivery. In a global expansion, this means planning phased rollouts market by market — not just deploying code but validating that each release meets market-specific requirements before it reaches customers. It means defining what happens when a release works in one market but encounters issues in another. It means coordinating environment strategy, managing feature toggles by region and building rollback plans that account for the fact that rolling back in one market can’t break another. Release management is the discipline that bridges the gap between “it works in our environment” and “it works for our customers.”
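One way to picture region-scoped releases is a per-market state table behind a feature flag, so that rolling back one market cannot disturb another. The sketch below is illustrative, with invented feature and market names, not a real release system:

```python
# Per-feature, per-market release state (illustrative values).
RELEASE_STATE = {
    "instant_payments": {"DE": "live", "FR": "canary", "SG": "off"},
}

def is_enabled(feature: str, market: str, in_canary_cohort: bool = False) -> bool:
    state = RELEASE_STATE.get(feature, {}).get(market, "off")
    if state == "live":
        return True
    if state == "canary":
        return in_canary_cohort  # only the pilot cohort sees the feature
    return False

def rollback(feature: str, market: str) -> None:
    # Scoped to one market: every other market keeps its own state.
    RELEASE_STATE[feature][market] = "off"
```

The design choice worth noticing is that rollback mutates exactly one cell of the table, which is the property the release manager needs when one jurisdiction fails a compliance gate that another has already passed.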

The key insight is that these disciplines are collaborative, not siloed. Architecture informs product about what’s technically feasible. Program management surfaces risks back to product and architecture. Release management feeds production learnings back upstream. The information flows bidirectionally. But accountability does not. Product management owns the what and why. Architecture owns the how. Program management owns the orchestration. Release management owns the delivery. When these accountabilities are clear, people stop filling gaps they shouldn’t be filling and start demanding the inputs they need to do their own job well.

The shift isn’t about hierarchy or blame. It’s about giving every discipline the clarity to operate at its best — and the permission to push back when that clarity is missing.

The goal? Make the invisible visible

Product management, technical architecture, program management and release management are not interchangeable. They are not different labels for the same work. They are distinct disciplines, each with its own expertise, its own accountability and its own contribution to getting complex products built and delivered.

The banking example in this article is specific, but the pattern is universal. Any organization scaling a product beyond what it was originally designed for — whether expanding globally, modernizing a legacy platform or decomposing monoliths into products — will hit the same inflection point. The disciplines described here either show up clearly or they don’t. When they do, teams move with purpose. When they don’t, talented people burn energy compensating for structural confusion.

The goal isn’t to add bureaucracy. It’s to make the invisible visible. Name the disciplines. Define who owns each one. Make it safe to say, “I need product direction before I can make this architecture decision” or “I need the architectural constraints before I can build this timeline.” The moment people stop silently filling gaps and start openly asking for what they need, the system begins to self-correct.

These four disciplines aren’t overhead. They’re the operating system of delivery. And like any operating system, they work best when each component does its job and stays out of the way of the others.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Why the CIO is uniquely positioned to lead the digital workforce

April 21, 2026, 07:00

In March, OpenAI’s GPT-5.4 set a new high-water mark on GDPval, exceeding industry professionals in 83.0% of comparisons across 44 occupations. In April, Anthropic’s Claude Opus 4.7 signaled a similar advance: a 13% lift on a 93-task coding benchmark and a 14% gain on complex workflows. Anthropic’s limited-release Claude “Mythos Preview” scores an additional 13% on top of Opus 4.7 on SWE-bench Pro, the most challenging coding benchmark. This progress has changed the enterprise conversation because AI is now doing much more than generating answers. It’s at a point where it can complete meaningful pieces of work.

In sharp contrast, the previous two years were defined by hesitation across the C-Suite. In early 2024, IBM found that 42% of enterprise-scale companies had actively deployed AI, while another 40% were still exploring or experimenting. In 2025, IBM reported that only 25% of AI initiatives had delivered expected ROI and only 16% had scaled enterprise-wide. McKinsey’s 2025 survey showed the same pattern at a broader level: nearly nine in ten organizations were using AI in at least one function, yet most remained in piloting or early scaling, and only 39% reported any business impact. The mood across 2024 and 2025 was distinctly shaped by curiosity and pilot activity with persistent questions about payoff.

From pilots to production

Against that backdrop, 2026 has ushered in a new era of workflow accuracy with strong gains in places where enterprises can see immediate value: spreadsheets, document-heavy analysis and software development. For example, OpenAI’s GPT-5.4, released in March 2026, is 33% less likely to produce a false response than GPT-5.2 on a set of real-world prompts. Anthropic positioned Opus 4.7 as a model that plans more carefully and sustains longer-running work. At the same time, Deloitte reported that workforce access to sanctioned AI tools had risen from fewer than 40% to around 60% in one year, and that 85% of companies expect to customize autonomous agents for their own businesses.

Markets have picked up the implications quickly. By the first week of February, stocks had lost about $1 trillion in market value as investors worried that fast-advancing AI tools could upend the sector. Part of the shock came from how directly the new systems were moving into core business workflows. Reuters reported that Anthropic launched plug-ins for legal, sales, marketing and data-analysis tasks, then added more plug-ins for investment banking, wealth management, HR, private equity, engineering and design. OpenAI, meanwhile, formed an alliance with BCG, McKinsey, Accenture and Capgemini to give AI pilots greater legitimacy and a clearer path to scale through consulting firms that enterprises already trust to guide major transformation efforts. Investors were reacting to a simple idea: software was beginning to perform work that had once lived inside teams and SaaS products.

The CIO becomes steward of digital labor

As AI takes on more structured cognitive work, enterprises gain a new layer of digital labor. Someone must decide where that labor fits, how it connects to core systems and data, how its output is measured, where human oversight remains essential and how risk and accountability are managed. Those responsibilities sit naturally with the CIO because they span the very domains the role already oversees: enterprise platforms, security, governance, integration, operating workflows and the architecture that links technology to business execution. The CIO is also one of the few leaders with visibility across functions, which makes the role especially well-suited to determining where digital labor can scale, where it needs guardrails and how it should reshape the way work gets done. The mandate now extends beyond running systems. It includes stewarding systems that increasingly execute work.

This pushes the CIO deeper into business strategy. Now that AI is accurate enough to redesign workflows, the challenge has become operational, economic and organizational. Which tasks should move to agents first? Where does human judgment create the most value? Which functions benefit most from faster analysis and machine-assisted execution? The answers shape speed, margins, customer experience and competitive differentiation. In this environment, the CIO becomes one of the executives most responsible for translating technical progress into business-model advantage.

The next source of advantage will come from converting company-specific judgment into executable systems. Frontier models are spreading quickly across the market, which brings up a different question: whose policies, pricing logic, approval paths, customer context and exception rules are being encoded into workflows that agents can execute with confidence? Much of a company’s edge lives inside those decisions. The CIO stands at the center of that conversion because turning institutional know-how into reliable machine action requires data access, process redesign, system integration and governance working together.

As AI access broadens and use becomes routine, the CIO’s role increasingly includes leading cultural change. Teams need training, new operating norms, trusted guardrails and clear accountability for outputs shaped by AI. Roles are beginning to shift toward judgment, exception handling, taste and decision-making. The most effective CIOs will treat this as work redesign rather than a tool rollout. They will build a blended workforce in which people and digital workers are orchestrated together with intention.

Turning AI capability into operating advantage

AI’s promise is growing faster than most enterprises’ ability to capture its value. Yet only 12% of CEOs report higher revenues from AI. Given the CIO’s role as an execution leader, the gap between what the technology can do and what the business realizes is exactly where CIO leadership matters most. The enterprise needs someone who can turn AI from enthusiasm into operating discipline by selecting workflows with measurable upside, embedding governance into deployment, managing vendors and models coherently and proving that digital labor can scale safely inside the business. This is where CIOs can truly shine. The organizations that win this phase will treat AI as a managed workforce layer with standards, accountability and clear ownership.

The next management discipline will look like workforce management fused with managerial accounting. Leading CIOs will track digital labor through business metrics: cost per accepted outcome, cycle time, error and rework levels, escalation patterns and the share of output that still requires human repair. Those measures show where AI is compounding value, where it is creating hidden friction and where human oversight continues to carry the greatest economic return. The enterprises that build this measurement layer early will scale AI with evidence, steer investment with far more precision and learn faster than competitors how to allocate work across people and machines.
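As a toy illustration of that accounting, consider a log of agent task outcomes. The records, costs and field names below are invented; real definitions of "accepted" or "rework" would come from the organization's own review process:

```python
# Hypothetical outcome log for one AI-assisted workflow.
tasks = [
    {"cost": 0.40, "accepted": True,  "reworked": False, "escalated": False},
    {"cost": 0.40, "accepted": True,  "reworked": True,  "escalated": False},
    {"cost": 0.40, "accepted": False, "reworked": False, "escalated": True},
]

accepted = [t for t in tasks if t["accepted"]]

# Cost per accepted outcome: total spend divided by outcomes actually used.
cost_per_accepted_outcome = sum(t["cost"] for t in tasks) / len(accepted)

# Share of accepted work that still needed human repair.
rework_rate = sum(t["reworked"] for t in accepted) / len(accepted)

# Share of all tasks escalated back to a person.
escalation_rate = sum(t["escalated"] for t in tasks) / len(tasks)
```

Even this toy version surfaces the article's point: per-task cost looks flat at 0.40, but the cost per outcome the business can actually use works out to 0.60, and that gap is where the hidden friction lives.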

AI is making the next chapter of IT leadership bigger than infrastructure and more consequential than another round of digital transformation rhetoric. As software begins to perform meaningful work, the CIO becomes the steward of the digital workforce. The role now extends into strategy, growth, talent, culture and operating model design. In 2024 and 2025, enterprises were asking whether AI would ever justify itself. In 2026, the more urgent question is where AI can reshape workflow economics first. CIOs will be the executives who answer it.



The metric missing from every AI dashboard

April 20, 2026, 09:00

Across industries, the conversation around AI has centered on capability. How fast can we implement it? Where can we automate? How much efficiency can we unlock? Those are reasonable questions. But they are not the only ones that matter.

A recent Gartner report found that 91% of CIOs and IT leaders say their organizations dedicate little to no time scanning for the behavioral byproducts of AI use. The same research makes something else clear: Preserving the resilience and safety of the workforce in the AI era is not simply a well-being initiative. It is tied directly to productivity.

As an industry, we measure performance gains carefully, yet we track psychological strain far less closely. Failing to measure something that directly affects productivity, culture and trust is more than a gap in analytics. It is a governance blind spot. That blind spot greatly concerns me.

The invisible psychological cost of acceleration

When AI systems enter workflows, the early data often looks promising: Output improves; turnaround time shortens; quality rises. What takes longer to surface is the human response to that acceleration.

As AI begins handling tasks that once required deep technical judgment, employees can start to wonder, internally, what happens to the expertise they spent years building. Cognitive offloading increases efficiency, but it also shifts the relationship between a person and their work. When that shift happens too quickly, even capable employees can feel a subtle loss of mastery. That feeling rarely shows up in a dashboard. Instead, it can subtly change how people show up at work.

Job insecurity concerns often follow, though not always in obvious ways. It is not just about the fear of losing a role. More often, it is about uncertainty. When responsibilities blur and systems take on decision-making tasks, ambiguity increases.

Many AI systems operate as “black box” models: Systems whose internal reasoning is not fully transparent. When employees are expected to act on outputs they cannot fully explain, accountability can feel heavier. If something goes wrong, who is responsible? Lack of explainability increases perceived risk, and perceived risk increases stress.

Layer onto that the rise of AI-powered monitoring tools. Even when introduced with good intentions, continuous evaluation can feel different from periodic feedback. Some employees experience it as support. Others experience it as surveillance. This perception matters. Trust may start to erode until it’s razor-thin.

The real-world impact of AI’s mental health strain

Slowly, employee behavior begins to adjust to this environment. Research highlighted by HR Reporter found that when employees feel threatened by AI adoption, they may respond with knowledge-hiding behaviors instead of collaboration. Self-protection begins to replace openness. Not because people are unwilling to contribute, but because they are trying to preserve their own relevance.

Motivation shifts as well. A recent Harvard Business Review study found that while generative AI improved task quality and productivity, it reduced intrinsic motivation by about 11% and increased boredom by roughly 20%. Additional research published in Behavioral Sciences suggests that sustained reliance on AI tools can alter emotional engagement with work over time. Therein lies the tension: Output improves as engagement declines.

Not to mention workload issues. AI is often introduced with the promise of reducing effort. Yet as Harvard Business Review recently noted, AI does not necessarily reduce work. It can create an intensity that boomerangs back on the workforce. When friction drops, expectations expand. Employees take on more work because they can. They operate at sustained speed because the system allows for that. Unfortunately, what looks at first like efficiency can slowly become fatigue.

None of these dynamics exists in isolation. They actually reinforce one another. Reduced confidence feeds insecurity. Insecurity alters behavior. Intensified workload accelerates exhaustion. And not everyone acclimates at the same pace.

What leaders risk overlooking

In many organizations, performance dashboards light up before psychological ones even exist. We track uptime, output, cost savings and deployment velocity. We rarely track confidence, perceived relevance or how long it takes someone to recover after a public error.

Stress does not always present as resistance. For managers, that distinction matters. Sometimes it shows up as overextension, employees taking on more than is sustainable because they feel pressure to prove continued value in an AI-enabled environment. A manager relying heavily on AI-generated analysis may not notice that dynamic until it has already done damage.

Isolation is another signal worth watching. As AI mediates more interactions, peer collaboration can quietly thin out. Work becomes efficient but less communal, and over time, that shift erodes belonging and morale in ways that don’t show up on any dashboard.

Leadership itself is not immune. AI can draft performance reviews, summarize meetings and generate strategy outlines at remarkable speed. But as McKinsey has observed, while AI can write, design and code, it cannot do the hard work of leadership.

Mentorship, context-setting and ethical judgment remain deeply human responsibilities. If leaders outsource too much of the relational aspect of leadership to AI systems, employees may experience a subtle loss of support. None of this happens overnight, which makes it extremely easy to miss.

Resilience as governance

Research published in Nature defines psychological resilience as the ability to recover or grow stronger in the face of adversity. Importantly, the study suggests that individuals with higher psychological resilience are more likely to maintain confidence and optimism when facing perceived career threats posed by AI.

Resilience, then, is not abstract. It is measurable. It influences how people interpret change. If we accept that adaptation stress is predictable in an AI-enabled environment, then resilience cannot be left to chance.

Resilience must be built into how AI is deployed from the start. That begins with clarity. When leaders are explicit about how AI will be used, what will change and what will remain human-led, speculation has less room to grow. Ambiguity answers itself quickly, and usually with anxiety.

Clarity also extends to accountability. Employees need to understand where AI outputs end and where human judgment still carries responsibility. When that boundary is blurred, stress increases because no one is fully sure where decisions should live.

Over time, the conversation has to move beyond protection and toward growth. Reskilling is not only about preserving roles; it signals that relevance can evolve. When organizations invest in helping people adapt alongside technology investments, they reinforce stability rather than erode it.

Trust must be protected as carefully as performance. Surveillance capabilities and AI-enabled analytics should be implemented with intention and oversight. And, if we are serious about resilience, we should measure it.

Just as we track deployment velocity and system performance, we can track engagement, skill confidence and recovery time after errors in high-speed environments. Behavioral byproducts are not soft signals. They influence performance as directly as any technical metric.

Gartner research is direct: Preserving workforce resilience and safety in the AI era is a core responsibility, not just for well-being but for productivity itself. If 91% of CIOs report dedicating little to no time scanning for these behavioral effects, then there is an opportunity and perhaps an obligation to lead differently. Resilience should sit beside capability on the technology agenda.

A final reflection

Change has a way of exposing what we have not prepared for.

When I think about the pace of AI adoption, I do not feel alarmed. I feel thoughtful. Technology has always advanced faster than our comfort with it. What matters is not whether it moves quickly; it is whether we move wisely.

In moments of rapid change, it is tempting to focus only on what is measurable. Speed. Output. Efficiency. The bottom line. Those are tangible. But what often determines long-term success is less visible: Whether people feel steady, capable and trusted as the ground shifts beneath them.

AI will certainly continue to improve. What is less certain is whether leaders will give equal attention to the human side of the transformation. Confidence cannot be automated. Trust cannot be generated by a model. Those remain leadership responsibilities.

If we approach AI with both ambition and care, we can build organizations that are not only more capable but more durable. That is a standard worth holding.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


We finally built the time machine. Now we’re wasting the time we created

17 de Abril de 2026, 07:00

Technology has always sold the same promise: freedom from tedious labor to accomplish greater things. For most of human history, that freedom was slow to realize, as eliminating one time-consuming task usually meant another would take its place. Then, over the past two decades or so, technology finally began fully delivering on its promise.

Grocery delivery. Ride-sharing. Tap-to-pay. Automated bill payments. The list goes on, and the math is real — we’ve collectively recovered countless hours every week that used to disappear into errands, waiting rooms and the small frictions of modern life.

We finally built the time machine. Mission accomplished, right?

Maybe not. I came across a comment recently that gave me pause. Someone posted that they had expected AI to do the dishes and walk the dog so they could spend more time developing art and music — but instead it’s the other way around. That inversion concisely sums up exactly what’s been keeping me up at night. Now that we have truly begun to free up our time, what are we doing with it? Are we wasting it?

Mostly, the answer seems to be: yes. We’re spending countless hours watching screens. Gaming. Scrolling. Consuming at an industrial scale. I find this troubling, not because entertainment is inherently bad, but because the primary beneficiaries of our automation revolution have been entertainment platforms and targeted marketing engines.

By nearly every measure, our technology revolution has a primary casualty, and it’s the meaningful use of time we swore we’d reclaim. Now we are standing at a second, far more consequential inflection point: the arrival of agentic AI. And I worry we’re about to make the same mistake again, only at a much larger scale.

Spotting the visible pattern

In identity verification and fintech, I see automation reclaim enormous amounts of time that used to go into manual processes, compliance checks and customer friction. However, that time and cognitive overhead rarely flow into solving more difficult problems. Instead, they flow into growth metrics, engagement loops and increasingly sophisticated ways to sell things to people.

This is the time machine problem. We automate away the burden. We fill the gap with meaningless consumption. We call it progress.

The question agentic AI forces us to confront is whether that cycle is inevitable, or whether we’re simply allowing the choice to be made for us.

Considering what agents could actually do

What if an agent could find and book the right flight, complete the purchase and handle the verification steps along the way — without you lifting a finger? What if a similar agent could scan your financial accounts, spot the better insurance rate and switch you over after a quick prompt to ensure you’re approved? And the use cases hardly stop there: what about agents that solve rush-hour traffic, streamline appointments at the doctor’s office and research potential schools for a busy family?

These are hardly science fiction scenarios. They’re the obvious applications of technology that already exist. The friction points that genuinely stress people out, like gridlock, waiting in queues and struggling through an ocean of information, are exactly the problems a well-built agent could dissolve.

Instead, agents are being optimized as better marketers. Smarter recommendation engines. More persuasive nudges toward purchase, now that we have extra time to consume. This is technology coming at people rather than working for them.

The distinction matters more than it might appear at first glance. An agent that knows when to leave the house to avoid traffic is expanding your life. An agent that knows your behavioral triggers and monetizes them is exploiting it.

Understanding where trust becomes structural, not philosophical

If agentic AI is going to have genuine access to your time, your decisions, your schedule and eventually your finances, then we need to understand who built the agent and what it’s designed to do.

This is not paranoia; it’s the same logic that underpins Know Your Customer and Know Your Business requirements in financial services. When money moves, we verify the parties involved. When agents start moving through our lives with real decision-making authority, we must apply the same rigor.

I think of it as Know Your Agent: KYA.

The questions KYA needs to answer are straightforward, even if the engineering isn’t yet. Was this agent built by a known, verifiable company, or by an unknown developer operating without accountability? Is this agent acting on behalf of someone with interests aligned with yours? Has the agent been updated or compromised since you last trusted it?

In identity verification, we’ve spent years getting upstream of fraud by verifying origins, not just behaviors. The same principle applies to agents. Behavioral monitoring is useful, but it catches problems after they’ve started. Verifying the origin and mandate of an agent catches them before they ever reach you.
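The three KYA questions can be sketched as a pre-authorization gate that runs before an agent is granted any authority. This is a minimal, hypothetical illustration: the manifest fields, the `TRUSTED_PUBLISHERS` registry and the `kya_check` function are assumptions for the sketch, not an existing standard or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentManifest:
    """Hypothetical self-description an agent would present before acting."""
    publisher: str   # who built the agent
    mandate: str     # what it is authorized to do on your behalf
    code_hash: str   # fingerprint of the specific build you vetted

# Illustrative registry of publishers verified out of band, mapping each
# to the build fingerprints previously trusted. A real system would back
# this with signatures and revocation, not a hard-coded dict.
TRUSTED_PUBLISHERS = {
    "example-verified-co": {"a3f1c0de"},  # placeholder fingerprint
}

def kya_check(manifest: AgentManifest, allowed_mandates: set[str]) -> bool:
    """Answer the three KYA questions in order: known builder?
    aligned mandate? unchanged since last trusted?"""
    known_builds = TRUSTED_PUBLISHERS.get(manifest.publisher)
    if known_builds is None:
        return False  # unknown builder operating without accountability
    if manifest.mandate not in allowed_mandates:
        return False  # agent's mandate isn't aligned with your interests
    return manifest.code_hash in known_builds  # updated or compromised build fails
```

The point of the gate is the ordering: origin and mandate are verified up front, so a failed check stops the agent before it ever reaches you, rather than after behavioral monitoring notices something wrong.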

Without KYA as a foundation, we’re intertwining agents into our lives and hoping for the best — extending trust to systems whose actual purpose we can’t confirm.

The choice in front of us requires action

Companies building agentic AI are making decisions right now about what those agents optimize for. Those decisions aren’t inevitable; they’re just choices. And those companies are heavily incentivized toward engagement and monetization, rather than genuine human utility.

Consumers can push back, but only if they have the information to do so. Regulatory frameworks will eventually catch up, but they rarely lead. The most durable pressure for change comes from clearly articulating what we actually want from this technology and building the verification infrastructure to enforce it.

We’ve proven we can build the time machine. The more challenging question is whether we have the wisdom to decide where it takes us.



The best employee motivators connect emotions to responsibilities

14 de Abril de 2026, 06:30

Effective leaders motivate employees. But what exactly does that mean?

According to your generic internet dictionary, motivate (v) means to give (someone) a reason for doing something.

That isn’t a bad definition as these things go, but it’s too transactional to be satisfactory.

If you as CIO are trying to motivate an employee, you probably aren’t trying to get them to understand and accept an assigned task. More likely you want the employee to bring more energy to all their efforts, as well as more creativity and focus.

Above all you want them to care about their contributions to the overall organization’s success. Motivation is about connecting emotions to responsibilities.

Effective leaders have six concrete techniques they can use to motivate employees. Here’s a rundown of these magical motivators, in decreasing order of utility.

Approval

Most people, most of the time, value the opinions of others. That, along with the fact that it costs nothing to use, is why, when it comes to motivation, approval is the leader’s workhorse.

Effective leaders use approval through the simple expedient of noticing and complimenting an employee for well-done efforts and results.

Effective leaders actively seek opportunities to give compliments. But there’s a balancing act. A leader’s compliments must pass three tests: (1) they have to be specific, connecting to something important; (2) the accomplishment has to have been difficult enough to be worth mentioning; and (3) they must be given in a public setting.

Approval done right: “Jim, thanks for helping Melissa figure out how and when to use Teams Channels. She seems far more comfortable with it than she was before, and this helps the whole team be more effective. Well done. And Melissa, thank you for wanting to acquire this skill. While I’m at it, everyone else: If you haven’t figured out Teams Channels, now you know who to ask.”

It’s specific, important, and given in a public setting.

That’s as opposed to: “Thanks for everything you’re doing on the project, Jim. Keep up the good work!” Which is vague and not attached to anything important.

One more bit of guidance: Don’t turn yourself into a Compliments Factory. Approval must be difficult enough to earn that it stays meaningful, but not so difficult that you get a reputation for being impossible to please.

Exclusivity

Back in the day, the US Marine Corps ran television ads extolling “The few, the proud, The Marines!” Becoming a Marine was a coveted achievement, and this motivated many aspiring soldiers to expend the great deal of effort necessary to become a member of this exclusive group.

Exclusivity is a powerful motivator. Use it with caution, though, because it can backfire: US Army troops are likely to be less than enthusiastic about the implied insult — that they are, compared to the Marines, second-class soldiers.

Which isn’t a problem unless you need those on your exclusive team and those who haven’t achieved that status to collaborate.

Greed

As motivators go, greed is the most widely misunderstood. Those who like it figure that if they dangle the possibility of spot bonuses, everyone eligible will jump through hoops to get one.

Maybe they will; maybe they won’t. But if they do, and you award the requisite monetary reward, all that does is set a new baseline; the bonus becomes an entitlement, while everyone else, who didn’t get a bonus, wonders when it will be their turn to drink at the spigot.

Which doesn’t mean there’s never a time or context in which awarding a spot bonus makes sense. It’s all in how you communicate the rationale for giving one. The trick is recognizing that money isn’t a motivator for additional effort. It’s the organization’s loudest voice, which makes it an excellent tool for demonstrating a leader’s sincerity when they say, “Thank You!”

Fear and anger

While these are separate “magic motivators,” for our purposes, we’ll lean on their complementarity to address them together as one.

If you want to get someone to up their level of effort, fear and anger are your go-to motivators. If we’re afraid of something, we’ll find ourselves running away from it faster and longer; if we’re angry about it, we’ll run at it faster and longer.

That’s all to the good, except for fear and anger’s inescapable side effect: Fear and anger make a person stupider, too.

Fear and anger lead to mistakes that can be more costly than what you’d get from using approval or exclusivity.

Also, take care when you choose fear or anger’s target. Make someone afraid of you and you’ll turn yourself into a bully. Don’t do that. Bully bosses make sure everyone tells them what they want to hear, while nobody tells them what they need to know.

Make them afraid of the situation instead. “If you don’t pick up the pace, I’ll fire you!” turns the leader into a bully. That’s as opposed to this: “If we don’t all up our game the company will be thinking of us as layoff targets.” It might still make the newly motivated employees dumber, but they won’t be afraid of you.

Guilt

What are you, their mother?

“You let the team down. What are you going to do to make it up to them?”

Maybe it’s motivating, but it also isn’t fostering a culture of adulthood. You should want employees who take responsibility because that’s how successful employees in your organization approach their work, not employees who wring their hands, hoping nobody will notice or be too critical of them.

Isn’t all this being manipulative?

If by “manipulative” you mean consciously using techniques that improve performance, as opposed to unconsciously using techniques that improve performance, then yeahbut. Sure, consciously deciding how to interact with employees to get them to perform better is manipulative. That isn’t the question you should be asking.

The better question is, Why should being intentional about such things seem unethical? If it strikes you that way, you’re probably imagining a leader soliloquy along the lines of, “Hah! You’re performing better than you did last year. Fooled you!”

As is so often the case, intentions are everything. If one leader uses these techniques to trick employees into performing better while a second one uses them to encourage employees to bring their best energy and creativity to the party, then we might judge one of the two to be a better person.

But being a better person is qualitatively different from being a better leader. If you’re looking to be a better person, contact your favorite clergy. If you’re looking to be a better leader, stick with me.
