Countdown to submit nominations in Spain for the CIO 50 Awards

Once again, the benchmark awards return to recognize the best information systems executives (CIOs) in Spain and the country's most innovative IT projects. The initiative, known as the 'Oscars of the IT industry', is part of the global CIO Awards program through which the international publication CIO, part of the Foundry publishing group, highlights the work of top-level executives capable of driving valuable business results through digital leadership, strategic vision, and technological innovation.

This time the awards land in Spain under the name CIO 50 Awards. Nominations for the 2026 edition are open until May 29, and the awards ceremony will take place on October 8 in Madrid, as part of a major conference held in parallel and focused on the theme "Responsible technology leadership, resilience, and digital governance in the Spanish context." During the event, winners from previous editions and this year's candidates will be able to share their success stories with other IT leaders, creating an invaluable peer-learning experience.

Who can participate

CIOs and other technology executives and managers from companies, public administrations, or nonprofit organizations (NGOs) are eligible for the CIO 50 Awards.

Executives entering the competition must operate at the highest level of technology and transformation strategy and execution, as the CIO 50 awards recognize leaders who set their organization's direction, contribute to board-level decisions, and influence large-scale technology investments. One requirement for nomination is that CIOs have been at their current organization for at least one year.

Consultants; IT, software, or hardware vendors; and market research or information services firms are not eligible for the CIO 50.

How winners are chosen

As in previous editions, nominations will be evaluated by an independent jury that will assess aspects such as the challenges faced in the projects and the solutions implemented; the benefits and improvements achieved; the business impact (cost optimization, margin improvement, revenue growth); and the productivity gains and transformation of business processes delivered through IT.

The jury comprises Fernando Muñoz, director of CIO Executive by Foundry; Esther Macías, editorial director of CIO and COMPUTERWORLD in Spain; retired veteran CIOs José María Tavera, who led IT strategy at giants such as Telefónica and Acciona, and José María Fuster, who headed IT at Banco Santander and is now a trustee of the Fundación Real Academia de Ciencias de España; Dimitris Bountolos, CIIO of Ferrovial and winner of the CIO of the Year category at the 2025 edition of the CIO 100 Awards Spain; Gracia Sánchez-Vizcaíno, CIO for Iberia & Latin America at Securitas Group; Mar Hurtado de Mendoza, global VP of recruitment at IE University and adjunct professor at the business school; and Patricia Arboleda, president of Women in Tech – Spain.

A local distinction with a global soul

The history of the CIO 100 and CIO 50 awards for excellence in enterprise IT goes back more than three decades, when they were first presented to executives in the United States, later extending to other markets such as Germany, the United Kingdom, Spain, Singapore, Australia, South Korea, and India.

It is a key initiative for recognizing achievements, sharing knowledge, and connecting an influential community of IT decision-makers.

CIO, part of the Foundry group, is currently accepting nominations for the CIO 100 and CIO 50 awards in the following countries/regions:

  • CIO 100 USA (August 2026) – Nomination phase closed; conference registration is open here. More information
  • CIO of the Year Germany (October 2026) – Nomination deadline: May 15, 2026. More information
  • CIO 100 UK (September 2026) – Nomination deadline: May 21, 2026. More information
  • CIO 50 Spain (October 2026) – Nomination deadline: May 29, 2026. More information
  • CIO 100 India (September 2026) – Nomination deadline: June 5, 2026. More information
  • CIO 100 Australia (September 2026) – Nomination deadline: June 19, 2026. More information
  • CIO 100 ASEAN (November 2026) – Nomination deadline: July 27, 2026. More information
  • CIO 50 Japan (December 2026) – Nomination deadline: mid-August 2026.

What CISOs need to get right as identity enters the agentic era

Identity has always been central to security, but the proliferation of AI agents is rapidly changing the challenge of managing and securing it, spurring CISOs to rethink their identity strategies, and even how identity itself is defined.

“Identity is now both a control surface and an attack surface. We’ve had non-human identities as API keys, tokens, service accounts, but now we have agents, and that’s a new class,” says Dustin Wilcox, senior VP and CISO at S&P Global.

The challenge is attributing actions to non-human identities because the typical signals don’t apply. “The techniques to identify a person, like the telemetry of how they use the keyboard, we won’t be able to do that when it’s an agent that’s working entirely digitally,” Wilcox tells CSO.

And as agents proliferate, it becomes difficult for CISOs to maintain a complete picture of how many exist, what they’re used for, and what they’re authorized to do.

“With a human identity, you can validate access needs directly. With service accounts, and now with agents, that clarity is harder to achieve,” says Docusign CISO Michael Adams.

“Treating them as if they fit existing models can create gaps in visibility and control. At the same time, AI systems are contributing to rapid growth in non-human identities, including the creation of new credentials and tokens, which many inventory processes weren’t designed to track,” he adds.

“And on the human side, generative AI is making social engineering more convincing, eroding some of the behavioral signals defenders have historically relied on. The result is an expanding attack surface at the same moment traditional indicators are becoming less reliable,” Adams tells CSO.

The advice for CISOs is to adopt an identity-first security model that treats identity as the foundational layer of the security architecture.

“Every access decision flows through identity and is continuously verified, not just checked at the door,” says Adams.
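The "verified continuously, not just checked at the door" idea can be sketched in code. This is a minimal illustration with hypothetical names and thresholds, not any vendor's actual implementation: every request re-checks entitlement, device posture, and token freshness, rather than trusting a one-time login.

```python
from dataclasses import dataclass
import time

@dataclass
class AccessRequest:
    identity: str           # human or non-human principal
    resource: str
    token_issued_at: float  # epoch seconds
    device_trusted: bool

# Hypothetical policy: force re-verification every 15 minutes.
MAX_TOKEN_AGE_S = 900

def is_allowed(req: AccessRequest, entitlements: dict) -> bool:
    """Continuously verified access decision: identity, entitlement,
    device posture, and token freshness are all checked per request."""
    token_fresh = (time.time() - req.token_issued_at) < MAX_TOKEN_AGE_S
    entitled = req.resource in entitlements.get(req.identity, set())
    return token_fresh and req.device_trusted and entitled

entitlements = {"svc-billing-agent": {"invoices:read"}}
fresh = AccessRequest("svc-billing-agent", "invoices:read", time.time(), True)
stale = AccessRequest("svc-billing-agent", "invoices:read", time.time() - 3600, True)
print(is_allowed(fresh, entitlements))  # True
print(is_allowed(stale, entitlements))  # False: stale token forces re-verification
```

The point of the sketch is that no single check is sufficient on its own; a valid token on an untrusted device, or a trusted device with a stale token, is denied either way.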

Identity becomes the primary control plane

CISOs are now managing a new class of identities that includes copilots, autonomous agents, and AI-powered workflows that don’t fit neatly into existing frameworks. And they can access systems, take actions, and make decisions at machine speed.

As a result, Adams says CISOs will increasingly need to adopt an identity-centric security architecture and there are several key tenets to consider.

Build a strong foundation before layering on complexity. The instinct when modernizing an identity program, says Adams, is to reach for sophisticated tooling. Instead, his advice is to get the fundamentals in place — clean directories, enforced least privilege, and reliable offboarding processes.

“Organizations that jump to continuous verification without establishing basic identity hygiene may find themselves building on an unstable foundation,” he says.
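One of those fundamentals, reliable offboarding, is easy to audit programmatically. The following is a toy sketch with invented data, comparing an HR departure list against a directory export to flag accounts that should have been disabled:

```python
from datetime import date, timedelta

# Hypothetical exports: HR departures and the current directory state.
hr_departures = {"jdoe": date(2026, 1, 15)}          # username -> last day
directory_accounts = {"jdoe": True, "asmith": True}  # username -> enabled?

def offboarding_gaps(today: date, grace_days: int = 1) -> list:
    """Flag still-enabled accounts whose owner departed more than
    grace_days ago -- a basic identity-hygiene check."""
    gaps = []
    for user, last_day in hr_departures.items():
        if directory_accounts.get(user) and today > last_day + timedelta(days=grace_days):
            gaps.append(user)
    return gaps

print(offboarding_gaps(date(2026, 3, 1)))  # ['jdoe']
```

A real program would pull from the HR system and directory APIs, but the reconciliation logic is this simple; what organizations usually lack is a clean, authoritative source on each side.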

Design for the new class of identities. When designing role models and access policies, the temptation is to mirror existing structures.

“That often carries years of permission creep into a new architecture. Starting from least privilege rather than from legacy helps ensure users receive only the access required for their job functions,” he says. “It’s important to challenge ‘it’s always been done this way’ where appropriate.”

Get your non-human identity inventory in order. Build a full inventory of non-human identities that records who is responsible for each identity and what each one is authorized to do. Do this before more agents come online.

“This is as much a governance challenge as a technology one,” he notes.
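The governance point can be made concrete with a minimal inventory record. This is an illustrative sketch, with hypothetical identity names, showing the two fields Adams calls out (accountable owner, authorized actions) and a basic check that surfaces ownerless identities:

```python
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    name: str
    kind: str              # e.g. "service-account", "api-key", "agent"
    owner: str             # accountable human or team
    authorized: set = field(default_factory=set)

# Hypothetical inventory entries.
inventory = [
    NonHumanIdentity("ci-deployer", "service-account", "platform-team", {"deploy:prod"}),
    NonHumanIdentity("support-agent", "agent", "", {"tickets:write"}),  # no owner
]

# Governance check: every non-human identity must have an accountable owner.
orphans = [i.name for i in inventory if not i.owner]
print(orphans)  # ['support-agent']
```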

Treat MFA as a starting point, not a destination. The identity roadmap needs to include phishing-resistant alternatives to SMS or push-based MFA. Least privilege, micro-segmentation, and continuous monitoring are part of the playbook.

“Assume credentials may be compromised and architect accordingly,” Adams advises.

AI and the shifting security balance

Identity systems have long been targets for attack. But as identity becomes the primary control plane, the risk becomes more concentrated and requires a different approach.

“I’d encourage every CISO to think deeply about the intersection of identity and AI,” says Adams, adding that systems need to be redesigned around the principle of intent instead of actual behavior to ensure agents operate within appropriate boundaries.

“That requires behavioral monitoring and real-time access evaluation — capabilities many organizations are still building toward,” he notes. “That’s the work ahead.”

Wilcox is ultimately optimistic that AI offers security practitioners more tools to combat malicious actors. If CISOs can get this right, it’s a way to level the playing field with the attackers in a way not previously available.

“We’ve had this asymmetric playing field where they’ve had the advantage for as long as I can remember. Now we can use AI both strategically and tactically to improve our defenses,” he says.

Agentic AI is rewriting the identity security playbook in real time, and your peers are already adapting. Hear Dustin Wilcox, Michael Adams, Renee Guttmann, and other leading CISOs share what’s actually working at the CSO Cybersecurity Awards & Conference, May 11–13. Secure your seat before it fills up.

CIO 100 Leadership Live Los Angeles: CIOs confront the AI execution gap

Enterprise AI initiatives are producing uneven results as organizations struggle to convert widespread experimentation into focused, repeatable business outcomes. This was the throughline during the CIO 100 Leadership Live Los Angeles conference on April 16, at the Torrance Marriott Redondo Beach.

A consensus emerged around key constraints limiting enterprise AI’s contribution to transformation objectives. Among them are misalignment between AI initiatives and operating models, fragmentation and confusion around business ownership, the need for dynamic data governance, and, perhaps most importantly, the need for a corporate culture that allows the human element to keep pace with the rate of change in today’s agentic economy.

Leadership moments define transformational outcomes

Keynote speaker and serial entrepreneur Chris Dyer kicked off the conference with a key insight about IT leadership today: Execution is shaped by how leaders act in those few defining moments when priorities, risk tolerance, and accountability are tested.

“You’ll remember less than 1% of this year,” he said. “But that fraction will define how your team sees you, whether your best people stay, and whether your biggest initiatives gain traction or quietly stall.”

Leadership consistency establishes the conditions for adoption and trust, but teams absorb most about a leader in moments of difficulty and crisis, he said, explaining that rapidly changing business conditions often result in competing initiatives and shifting priorities that can weaken execution discipline and dilute operational impact.

Scaling AI requires enterprise-wide structural change

The conversation on culture continued during a session that featured three executives from PwC US: Danielle Phaneuf, Alok Mirchandani, and Roshini Rajan.

According to Mirchandani, most AI efforts remain confined to isolated use cases that do not scale into coordinated execution. “The struggle is not the technology,” he said. “It’s how you move off that use case mentality and actually drive scale and adoption.”

When implemented successfully, AI systems cut across workflows, requiring coordination between business context, data, and execution. That coordination does not sit within a single discipline. It requires individuals who understand the end-to-end process and can direct how AI is applied within it.

Moreover, depth in a single domain is no longer sufficient when outcomes depend on how multiple systems and processes interact. Because of this, broader roles are beginning to emerge to connect those elements in real time, particularly as AI moves into production where decisions and actions must align across the organization.

“The nature of work is fundamentally changing,” said Phaneuf. “It’s not about mastering a single discipline anymore. It’s about understanding the big picture and orchestrating the right outcomes.” Hiring models, training approaches, and team structures are adjusting to reflect that shift.

To that end, performance measurement is aligning with output rather than activity. “It’s no longer about effort,” Rajan said. “It’s about the velocity and throughput you’re driving.”

The evolving economics of AI PCs

Enterprise AI deployment is forcing IT leaders to rethink where workloads reside. The discussion largely centers on cloud versus on-prem, with security concerns around intellectual property, data, and personal information driving renewed interest in private environments.

Device manufacturers are positioning the edge as a third option, introducing new performance and cost dynamics. Charles Thomas, HP’s North American AI channel business manager, discussed the edge versus cloud decision in those terms.

The underlying math is not difficult to follow, he said. Organizations that route every AI task through centralized on-prem or cloud infrastructure face compounding costs as adoption scales.

Local processing, enabled by neural processing units (NPUs), offloads a meaningful share of those workloads before they hit the network. Unlike a CPU, which handles general computing tasks, or a GPU, which excels at graphics and parallel processing, NPUs are purpose-built to accelerate AI and machine learning operations, Thomas said.

The performance baseline for edge AI processing is moving faster than most enterprise procurement cycles can track. Today, NPUs are effectively the market floor, with current commercial devices reaching platform-level AI throughput as high as 180 trillion operations per second (TOPS) when dedicated NPU and integrated graphics acceleration are combined. As a result, the raw compute argument for moving workloads to the device is no longer speculative.
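The "underlying math" Thomas alludes to is a simple break-even calculation. All figures below are invented for illustration, not vendor pricing: a per-query cloud inference cost, a device premium for NPU-capable hardware, and the share of queries the NPU can absorb locally.

```python
# Assumed figures, purely illustrative -- not actual pricing.
cloud_cost_per_query = 0.01    # dollars per inference routed to cloud
device_premium = 200.0         # extra cost of an AI PC over a standard one
queries_per_user_per_day = 200
local_offload_share = 0.5      # fraction of queries the NPU can handle

def breakeven_days() -> float:
    """Days until avoided cloud spend pays back the device premium."""
    daily_savings = (queries_per_user_per_day
                     * local_offload_share
                     * cloud_cost_per_query)
    return device_premium / daily_savings

print(round(breakeven_days()))  # 200 days under these assumptions
```

The structure of the argument, rather than the invented numbers, is the point: cloud costs compound linearly with adoption, while the device premium is paid once.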

Transformation requires business ownership and customer proximity

Transformation outcomes reflect how well organizations align ownership, process design, and execution across business functions. When technology exposes gaps in coordination, it may say more about leadership, vision, and execution than it does about how well new and legacy technologies coexist.

Session moderator James Rinaldi, executive director at UCLA and former CIO at NASA’s Jet Propulsion Laboratory, framed the challenge in terms of scale and complexity, noting that transformation efforts, by definition, span multiple functions with competing priorities and shared dependencies.

His panelists agreed.

“These are not IT projects. They’re business projects with people,” said Keith Golden, former CIO at RGP. “You’ve got to bring the business into these conversations every step of the way.”

Systems behave according to the decisions organizations make about workflows and priorities. Anthony Moses, former CIO and global chief strategy and innovation officer at Yamaha Motor Finance, emphasized the executive responsibility that comes with implementation.

“Everything is like a blank Excel sheet when you buy a platform,” he said. “In reality, you own what the system will do.”

Purchasing a platform is about acquiring potential rather than capability. Translating one into the other requires people who understand the technology and the operational context it is meant to serve. It is the organizational equivalent of a craftsperson who knows not just how to use a tool but how to use it to accomplish a future desired state.

That distinction matters because the vision an IT leader brings to deployment rarely arrives intact at the employee level. A solution shaped around a specific outcome only delivers that outcome if the workforce it is designed to serve understands the intent, internalizes the logic, and has the capacity to work within it effectively.

AI value emerges in targeted use cases

The question of where AI is delivering on its promise moved from structural to operational during a panel focused on the business case for AI. The session drew on perspectives from three practitioners navigating that question in real time: Bhupesh Arora, now at South Jersey Industries; Lucy Avetisyan, CIO at UCLA; and Feroz Merchhiya, CIO and CISO for the City of Santa Monica.

Their collective assessment reflected a broader tension: AI is producing results in well-defined contexts, while organizations struggle to see across their own initiatives clearly enough to scale what is working.

“We have everything from pilots to production,” said Arora. “Some have a true business case.”

Customer service operations, as an example, are producing tangible results, with automation reducing call volumes and improving response times during peak demand while limiting the need for additional hiring. “There’s a hard dollar value to it,” he said.

Arora also pointed to a more ambitious application within the company’s wholesale natural gas trading operation, where manual Excel-based workflows currently support buying, selling, and scheduling decisions across roughly $70 million in annual business activity. The objective, he said, is to automate that process end to end, from generating trading suggestions to executing trade scheduling.

“It’s real business automation with AI,” he said, moving the technology from back-office efficiency to core revenue-generating operations, where the potential returns are considerably higher.

Merchhiya agreed that efficiency alone should not define a sound AI business case. “Not every business case is about cost saving,” he said. “Not every business case is about bottom-line delivery. Each one has its own unique lens that you have to apply.”

It’s an important perspective for a city government navigating fiscal pressure while managing everything from public safety infrastructure to transportation networks.

For Avetisyan, AI adoption at an institution like UCLA requires thinking well beyond functional improvement. Workforce readiness, student preparation, governance, and ethical use all carry their own accountability structures and timelines for measuring return.

“If we’re going to just take and put AI on existing processes, existing work, that’s the worst thing we could do,” she said. Instead, business processes must be rethought to make the most of AI.

AI adoption constrained by trust in data

The relation of AI performance to the quality and reliability of the underlying data was the central theme of a panel that brought together Ilker Taskaya, field CTO at Perforce Software; Marivi Stuchinsky, VP of software engineering at Experian; and Chris Fodera, senior IT director at Qualcomm.

Their collective experience across financial services, semiconductor manufacturing, and enterprise data protection illustrated how differently the data problem presents itself depending on context.

“Without it, you’re just guessing,” Fodera said.

The observation carries particular weight given his background managing data across Qualcomm’s automotive and industrial IoT divisions. In his first two years supporting the automotive unit alone, the volume of sensor data exceeded everything Qualcomm had accumulated in the company’s prior 38 years.

The challenge, though, is to understand which data to trust in which context. Qualcomm’s autonomous driving models trained on San Diego road data perform reliably in Los Angeles. They do not perform reliably in India, where road infrastructure differs fundamentally.

“If you dirty down your model with data that doesn’t belong in it, it’s going to key off things you don’t want it to,” Fodera said.

Stuchinsky described Experian’s approach as drawing a deliberate boundary between consumer-facing data, which does not flow into AI pipelines, and internal operational data, which does. Rather than cleaning petabytes of legacy data before deployment, her team uses a vectorized database architecture that validates data as it is needed.

“This way, when AI points to it, it’s already clean,” she said.
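The validate-on-read pattern Stuchinsky describes can be sketched in a few lines. This is a generic illustration of the lazy-validation idea, not Experian's actual architecture; the record shapes and rules are hypothetical:

```python
# Validate-on-read sketch: rather than cleaning all legacy data up front,
# records are validated (and cached) only when a query first touches them.
validated_cache = {}

def validate(record: dict) -> dict:
    """Minimal checks standing in for real data-quality rules."""
    if not record.get("id") or record.get("value") is None:
        raise ValueError(f"rejecting dirty record: {record}")
    return {"id": record["id"], "value": float(record["value"])}

def fetch(record_id: str, raw_store: dict) -> dict:
    if record_id not in validated_cache:  # validate lazily, exactly once
        validated_cache[record_id] = validate(raw_store[record_id])
    return validated_cache[record_id]

raw = {"r1": {"id": "r1", "value": "3.14"}, "r2": {"id": "r2", "value": None}}
print(fetch("r1", raw))   # clean record served from the validated path
try:
    fetch("r2", raw)
except ValueError as err:
    print("blocked:", err)  # dirty record never reaches the AI pipeline
```

The design trade-off is that validation cost is paid per query path rather than as one enormous upfront cleanup, so only data the AI actually touches needs to be clean.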

Taskaya contextualized the broader stakes from the vendor side. AI has expanded the attack surface and raised the cost of exposure, while inverting the relative value of data and the applications sitting on top of it. As software commoditizes, data becomes the differentiating asset. As a result, protecting and governing data moves from being a compliance function to a source of competitive differentiation.

Investment shifts toward focused AI applications

A venture capital panel moderated by Julie Bort, editor at TechCrunch, brought a market-level perspective to the day’s recurring question about where AI is delivering value. The panel featured Chiraag Deora, principal at Greycroft; Maddi Holman, co-founder and general partner at Daring Ventures; Rob Smith, partner at M13; and Kesar Varma, partner at Upfront.

The session started with a provocative, even skeptical, opening question from Bort, who challenged the panel by characterizing LLMs as “the worst intern I’ve ever hired” and questioning whether AI would ever deliver on the transformational promises Silicon Valley has attached to it.

“AI is not making things better, yet,” Smith said. “At this point it’s about making them faster and more efficient.”

VC investment today, all agreed, is concentrating around defined use cases where outcomes can be measured and scaled. Healthcare revenue cycle management, data security, pharma sales compliance, and government services automation drew attention as areas where structured workflows and clear performance metrics are allowing AI to find traction.

Still, durable competitive advantage requires either owning proprietary data or controlling distribution, Holman said.

“You can’t just be displaying data for someone that could have it themselves,” she said. Smaller, purpose-built models drew consensus as the more defensible next wave, particularly in regulated industries, where hallucination risk carries real consequences and where institutional knowledge that once walked out the door with departing employees can now be encoded and retained.

On risk posture, Smith’s advice reframed how CIOs might approach vendor evaluation.

“Invest in the people building the software, not the software itself,” he said. “The software will become obsolete in 18 months. If you back the right team with the right vision, they will continuously adapt.”

For CIOs, the panel’s experience translates into three posture shifts. On risk assessment, start with the team, not the technology, and ask founders directly about runway and five-year vision.

When it comes to vendor selection they recommended mapping procurement requirements before a pilot begins so both sides understand what a path to contract looks like. On roadmap planning, treat early-stage engagement as a design partnership rather than a vendor evaluation.

In the final analysis, they all agreed, the risk of inaction is no longer smaller than the risk of engagement.