Security | CIO

SAP’s new API policy restricts AI access, draws customer criticism

May 4, 2026, 13:29

With the rise of AI, APIs have once again become increasingly vital tools for fueling transformation. Enterprise software APIs, in particular, provide a critical link for CIOs’ AI strategies, enabling them to extract data from core business systems and feed it into their AI models of choice, for analysis, decision-making, and action.

In response to the rapidly increasing use of APIs by non-SAP systems, enterprise software giant SAP has introduced a new API policy limiting access to the data housed in its systems. According to an official statement, the policy stipulates that only those interfaces listed in the SAP Business Accelerator Hub or in the respective product documentation are considered published APIs.

“Customer and third-party applications must not access, invoke, or interact in any manner with APIs that are not Published APIs,” the policy states.
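Integration teams that want to stay within such a policy can enforce it on their own side before a request ever leaves the building. A minimal sketch, assuming a hand-maintained allow-list; the endpoint paths below are illustrative, not actual SAP interfaces:

```python
# Hypothetical sketch: gate outbound calls against an allow-list of published
# APIs. The endpoint paths are illustrative, not actual SAP interfaces.

PUBLISHED_APIS = {
    "/api/business-partner/v1",  # assumed to be listed in the Accelerator Hub
    "/api/sales-order/v1",
}

def call_sap_api(path: str, session):
    """Refuse any call whose versioned API root is not on the allow-list."""
    root = "/".join(path.split("/")[:4])  # e.g. "/api/sales-order/v1"
    if root not in PUBLISHED_APIS:
        raise PermissionError(f"{path} is not a published API under the policy")
    return session.get(path).json()
```

A real implementation would source the allow-list from the SAP Business Accelerator Hub listing or product documentation rather than hard-coding it.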

‘This is unacceptable.’

While SAP justifies its new API policy as “designed to safeguard solution health” and as a necessary guarantee of technical stability, the policy could jeopardize the security of customers’ strategic plans as well as their innovation capabilities, the German-speaking SAP User Group (DSAG) warns.

“For SAP-to-non-SAP scenarios, this means: They will only be reliably supported where SAP has explicitly published and documented the underlying interfaces,” DSAG Chairman Jens Hungershausen explained in a statement.

Furthermore, the DSAG believes that the SAP Business Accelerator Hub and the vaguely defined product documentation have not yet been clearly established as contractual components. From the customer’s perspective, this necessitates the creation of clear and reliable framework conditions to enable early assessment of the impact of changes, Hungershausen stated.

“The DSAG has long been demanding absolutely reliable contract documents. However, SAP has taken a contrary position, for example with the SAP Business Data Cloud and now with its API Policy,” says Michael Bloch, DSAG board member for licenses, contracts, and support. Customers currently have questions regarding the interpretation of the documentation, and from DSAG’s perspective, there is a need for clarification regarding its contractual classification. “This is unacceptable,” Bloch states.

Cutting off AI system access?

The DSAG points out that potential new pricing models or usage regulations surrounding APIs must be communicated transparently — and early — to ensure planning fidelity for customers and partners. SAP, for example, has already developed a pricing model with its Digital Access model for creating certain document types in indirect usage.

“According to SAP information, there will be a fair-use model. However, the specific details are currently unclear and should be transparently documented in the API policy,” Bloch says.

Another critical point is that SAP links API usage to technical and organizational requirements. Moreover, use of APIs is restricted for certain scenarios, including:

  • Undocumented purposes
  • Systematic or large-scale data extractions
  • In conjunction with use of (semi-)autonomous or generative AI systems

Here, API usage is permitted only if it explicitly takes place within architectures or services provided by SAP.

“Except through and within the limits of SAP-endorsed architectures, data services, or service-specific pathways expressly identified and intended for such purposes, SAP prohibits API use for: (a) interaction or integration with (semi-)autonomous or generative AI systems that plan, select, or execute sequences of API calls, and (b) scraping, harvesting, or systematic and/or large-scale data extraction or replication,” the policy states.

“According to the information available to us, existing customer integrations and authorized partner solutions are not affected,” says DSAG CTO Stefan Nogly. However, he believes this important protection for existing integrations should be explicitly stated in SAP’s API policy.

Nogly points out that many user companies are already working on proofs of concept (PoC) and pilot projects based on the current interpretation of API usage. “From a customer perspective, we see a significant need for clarification and adaptation — especially to avoid disrupting existing business-critical end-to-end processes or making them legally vulnerable,” he says.

Stefan Nogly, DSAG Executive Board Member for Technology: “In an era of increasingly heterogeneous architectures and intensive AI experiments, APIs are a key driver of innovation.”

DSAG

More transparency and transition periods needed

The SAP user group is particularly critical of SAP’s lack of transparency. Its members point out that the new API policy does not clearly document which specific APIs are affected, nor is the extent of the impact clearly defined. “The question is which interfaces are used in the partner solutions,” says DSAG Chairman Hungershausen.

According to DSAG’s understanding, those using official APIs don’t need to take any action, although without contractual safeguards there is no absolute certainty. For some partner companies, however, the effort involved could be significant, and business models could collapse.

“Therefore, it is essential that SAP grants customers more time for the transition,” Hungershausen says. Customers and partners also need concrete technical and organizational support for switching to SAP-supported interfaces.

From DSAG’s perspective, it is crucial that customers are not forced to resort to other solution providers due to a lack of viable alternatives when existing scenarios are limited.

Firewall Daily – The Cyber Express

Hacker Active Well Beyond Context.ai Compromise, Says Vercel CEO

April 23, 2026, 05:35

Vercel, Vercel Breach, APIs, npm Packages

Vercel CEO Guillermo Rauch said in an update today that, after scanning through petabytes of logs from the company's networks and APIs, his security team concluded that the threat actor behind the Vercel breach had been active well beyond Context.ai's compromise.

Rauch said that the "threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers. Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables."

Researchers at Hudson Rock had earlier confirmed that the attack actually began in February, when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.

The latest findings suggest there could be a wider net of victims the threat actor may have phished, and that what we know so far is just the tip of the iceberg - or not.
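The enumeration pattern Rauch describes can, in principle, be surfaced from access logs. A minimal sketch, assuming a hypothetical event schema with `token` and `action` fields; the threshold values are arbitrary illustrations:

```python
from collections import Counter

def flag_suspicious_tokens(events, min_calls=100, env_read_ratio=0.8):
    """Flag access tokens whose call volume is high and dominated by
    environment-variable reads (hypothetical log schema)."""
    calls, env_reads = Counter(), Counter()
    for event in events:  # e.g. {"token": "t1", "action": "env_var.read"}
        calls[event["token"]] += 1
        if event["action"] == "env_var.read":
            env_reads[event["token"]] += 1
    return [token for token, total in calls.items()
            if total >= min_calls and env_reads[token] / total >= env_read_ratio]
```

A production detection pipeline would work over streaming logs and tune the thresholds per workload; this only illustrates the heuristic.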
Also read: Vercel Incident Linked to AI Tool Hack, Internal Access Gained

Vercel Finds Customers Breached in Separate Malware, Social Engineering Attacks

In an official update, the company also stated that it initially identified a limited subset of customers whose non-sensitive environment variables stored on Vercel were compromised. However, a deeper assessment of their network, as well as of environment variable read events in the company's logs, uncovered two additional findings.

"First, we have identified a small number of additional accounts that were compromised as part of this incident," the company noted.

But the main concern is the next finding: "Second, we have uncovered a small number of customer accounts with evidence of prior compromise that is independent of and predates this incident, potentially as a result of social engineering, malware, or other methods." 

The company did not disclose who the attackers were, what the motive was, or the impact on customers, and has yet to respond to these queries from The Cyber Express. It only stated: "In both cases, we have notified the affected customers."

Meanwhile, Rauch said, Vercel had notified other suspected victims and encouraged them to rotate credentials and adopt best practices.

No Compromise of npm Packages

Reports of compromised npm packages have surfaced frequently in recent times. To cover that front, Vercel's security team, in collaboration with GitHub, Microsoft, npm, and Socket, confirmed that no npm packages published by Vercel had been compromised. "There is no evidence of tampering, and we believe the supply chain remains safe," the company said.
Security | CIO

MuleSoft Agent Fabric adds new ways to keep AI agents in line

April 15, 2026, 15:22

Salesforce first sought to tackle AI agent sprawl last year with Agent Fabric, a suite of capabilities and tools inside its MuleSoft AnyPoint Platform. Now, it’s seeking to further rein in unruly AI agents on its platform and those of other vendors too, with new governance tools and deterministic controls.

When enterprises adopt multiple agentic AI products, they can end up with redundant or siloed workflows scattered across teams and platforms, undermining operational efficiency and complicating governance as they try to scale AI safely and responsibly.

Agent Fabric, introduced in September 2025, started out as a place for enterprises to register, view, interconnect and govern agents. In January it added a deterministic scripting tool and the ability to scan for new agents and add them to the registry.

But enterprises still need more help to bring their AI agents under control, so Salesforce is adding more features.

First up is an expansion of the deterministic controls in the form of Agent Script for Agent Broker, an intelligent routing service inside Agent Fabric that is designed to connect agents across domains, dynamically matching user tasks with the best-fit agent. Salesforce said the controls will help developers codify workflows in multi-agent systems in order to ensure consistent and reliable outputs.

Rather than leave probabilistic agents to make all the decisions about how to resolve a problem, introducing an element of unpredictability, Agent Script for Agent Broker enables enterprises to steer some of the decision-making according to predetermined rules that require fewer computing resources than running a large language model.
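The division of labor described above, fixed rules first, model reasoning only as a fallback, can be sketched roughly as follows. The rule table, agent names, and `llm_pick` callback are hypothetical illustrations, not the Agent Script syntax:

```python
# Hypothetical sketch of deterministic-first routing: known task types are
# matched by a fixed rule table; only unmatched tasks fall through to an
# LLM-based broker. Names and the llm_pick callback are illustrative.

ROUTING_RULES = {
    "invoice.create": "finance-agent",
    "ticket.triage": "support-agent",
}

def route(task_type, llm_pick=None):
    agent = ROUTING_RULES.get(task_type)  # cheap, predictable path
    if agent is not None:
        return agent
    if llm_pick is None:
        raise LookupError(f"no rule for {task_type!r} and no LLM fallback")
    return llm_pick(task_type)  # probabilistic fallback, used only when needed
```

The design point is exactly the one Salesforce makes: the dictionary lookup costs almost nothing and always returns the same answer, while the model is consulted only for tasks the rules don't cover.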

That’s welcome news for Robert Kramer, managing partner at KramerERP.

“Pure autonomous agents don’t necessarily work in production as enterprises need to ensure predictable outcomes. The deterministic controls should facilitate a secure handoff of control and rules while still allowing the model to engage in reasoning when it’s appropriate,” he said. “It’s a balance between control and flexibility, which is the norm for most real deployments.”

For Rebecca Wettemann, principal analyst at Valoir, providing both deterministic and probabilistic options within Agent Fabric enables developers and agent builders to take the lower-cost route to more accurate and predictable results from agentic systems.

Enterprises will have to wait to put this deterministic orchestration feature into production, though: Still in beta testing, it won’t be generally available until June 2026.

Centralized LLM governance tackles cost

Beyond orchestration, Salesforce has added a new LLM Governance capability in AI Gateway, the control layer within Agent Fabric that provides centralized visibility of token usage, costs, and data flows for third-party models.

Enterprises will be able to use LLM Governance, now generally available, to help them keep their AI operations on budget, Salesforce said.

This is becoming increasingly important as CIOs seek to bring disparate AI systems under centralized control and justify spiralling AI costs.

Info-Tech Research Group advisory fellow Scott Bickley warned that without centralized governance like this, different teams around a company may choose different models, negotiate their own API contracts, and manage token budgets locally.

“This results in sprawling costs, inconsistent security postures, and no enterprise-wide policy enforcement,” he said. “By positioning AI Gateway as the choke point through which all LLM traffic flows, enterprises gain visibility into AI usage patterns, the models in use, purpose of the usage, and cost data.”
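The choke-point idea can be illustrated with a toy meter through which every model call passes, giving per-team visibility into usage and spend. The model names and per-1K-token prices below are placeholders, not real vendor rates:

```python
from collections import defaultdict

class LLMGatewayMeter:
    """Toy choke-point meter: every model call passes through record(),
    accumulating per-team token usage and estimated cost."""

    PRICE_PER_1K = {"model-a": 0.01, "model-b": 0.03}  # placeholder rates

    def __init__(self):
        self.tokens = defaultdict(int)   # (team, model) -> tokens used
        self.cost = defaultdict(float)   # team -> estimated spend

    def record(self, team, model, tokens):
        self.tokens[(team, model)] += tokens
        self.cost[team] += tokens / 1000 * self.PRICE_PER_1K[model]
```

A real gateway would also enforce budgets and policies at this point rather than only observing; the sketch shows only the visibility half of the argument.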

MCP additions simplify integration

Salesforce is also adding new Model Context Protocol (MCP) features, including MCP Bridge, to make it easier to access legacy APIs, and Informatica-hosted MCPs, which it says will simplify how agents interact with enterprise data and APIs.

These could save developers time and simplify the building of cross-environment, multi-agent systems.

Bickley said MCP Bridge will help enterprises with thousands of legacy APIs (REST, SOAP, GraphQL) built long before MCP existed.

“Agents speaking MCP cannot call those APIs natively so they require wrappers around the API endpoint; this would be a massive engineering lift. MCP Bridge allows these APIs to be exposed as MCP-compatible tools without modifying the underlying code,” he said.
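The wrapping idea, exposing a legacy endpoint as a tool descriptor without modifying the endpoint itself, can be sketched generically. This is not the actual MCP Bridge API, just an illustration of the pattern Bickley describes:

```python
# Hypothetical sketch of the wrapping idea: describe a legacy HTTP endpoint as
# a generic tool (name, description, callable) without modifying the endpoint.

def make_tool(name, method, url, call_http):
    """Expose a legacy HTTP endpoint as a tool descriptor an agent can invoke."""
    return {
        "name": name,
        "description": f"{method} {url}",
        "invoke": lambda params: call_http(method, url, params),
    }

# Stand-in HTTP client for illustration; a real bridge would issue the request.
tool = make_tool("get_order", "GET", "https://legacy.example.com/orders",
                 lambda method, url, params: {"status": "ok", "params": params})
```

The value of such a bridge is that the descriptor, not the legacy service, is what the agent sees, so thousands of REST, SOAP, or GraphQL endpoints can be surfaced without touching their code.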

And Wettemann said Informatica-hosted MCPs will further reduce development overhead by bringing built-in data quality and governance capabilities into agent workflows, particularly critical for enterprises in regulated industries and those with heightened risk concerns.

But Bickley added a note of caution. “APIs can behave oddly and have their own nuanced behavior,” he said. “Enterprises should test how MCP Bridge handles edge cases.”

Informatica-hosted MCPs will not be a miracle solution either, he warned: “Even if the Informatica data quality and governance capabilities are cleanly integrated in the Agent Fabric registry, these are not instantaneous operations. Checking data fields for accuracy, deduplication, and cross-system matching takes time and carries latency measured in milliseconds or even multiple seconds, and that is pre-integration.”

A pivot for MuleSoft?

Bickley sees the updates as a broader strategy for Salesforce to reposition MuleSoft, which it acquired in 2018 for $5.7 billion, from a traditional API integration platform to an infrastructure layer for enterprise AI agents.

By layering orchestration, governance, and connectivity into Agent Fabric, Salesforce appears to be trying to position MuleSoft as the system of record for how agents are discovered, routed, and governed across the enterprise, deepening its role beyond API management into core AI infrastructure, he said.

Not all CIOs will welcome that move.

“If your agent control plane runs on Agent Fabric, switching costs rise materially, and the more agents you register, the more orchestration rules and governance policies defined, the more difficult it becomes to move to an alternative solution,” the analyst said.

As with any critical infrastructure dependency, “CIOs need to ask: What is the exit path? What components of Agent Fabric are portable and what is locked in? What’s the pricing model? What is the integration depth with non-Salesforce agents and data sources?” he said.

For now, though, enterprises have plenty of AI agent orchestration options to choose from.

This article first appeared on InfoWorld.

DCiber

Cybersecurity: why is protection still seen as an expense in the financial sector?

December 8, 2025, 11:37

The financial sector ranks second in the global ranking of cyberattacks, according to a Verizon report. The document recorded 3,336 incidents in the segment in 2025, with 927 resulting in confirmed data leaks. In Latin America, there were 657 cases, 413 of them involving leaks. Brazil follows the trend, with the Central Bank reporting 12 incidents of Pix key leaks in 2024 alone. The numbers show the exposure of a segment that handles customers' assets and information.

The recurrence of attacks raises a question about how leadership approaches security. Some managers see the protection of systems and data as a cost center, not as a pillar sustaining the business. This view ignores that the cost of a security incident is, on average, higher than the preventive investment. IBM's 2024 "Cost of a Data Breach" report indicates that the average loss from an attack in the financial sector was US$6.08 million.

"Cybersecurity is treated as an expense by companies that do not yet have a high degree of maturity in information security. Companies already at a higher level see cybersecurity as an investment," says Rodrigo Rocha, solutions architecture manager at CG One, a technology company focused on information security, network protection, and integrated risk management.

Evolving risks and their business impact

The risks to financial institutions range from denial-of-service (DDoS) attacks, which aim at platform unavailability and reputational damage, to information theft and the diversion of funds from customer accounts. "In recent years, attackers' tactics have grown in complexity, with the development of ransomware such as LockBit and Conti, supply-chain attacks that compromise fintech authentication platforms, API exploitation, and the use of generative artificial intelligence and deepfakes in social engineering campaigns," explains Rocha.

A successful attack can result in loss of credibility with customers and the market, in addition to direct financial losses. There is also the regulatory impact, with the possibility of fines from the National Data Protection Authority (ANPD) for non-compliance with the General Personal Data Protection Law (LGPD).

A defense strategy as the way forward

No single technology works as a definitive solution for protecting the financial ecosystem. Effective defense lies in implementing a medium- and long-term plan aimed at continuously raising information security maturity.

For Rocha, organizations can use market frameworks to assess their current maturity level and chart an evolution plan. "Protecting a company, in any segment, depends on executing a structured plan, with partners and solutions that help on that journey," concludes the CG One specialist.

DCiber

Data at risk? Learn how to protect your APIs and avoid critical exposure

December 6, 2025, 11:34

Recent reports reveal that vulnerabilities in APIs (Application Programming Interfaces) are responsible for a significant share of security failures in modern systems. An Akamai study found that 84% of security professionals faced API-related incidents between 2023 and 2024, reflecting a continuous increase over the past three years. This growing dependence has raised numerous security concerns.

APIs play an extremely important role in communication between systems and in the exchange of data, being essential to the operation of mobile apps, e-commerce platforms, and cloud services. However, the broad exposure of these interfaces to networks increases the risk of attack; common threats include weak authentication and authorization, exposure of confidential data, and a lack of adequate access control.

According to Salt Security's 2025 State of API Security Report, the rapid expansion of API ecosystems, driven by cloud migration, platform integration, and data monetization, is outpacing existing security measures and exposing companies to even greater risk. "As the interconnection of applications and online services expands, APIs have become attractive targets for cybercriminals, making the protection of these interfaces increasingly strategic for data security," comments Rogerio Rutledge, DPO at Runtalent, a leading technology and digital services company.

The executive stresses that, to protect APIs and prevent them from being exploited by cybercriminals, companies must adopt robust security strategies. "It is necessary to understand that APIs are open doors to an organization's data. Protecting these interfaces therefore demands a proactive approach that goes beyond basic authentication and data encryption. Monitoring and auditing APIs in real time, implementing strict access controls, and adopting secure coding practices are fundamental measures to guarantee the integrity and confidentiality of information," he explains.

Among the main practices recommended to strengthen API security, Rutledge highlights:

  1. Strong authentication and authorization: implement multi-factor authentication (MFA) and use security tokens such as OAuth 2.0 to ensure that only authorized users access the APIs.
  2. Input validation and sanitization: rigorously validate all inputs and outputs to prevent the injection of malicious code.
  3. Robust encryption: ensure that all sensitive data, both in transit and at rest, is encrypted with secure protocols such as TLS and AES.
  4. Continuous monitoring and auditing: implement real-time monitoring systems to detect suspicious activity and perform regular audits to ensure the APIs have no vulnerabilities.
  5. Regular security testing: continuously run penetration tests and vulnerability analyses on the APIs to identify and fix security flaws before they can be exploited.
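As a minimal illustration of the first two items, a sketch of token checking and input validation at an API boundary. The token store and ID pattern are hypothetical stand-ins for a real OAuth 2.0 validator and schema validation:

```python
import re

VALID_TOKENS = {"tok-123"}  # stand-in for a real OAuth 2.0 token validator
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,32}$")  # illustrative input rule

def handle_request(token, customer_id):
    """Reject unauthenticated callers and malformed input before doing work."""
    if token not in VALID_TOKENS:
        return {"status": 401, "error": "invalid token"}
    if not ID_PATTERN.fullmatch(customer_id):
        return {"status": 400, "error": "malformed customer_id"}
    return {"status": 200, "customer_id": customer_id}
```

The ordering matters: authentication is checked first, and input is validated against an explicit allow-list pattern rather than by trying to blacklist dangerous characters.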

The manager emphasizes that, as dependence on APIs for the operation of systems and services grows, companies must be prepared for the rising security challenges this scenario brings. "Investing in effective protection solutions is fundamental to safeguarding sensitive data and ensuring business continuity. Neglecting the protection of these interfaces can expose organizations to financial losses, reputational damage, and irreversible loss of trust. Therefore, adopting a preventive, automated approach aligned with industry best practices is the safest path to maintaining system integrity and ensuring companies' digital resilience," the executive concludes.
