Post-quantum encryption for Cloudflare IPsec is generally available

April 30, 2026, 11:00 · Sharon Goldberg · Amos Paul

While more than two-thirds of human-generated TLS traffic to Cloudflare is already protected by post-quantum cryptography, the world of site-to-site networking has been a different story. For years, the IPsec community remained caught between the high bar of Internet-scale interoperability and the niche requirements of specialized hardware. That gap is now closing. 

Earlier this month, we announced that Cloudflare has moved its target for full post-quantum security forward to 2029, spurred by several recent advances in quantum computing. To advance that goal, we’ve made post-quantum encryption in Cloudflare IPsec generally available.

Using the new IETF draft for hybrid ML-KEM (FIPS 203), we’ve successfully tested interoperability with branch connectors from Fortinet and Cisco — meaning you can start protecting your wide-area network (WAN) against harvest-now-decrypt-later attacks today using hardware you already have.

This post explains how we implemented the new hybrid IPsec handshake, why it took four years longer to land than its TLS counterpart, and how the industry is finally consolidating around a standard that works at Internet scale.

Cloudflare IPsec

Cloudflare IPsec is a WAN Network-as-a-Service that replaces legacy network architectures by connecting data centers, branch offices, and cloud VPCs to Cloudflare's global IP Anycast network. Customers get simplified configuration, high availability (if a data center becomes unavailable, traffic is automatically rerouted to the nearest healthy one), and the scale of Cloudflare's global network. This is done through encrypted IPsec tunnels that support site-to-site WAN connectivity, outbound Internet connections, and connectivity to the Cloudflare One SASE platform.

Post-quantum encryption in IPsec

Cloudflare IPsec now uses post-quantum encryption with hybrid ML-KEM (FIPS 203) to stop harvest-now-decrypt-later attacks. These are attacks in which an adversary records encrypted traffic today and decrypts it after Q-Day, the point at which quantum computers become powerful enough to break the classical public-key cryptography used across the Internet. Harvest-now-decrypt-later attacks are a growing concern for organizations as Q-Day approaches faster than expected.

ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism) is a post-quantum cryptography algorithm that is based on mathematical assumptions that are not known to be vulnerable to attacks by quantum computers. It does not require special hardware or a dedicated physical link between sender and receiver. ML-KEM is intentionally designed to be implemented in software across standard processors to provide post-quantum encryption of network traffic. 

The IETF draft draft-ietf-ipsecme-ikev2-mlkem specifies post-quantum encryption for IPsec using hybrid ML-KEM, which combines the well-understood security of classical Diffie-Hellman with the post-quantum security of ML-KEM in a single, standards-compliant handshake. Specifically, a classical Diffie-Hellman exchange runs first, its derived key encrypts a second exchange that runs ML-KEM, and the outputs of both are mixed into the session keys that secure IPsec data plane traffic sent using the Encapsulating Security Payload (ESP) protocol.

Our interoperable implementation 

Earlier, we announced the closed beta of our implementation of draft-ietf-ipsecme-ikev2-mlkem in production in our Cloudflare IPsec product, and tested it against a reference implementation (strongSwan). Now that we have made this implementation generally available, we have also confirmed interoperability with several other vendors, including Cisco and Fortinet, which is a big win for this new standard.

Cisco: Customers using Cisco 8000 Series Secure Routers (version 26.1.1 or later) as their branch connector can now establish post-quantum Cloudflare IPsec tunnels per draft-ietf-ipsecme-ikev2-mlkem.

Fortinet: Customers using Fortinet FortiOS 7.6.6 and later as their branch connector can now establish post-quantum Cloudflare IPsec tunnels to Cloudflare's global network per draft-ietf-ipsecme-ikev2-mlkem.

The importance of being interoperable

Given that upgrading cryptography is hard and can take years, our 2029 target date for a full update to post-quantum cryptography is going to require concentrated effort. That’s why we hope the IPsec community continues to focus on the development of interoperable standards like draft-ietf-ipsecme-ikev2-mlkem.

Let us explain why these standards are vitally important. A full specification for hybrid ML-KEM in IPsec, draft-ietf-ipsecme-ikev2-mlkem, became available only in late 2025. That's roughly four years after support for hybrid ML-KEM landed in TLS. (In fact, Cloudflare turned on hybrid post-quantum key agreement with TLS in 2022, even before NIST finalized the standardization of ML-KEM, because the TLS community quickly converged on a single, interoperable approach and pushed it into production. Today more than two-thirds of the human-generated TLS traffic to Cloudflare's network is protected with hybrid ML-KEM.)

The four-year delay is likely due in part to the IPsec community's continued interest in Quantum Key Distribution (QKD), as codified in RFC 8784, published in 2020. We've written before about why QKD is not part of our post-quantum strategy: QKD requires specialized hardware and a dedicated physical link between the two parties, which fundamentally means it will not operate at Internet scale. Also, QKD does not provide authentication, so you still need post-quantum cryptography anyway to stop active attackers. It’s difficult to find implementations of QKD that interoperate across vendors.   

The U.S. NSA, Germany's BSI, and the UK's NCSC have all warned against solely relying on QKD. Post-quantum cryptography, by contrast, runs on the hardware you already have, authenticates the parties at both ends, and works end-to-end across the Internet. 

RFC 9370, published in 2023, opened the door to post-quantum cryptography in IPsec, allowing up to seven key exchanges to be run in parallel with classical Diffie-Hellman. However, RFC 9370 did not specify which ciphersuites should be used in these parallel key exchanges. In the absence of that specification, some vendors shipped early implementations under RFC 9370 before the hybrid ML-KEM draft was available, defining their own ciphersuites, including some that are not NIST-standardized. This is exactly the kind of “ciphersuite bloat” NIST SP 800-52r2 warned against. And the risks to interoperability have played out in practice: Cloudflare IPsec does not yet interoperate with Palo Alto Networks' RFC 9370–based implementation, because it was launched before draft-ietf-ipsecme-ikev2-mlkem was available.

Fortunately, we now have draft-ietf-ipsecme-ikev2-mlkem that fills in the gaps in RFC 9370, specifying hybrid ML-KEM as one of the key exchange mechanisms that can be operated in parallel with classical Diffie-Hellman. We hope to add Palo Alto Networks to the list of interoperable post-quantum branch connectors as the industry continues to consolidate around draft-ietf-ipsecme-ikev2-mlkem.

But the journey towards interoperable post-quantum IPsec standards is not over yet. While draft-ietf-ipsecme-ikev2-mlkem supports post-quantum encryption, we still need IPsec standards for post-quantum authentication, so that we can stop attacks by quantum adversaries on live systems after Q-Day. Given the shortened timeline for full post-quantum readiness, we hope the IPsec community will continue to focus on interoperable PQC implementations, rather than diverting focus to niche use cases with QKD.

Towards an interoperable post-quantum Internet

At Cloudflare, we’re helping make a secure and post-quantum Internet accessible to everyone, without specialized hardware and at no extra cost to our customers. Post-quantum Cloudflare IPsec is one more step on our path to full post-quantum security by 2029, and we’re doing it in a way that ensures that the Internet remains open and interoperable for years to come. 

Scaling MCP adoption: Our reference architecture for simpler, safer and cheaper enterprise deployments of MCP

April 14, 2026, 10:00

We at Cloudflare have aggressively adopted Model Context Protocol (MCP) as a core part of our AI strategy. This shift has moved well beyond our engineering organization, with employees across product, sales, marketing, and finance teams now using agentic workflows to drive efficiency in their daily tasks. But the adoption of agentic workflows with MCP is not without its security risks. These include authorization sprawl, prompt injection, and supply chain risks. To secure this broad company-wide adoption, we have integrated a suite of security controls from both our Cloudflare One (SASE) platform and our Cloudflare Developer platform, allowing us to govern AI usage with MCP without slowing down our workforce.

In this blog we’ll walk through our own best practices for securing MCP workflows by putting different parts of our platform together to create a unified security architecture for the era of autonomous AI. We’ll also share two new concepts that support enterprise MCP deployments.

We also talk about how our organization approached deploying MCP, and how we built out our MCP security architecture using Cloudflare products including remote MCP servers, Cloudflare Access, MCP server portals, and AI Gateway.

Remote MCP servers provide better visibility and control

MCP is an open standard that enables developers to build a two-way connection between AI applications and the data sources they need to access. In this architecture, the MCP client is the integration point with the LLM or other AI agent, and the MCP server sits between the MCP client and the corporate resources.

The separation between MCP clients and MCP servers allows agents to autonomously pursue goals and take actions while maintaining a clear boundary between the AI (integrated at the MCP client) and the credentials and APIs of the corporate resource (integrated at the MCP server). 

Our workforce at Cloudflare is constantly using MCP servers to access information in various internal resources, including our project management platform, our internal wiki, documentation and code management platforms, and more. 

Very early on, we realized that locally-hosted MCP servers were a security liability. Local MCP server deployments may rely on unvetted software sources and versions, which increases the risk of supply chain attacks or tool injection attacks. They prevent IT and security administrators from administering these servers, leaving it up to individual employees and developers to choose which MCP servers they want to run and how they want to keep them up to date. This is a losing game.

Instead, we have a centralized team at Cloudflare that manages our MCP server deployment across the enterprise. This team built a shared MCP platform inside our monorepo that provides governed infrastructure out of the box. When an employee wants to expose an internal resource via MCP, they first get approval from our AI governance team, and then they copy a template, write their tool definitions, and deploy, all the while inheriting default-deny write controls with audit logging, auto-generated CI/CD pipelines, and secrets management for free. This means standing up a new governed MCP server takes minutes of scaffolding. The governance is baked into the platform itself, which is what allowed adoption to spread so quickly.

Our CI/CD pipeline deploys them as remote MCP servers on custom domains on Cloudflare’s developer platform. This gives us visibility into which MCP servers are being used by our employees, while maintaining control over software sources. As an added bonus, every remote MCP server on the Cloudflare developer platform is automatically deployed across our global network of data centers, so MCP servers can be accessed by our employees with low latency, regardless of where they might be in the world.

Cloudflare Access provides authentication

Some of our MCP servers sit in front of public resources, like our Cloudflare documentation MCP server or Cloudflare Radar MCP server, and thus we want them to be accessible to anyone. But many of the MCP servers used by our workforce are sitting in front of our private corporate resources. These MCP servers require user authentication to ensure that they are off limits to everyone but authorized Cloudflare employees. To achieve this, our monorepo template for MCP servers integrates Cloudflare Access as the OAuth provider. Cloudflare Access secures login flows and issues access tokens to resources, while acting as an identity aggregator that verifies end user single-sign on (SSO), multifactor authentication (MFA), and a variety of contextual attributes such as IP addresses, location, or device certificates. 

MCP server portals centralize discovery and governance

MCP server portals unify governance and control for all AI activity.

As the number of our remote MCP servers grew, we hit a new wall: discovery. We wanted to make it easy for every employee (especially those who are new to MCP) to find and work with all the MCP servers that are available to them. Our MCP server portals product provided a convenient solution. The employee simply connects their MCP client to the MCP server portal, and the portal immediately reveals every internal and third-party MCP server they are authorized to use.

Beyond this, our MCP server portals provide centralized logging, consistent policy enforcement, and data loss prevention (DLP) guardrails. Our administrators can see who logged into which MCP portal and create DLP rules that prevent certain data, like personally identifiable information (PII), from being shared with certain MCP servers.

We can also create policies that control who has access to the portal itself, and what tools from each MCP server should be exposed. For example, we could set up one MCP server portal that is only accessible to employees that are part of our finance group that exposes just the read-only tools for the MCP server in front of our internal code repository. Meanwhile, a different MCP server portal, accessible only to employees on their corporate laptops that are in our engineering team, could expose more powerful read/write tools to our code repository MCP server.

An overview of our MCP server portal architecture is shown above. The portal supports both remote MCP servers hosted on Cloudflare, and third-party MCP servers hosted anywhere else. What makes this architecture uniquely performant is that all these security and networking components run on the same physical machine within our global network. When an employee's request moves through the MCP server portal, a Cloudflare-hosted remote MCP server, and Cloudflare Access, their traffic never needs to leave the same physical machine. 

Code Mode with MCP server portals reduces costs

After months of high-volume MCP deployments, we’ve paid out our fair share of tokens. We’ve also started to think most people are doing MCP wrong.

The standard approach to MCP requires defining a separate tool for every API operation that is exposed via an MCP server. But this static and exhaustive approach quickly exhausts an agent’s context window, especially for large platforms with thousands of endpoints.

We previously wrote about how we used server-side Code Mode to power Cloudflare’s MCP server, allowing us to expose the thousands of endpoints in the Cloudflare API while reducing token use by 99.9%. The Cloudflare MCP server exposes just two tools: a search tool lets the model write JavaScript to explore what’s available, and an execute tool lets it write JavaScript to call the tools it finds. The model discovers what it needs on demand, rather than receiving everything upfront.

We like this pattern so much, we had to make it available for everyone. So we have now launched the ability to use the “Code Mode” pattern with MCP server portals. Now you can front all of your MCP servers with a centralized portal that performs audit controls and progressive tool disclosure, in order to reduce token costs.

Here is how it works. Instead of exposing every tool definition to a client, all of your underlying MCP servers collapse into just two MCP portal tools: portal_codemode_search and portal_codemode_execute. The search tool gives the model access to a codemode.tools() function that returns all the tool definitions from every connected upstream MCP server. The model then writes JavaScript to filter and explore these definitions, finding exactly the tools it needs without every schema being loaded into context. The execute tool provides a codemode proxy object where each upstream tool is available as a callable function. The model writes JavaScript that calls these tools directly, chaining multiple operations, filtering results, and handling errors in code. All of this runs in a sandboxed environment on the MCP server portal, powered by Dynamic Workers.

Here is an example of an agent that needs to find a Jira ticket and update it with information from Google Drive. It first searches for the right tools:

// portal_codemode_search
async () => {
 const tools = await codemode.tools();
 return tools
  .filter(t => t.name.includes("jira") || t.name.includes("drive"))
  .map(t => ({ name: t.name, params: Object.keys(t.inputSchema.properties || {}) }));
}

The model now knows the exact tool names and parameters it needs, without the full schemas of tools ever entering its context. It then writes a single execute call to chain the operations together:

// portal_codemode_execute
async () => {
 const tickets = await codemode.jira_search_jira_with_jql({
  jql: 'project = BLOG AND status = "In Progress"',
  fields: ["summary", "description"]
 });
 const doc = await codemode.google_workspace_drive_get_content({
  fileId: "1aBcDeFgHiJk"
 });
 await codemode.jira_update_jira_ticket({
  issueKey: tickets[0].key,
  fields: { description: tickets[0].description + "\n\n" + doc.content }
 });
 return { updated: tickets[0].key };
}

This is just two tool calls. The first discovers what's available, the second does the work. Without Code Mode, this same workflow would have required the model to receive the full schemas of every tool from both MCP servers upfront, and then make three separate tool invocations.

Let’s put the savings in perspective: when our internal MCP server portal is connected to just four of our internal MCP servers, it exposes 52 tools that consume approximately 9,400 tokens of context just for their definitions. With Code Mode enabled, those 52 tools collapse into 2 portal tools consuming roughly 600 tokens, a 94% reduction. And critically, this cost stays fixed. As we connect more MCP servers to the portal, the token cost of Code Mode doesn’t grow.
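As a quick sanity check on the figures above, the reduction works out as follows (the token counts are the approximate ones reported in the text):

```javascript
// Verify the claimed Code Mode savings: ~9,400 tokens of tool
// definitions collapse to ~600 tokens for the 2 portal tools.
const before = 9400; // approx. context cost of 52 tool definitions
const after = 600;   // approx. context cost of the 2 portal tools
const reduction = Math.round((1 - after / before) * 100);
console.log(`${reduction}% reduction`); // 94% reduction
```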

Code Mode can be activated on an MCP server portal by adding a query parameter to the URL. Instead of connecting to your portal over its usual URL (e.g. https://myportal.example.com/mcp), you attach ?codemode=search_and_execute to the URL (e.g. https://myportal.example.com/mcp?codemode=search_and_execute).
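If you build the portal URL programmatically, the standard URL API handles the query parameter cleanly (the hostname here is the placeholder from the example above, not a real portal):

```javascript
// Turn a base MCP portal URL into its Code Mode variant.
const portal = new URL("https://myportal.example.com/mcp");
portal.searchParams.set("codemode", "search_and_execute");
console.log(portal.href);
// https://myportal.example.com/mcp?codemode=search_and_execute
```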

AI Gateway provides extensibility and cost controls

We aren’t done yet. We plug AI Gateway into our architecture by positioning it on the connection between the MCP client and the LLM. This allows us to quickly switch between various LLM providers (to prevent vendor lock-in) and to enforce cost controls (by limiting the number of tokens each employee can burn through). The full architecture is shown below.

Cloudflare Gateway discovers and blocks shadow MCP

Now that we’ve provided governed access to authorized MCP servers, let’s look into dealing with unauthorized MCP servers. We can perform shadow MCP discovery using Cloudflare Gateway. Cloudflare Gateway is our comprehensive secure web gateway that provides enterprise security teams with visibility and control over their employees’ Internet traffic.

We can use the Cloudflare Gateway API to perform a multi-layer scan to find remote MCP servers that are not being accessed via an MCP server portal. This is possible using a variety of existing Gateway and Data Loss Prevention (DLP) selectors, including:

  • Using the Gateway httpHost selector to scan for

    • known MCP server hostnames (like mcp.stripe.com)

    • mcp.* subdomains, using wildcard hostname patterns

  • Using the Gateway httpRequestURI selector to scan for MCP-specific URL paths like /mcp and /mcp/sse

  • Using DLP-based body inspection to find MCP traffic, even if that traffic uses URIs that do not contain the telltale mentions of mcp or sse. Specifically, we use the fact that MCP uses JSON-RPC over HTTP, which means every request contains a "method" field with values like "tools/call", "prompts/get", or "initialize". Here are some regex rules that can be used to detect MCP traffic in the HTTP body:

const DLP_REGEX_PATTERNS = [
  {
    name: "MCP Initialize Method",
    regex: '"method"\\s{0,5}:\\s{0,5}"initialize"',
  },
  {
    name: "MCP Tools Call",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/call"',
  },
  {
    name: "MCP Tools List",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/list"',
  },
  {
    name: "MCP Resources Read",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/read"',
  },
  {
    name: "MCP Resources List",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/list"',
  },
  {
    name: "MCP Prompts List",
    regex: '"method"\\s{0,5}:\\s{0,5}"prompts/(list|get)"',
  },
  {
    name: "MCP Sampling Create Message",
    regex: '"method"\\s{0,5}:\\s{0,5}"sampling/createMessage"',
  },
  {
    name: "MCP Protocol Version",
    regex: '"protocolVersion"\\s{0,5}:\\s{0,5}"202[4-9]',
  },
  {
    name: "MCP Notifications Initialized",
    regex: '"method"\\s{0,5}:\\s{0,5}"notifications/initialized"',
  },
  {
    name: "MCP Roots List",
    regex: '"method"\\s{0,5}:\\s{0,5}"roots/list"',
  },
];
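To see how these patterns fire against real MCP traffic, here is a small self-contained sketch that matches a JSON-RPC request body against two of the patterns from the list above. The `matchMcpPatterns` helper and the sample request are our own illustrations, not part of any Gateway API:

```javascript
// Two of the DLP patterns from the list above, reproduced here so the
// example is self-contained.
const patterns = [
  { name: "MCP Initialize Method", regex: '"method"\\s{0,5}:\\s{0,5}"initialize"' },
  { name: "MCP Tools Call", regex: '"method"\\s{0,5}:\\s{0,5}"tools/call"' },
];

// Hypothetical helper: return the names of all patterns matching a body.
function matchMcpPatterns(body) {
  return patterns
    .filter((p) => new RegExp(p.regex).test(body))
    .map((p) => p.name);
}

// A typical MCP tool invocation is JSON-RPC over HTTP, so the body
// carries a "method" field that the regexes key on.
const body = JSON.stringify({
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_weather", arguments: { city: "Lisbon" } },
});

console.log(matchMcpPatterns(body)); // [ 'MCP Tools Call' ]
```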

The Gateway API supports additional automation. For example, one can use the custom DLP profile defined above to block or redirect traffic, or simply to log and inspect MCP payloads. Putting this together, Gateway can provide comprehensive detection of unauthorized remote MCP servers accessed via an enterprise network.

For more information on how to build this out, see this tutorial.

Public-facing MCP Servers are protected with AI Security for Apps

So far, we’ve been focused on protecting our workforce’s access to our internal MCP servers. But, like many other organizations, we also have public-facing MCP servers that our customers can use to agentically administer and operate Cloudflare products. These MCP servers are hosted on Cloudflare’s developer platform. (You can find a list of individual MCPs for specific products here, or refer back to our new approach for providing more efficient access to the entire Cloudflare API using Code Mode.)

We believe that every organization should publish official, first-party MCP servers for their products. The alternative is that your customers source unvetted servers from public repositories where packages may contain dangerous trust assumptions, undisclosed data collection, and any range of unsanctioned behaviors. By publishing your own MCP servers, you control the code, update cadence, and security posture of the tools your customers use.

Since every remote MCP server is an HTTP endpoint, we can put it behind the Cloudflare Web Application Firewall (WAF). Customers can enable the AI Security for Apps feature within the WAF to automatically inspect inbound MCP traffic for prompt injection attempts, sensitive data leakage, and topic classification. Public-facing MCP servers are protected just like any other web API.

The future of MCP in the enterprise

We hope our experience, products, and reference architectures will be useful to other organizations as they continue along their own journey towards broad enterprise-wide adoption of MCP.

We’ve secured our own MCP workflows by: 

  • Offering our developers a templated framework for building and deploying remote MCP servers on our developer platform using Cloudflare Access for authentication

  • Ensuring secure, identity-based access to authorized MCP servers by connecting our entire workforce to MCP server portals

  • Controlling costs using AI Gateway to mediate access to the LLMs powering our workforce’s MCP clients, and using Code Mode in MCP server portals to reduce token consumption and context bloat

  • Discovering shadow MCP usage with Cloudflare Gateway

For organizations advancing on their own enterprise MCP journeys, we recommend starting by putting your existing remote and third-party MCP servers behind Cloudflare MCP server portals and enabling Code Mode, to start benefiting from cheaper, safer, and simpler enterprise deployments of MCP.

Acknowledgements: This reference architecture and blog represents the work of many people across many different roles and business units at Cloudflare. This is just a partial list of contributors: Ann Ming Samborski, Kate Reznykova, Mike Nomitch, James Royal, Liam Reese, Yumna Moazzam, Simon Thorpe, Rian van der Merwe, Rajesh Bhatia, Ayush Thakur, Gonzalo Chavarri, Maddy Onyehara, and Haley Campbell.

Everything you need to know about NIST’s new guidance in “SP 1800-35: Implementing a Zero Trust Architecture”

June 19, 2025, 10:00

For decades, the United States National Institute of Standards and Technology (NIST) has been guiding industry efforts through the many publications in its Computer Security Resource Center. NIST has played an especially important role in the adoption of Zero Trust architecture, through its series of publications that began with NIST SP 800-207: Zero Trust Architecture, released in 2020.

NIST has released another Special Publication in this series, SP 1800-35, titled "Implementing a Zero Trust Architecture (ZTA)," which aims to provide practical steps and best practices for deploying ZTA across various environments. NIST’s publications about ZTA have been extremely influential across the industry, but are often lengthy and highly detailed, so this blog provides a short, easier-to-read summary of NIST’s latest guidance on ZTA.

And so, in this blog post:

  • We summarize the key items you need to know about this new NIST publication, which presents a reference architecture for Zero Trust Architecture (ZTA) along with a series of “Builds” that demonstrate how different products from various vendors can be combined to construct a ZTA that complies with the reference architecture.

  • We show how Cloudflare’s Zero Trust product suite can be integrated with offerings from other vendors to support a Zero Trust Architecture that maps to the NIST’s reference architecture.

  • We highlight a few key features of Cloudflare’s Zero Trust platform that are especially valuable to customers seeking compliance with NIST’s ZTA reference architecture, including compliance with FedRAMP and new post-quantum cryptography standards.

Let’s dive into NIST’s special publication!

Overview of SP 1800-35

In SP 1800-35, NIST reminds us that:

A zero-trust architecture (ZTA) enables secure authorized access to assets — machines, applications and services running on them, and associated data and resources — whether located on-premises or in the cloud, for a hybrid workforce and partners based on an organization’s defined access policy.

NIST uses the term Subject to refer to entities (e.g., employees, developers, devices) that require access to Resources (e.g., computers, databases, servers, applications). SP 1800-35 focuses on developing and demonstrating various ZTA implementations that allow Subjects to access Resources. Specifically, the reference architecture in SP 1800-35 focuses mainly on EIG, or “Enhanced Identity Governance”, a specific approach to Zero Trust Architecture, which is defined by NIST in SP 800-207 as follows:

For [the EIG] approach, enterprise resource access policies are based on identity and assigned attributes. 

The primary requirement for [R]esource access is based on the access privileges granted to the given [S]ubject. Other factors such as device used, asset status, and environmental factors may alter the final confidence level calculation … or tailor the result in some way, such as granting only partial access to a given [Resource] based on network location.

Individual [R]esources or [policy enforcement points (PEP)] must have a way to forward requests to a policy engine service or authenticate the [S]ubject and approve the request before granting access.

While there are other approaches to ZTA mentioned in the original NIST SP 800-207, we omit those here because SP 1800-35 focuses mostly on EIG.

The ZTA reference architecture from SP 1800-35 models the EIG approach as a set of logical components, shown in the figure below. Each component in the reference architecture does not necessarily correspond directly to a physical (hardware or software) component or a product sold by a single vendor, but rather to the logical functionality of the component.

Figure 1: General ZTA Reference Architecture. Source: NIST, Special Publication 1800-35, "Implementing a Zero Trust Architecture (ZTA)”, 2025.

The logical components in the reference architecture are all related to the implementation of policy. Policy is crucial for ZTA because the whole point of a ZTA is to apply policies that determine who has access to what, when and under what conditions.

The core components of the reference architecture are as follows:

Policy Enforcement Point (PEP)

The PEP protects the “trust zones” that host enterprise Resources, and handles enabling, monitoring, and eventually terminating connections between Subjects and Resources.  You can think of the PEP as the dataplane that supports the Subject’s access to the Resources.

Policy Engine (PE)

The PE handles the ultimate decision to grant, deny, or revoke access to a Resource for a given Subject, and calculates the trust scores/confidence levels and ultimate access decisions based on enterprise policy and information from supporting components. 

Policy Administrator (PA)

The PA executes the PE’s policy decision by sending commands to the PEP to establish and terminate the communications path between the Subject and the Resource.

Policy Decision Point (PDP)

The PDP is where the decision as to whether or not to permit a Subject to access a Resource is made. The PDP includes the Policy Engine (PE) and the Policy Administrator (PA). You can think of the PDP as the control plane that controls the Subject’s access to the Resources.

The PDP operates on inputs from Policy Information Points (PIPs) which are supporting components that provide critical data and policy rules to the Policy Decision Point (PDP).

Policy Information Point (PIP)

The PIPs provide various types of telemetry and other information needed for the PDP to make informed access decisions.  Some PIPs include:

  • ICAM, or Identity, Credential, and Access Management, covering user authentication, single sign-on, user groups, and access control features that are typically offered by Identity Providers (IdPs) like Okta, Azure AD, or Ping Identity.
  • Endpoint security includes endpoint detection and response (EDR) or endpoint protection platforms (EPP) that protect end-user devices like laptops and mobile devices. An EPP primarily focuses on preventing known threats using features like antivirus protection, while an EDR actively detects and responds to threats that may have already breached initial defenses using forensics, behavioral analysis, and incident response tools. EDR and EPP products are offered by vendors like CrowdStrike, Microsoft, SentinelOne, and more.
  • Security Analytics and Data Security products use data collection, aggregation, and analysis to discover security threats in network traffic, user behavior, and other system data. Vendors include CrowdStrike, Datadog, IBM QRadar, Microsoft Sentinel, New Relic, Splunk, and more.

NIST’s figure might suggest that the supporting components in the PIP are mere plug-ins responding in real time to the PDP. However, for many vendors, the ICAM, EDR/EPP, security analytics, and data security PIPs represent complex and distributed infrastructures.
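To make the component roles concrete, the following is a minimal, hypothetical Python sketch of how a Policy Engine might combine PIP inputs (identity groups, device posture, and a risk score) into an access decision that the Policy Administrator relays to the PEP. All names, resources, and thresholds here are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical PIP inputs: identity groups from an IdP, posture from an
# EDR/EPP, and a risk score from security analytics.
@dataclass
class PipInputs:
    user_groups: set
    device_compliant: bool   # posture signal, e.g. from an EDR/EPP
    risk_score: float        # 0.0 (low) .. 1.0 (high), from analytics

def policy_engine_decide(resource: str, inputs: PipInputs) -> bool:
    """Policy Engine (PE): grant or deny based on enterprise policy."""
    # Illustrative policy: this app requires group membership,
    # a compliant device, and a low risk score.
    if resource == "finance-app":
        return ("finance" in inputs.user_groups
                and inputs.device_compliant
                and inputs.risk_score < 0.5)
    return False  # default deny: a core Zero Trust principle

def policy_administrator(resource: str, inputs: PipInputs) -> str:
    """Policy Administrator (PA): relay the PE's decision to the PEP,
    which establishes or terminates the Subject<->Resource path."""
    return "establish" if policy_engine_decide(resource, inputs) else "terminate"

print(policy_administrator("finance-app", PipInputs({"finance"}, True, 0.2)))
# -> establish
```

The point of the sketch is the separation of duties: the PE decides, the PA communicates the decision, and the PEP (not shown) carries the actual traffic.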

Crawl or run, but don’t walk

Next, SP 1800-35 introduces two more detailed reference architectures, the “Crawl Phase” and the “Run Phase”. The “Run Phase” corresponds to the reference architecture shown in the figure above. The “Crawl Phase” is a simplified version that deals only with protecting on-premise Resources and omits cloud Resources. Both phases focus on Enhanced Identity Governance (EIG) approaches to ZTA, as we defined above. NIST stated, "We are skipping the EIG walk phase and have proceeded directly to the run phase".

The SP 1800-35 then provides a sequence of detailed instructions, called “Builds”, that show how to implement “Crawl Phase” and “Run Phase” reference architectures using products sold by various vendors.

Since Cloudflare’s Zero Trust platform natively supports access to both cloud and on-premise resources, we will skip over the “Crawl Phase” and move directly to showing how Cloudflare’s Zero Trust platform can be used to support the “Run Phase” of the reference architecture.

A complete Zero Trust Architecture using Cloudflare and integrations

Nothing in NIST SP 1800-35 represents an endorsement of specific vendor technologies. Instead, the publication offers a general architecture that applies regardless of the technologies or vendors an organization chooses to deploy. It also includes a series of “Builds”, using a variety of technologies from different vendors, that allow organizations to achieve a ZTA. This section describes how Cloudflare fits into a ZTA, enabling you to accelerate your ZTA deployment from Crawl directly to Run.

Regarding the “Builds” in SP 1800-35, this section can be viewed as an aggregation of the following three specific builds:

Now let’s see how we can map Cloudflare’s Zero Trust platform to the ZTA reference architecture:

Figure 2: General ZTA Reference Architecture Mapped to Cloudflare Zero Trust & Key Integrations. Source: NIST, Special Publication 1800-35, "Implementing a Zero Trust Architecture (ZTA)”, 2025, with modification by Cloudflare.

Cloudflare’s platform simplifies complexity by delivering the PEP via our global anycast network and the PDP via our Software-as-a-Service (SaaS) management console, which also serves as a global unified control plane. A complete ZTA involves integrating Cloudflare with PIPs provided by other vendors, as shown in the figure above.

Now let’s look at several key points in the figure.

In the bottom right corner of the figure are Resources, which may reside on-premise, in private data centers, or across multiple cloud environments.  Resources are made securely accessible through Cloudflare’s global anycast network via Cloudflare Tunnel (as shown in the figure) or Magic WAN (not shown). Resources are shielded from direct exposure to the public Internet by placing them behind Cloudflare Access and Cloudflare Gateway, which are PEPs that enforce zero-trust principles by granting access to Subjects that conform to policy requirements.

In the bottom left corner of the figure are Subjects, both human and non-human, that need access to Resources. With Cloudflare’s platform, there are multiple ways that Subjects can gain access to Resources, including:

  • Agentless approaches that allow end users to access Resources directly from their web browsers. Alternatively, Cloudflare’s Magic WAN can be used to support connections from enterprise networks directly to Cloudflare’s global anycast network via IPsec tunnels, GRE tunnels or Cloudflare Network Interconnect (CNI).

  • Agent-based approaches use Cloudflare’s lightweight WARP client, which protects corporate devices by securely and privately sending traffic to Cloudflare's global network.

Now we move on to the PEP (the Policy Enforcement Point), which is the dataplane of our ZTA. Cloudflare Access is a modern Zero Trust Network Access solution that serves as a dynamic PEP, enforcing user-specific application access policies based on identity, device posture, context, and other factors. Cloudflare Gateway is a Secure Web Gateway for filtering and inspecting traffic sent to the public Internet, serving as a dynamic PEP that provides DNS, HTTP, and network traffic filtering, DNS resolver policies, and egress IP policies.

Both Cloudflare Access and Cloudflare Gateway rely on Cloudflare’s control plane, which acts as a PDP offering a policy engine (PE) and policy administrator (PA).  This PDP takes in inputs from PIPs provided by integrations with other vendors for ICAM, endpoint security, and security analytics.  Let’s dig into some of these integrations.

  • ICAM: Cloudflare’s control plane integrates with many ICAM providers that provide Single Sign-On (SSO) and Multi-Factor Authentication (MFA). The ICAM provider authenticates human Subjects and passes information about authenticated users and groups back to Cloudflare’s control plane using Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) integrations. Cloudflare’s ICAM integration also supports AI/ML-powered, behavior-based user risk scoring, exchange, and re-evaluation. In the figure above, we depicted Okta as the ICAM provider, but Cloudflare supports many other ICAM vendors (e.g. Microsoft Entra, JumpCloud, GitHub SSO, PingOne). For non-human Subjects — such as service accounts, Internet of Things (IoT) devices, or machine identities — authentication can be performed through certificates, service tokens, or other cryptographic methods.

  • Endpoint security: Cloudflare’s control plane integrates with many endpoint security providers to exchange signals such as device posture checks and user risk levels. Cloudflare facilitates this through integrations with endpoint detection and response (EDR) / endpoint protection platform (EPP) solutions, such as CrowdStrike, Microsoft, SentinelOne, and more. When posture checks are enabled with one of these vendors, such as Microsoft, device state changes (e.g. 'noncompliant') can be sent to Cloudflare Zero Trust, automatically restricting access to Resources. Additionally, Cloudflare Zero Trust can synchronize the Microsoft Entra ID risky users list and apply more stringent Zero Trust policies to users at higher risk.

  • Security Analytics: Cloudflare’s control plane integrates with real-time logging and analytics for persistent monitoring. Cloudflare's own analytics and logging features monitor access requests and security events. Optionally, these events can be sent to a Security Information and Event Management (SIEM) solution such as CrowdStrike, Datadog, IBM QRadar, Microsoft Sentinel, New Relic, Splunk, and more using Cloudflare’s Logpush integration. Cloudflare's user risk scoring system is built on the OpenID Shared Signals Framework (SSF) specification, which allows integration with existing and future providers that support this standard. SSF focuses on the exchange of Security Event Tokens (SETs), a specialized type of JSON Web Token (JWT). By using SETs, providers can share user risk information, creating a network of real-time, shared security intelligence. In the context of NIST’s Zero Trust Architecture, this system functions as a PIP, responsible for gathering information about the Subject and their context, such as risk scores, device posture, or threat intelligence. This information is then provided to the PDP, which evaluates access requests and determines the appropriate policy actions. The PEP uses these decisions to allow or deny access, completing the cycle of secure, dynamic access control.

  • Data security: Cloudflare’s Zero Trust offering provides robust data security capabilities across data-in-transit, data-in-use, and data-at-rest. Its Data Loss Prevention (DLP) safeguards sensitive information in transit by inspecting and blocking unauthorized data movement. Remote Browser Isolation (RBI) protects data-in-use by preventing malware, phishing, and unauthorized exfiltration while enabling secure web access. Meanwhile, Cloud Access Security Broker (CASB) ensures data-at-rest security by enforcing granular controls over SaaS applications, preventing unauthorized access and data leakage. Together, these capabilities provide comprehensive protection for modern enterprises operating in a cloud-first environment.
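To illustrate the mechanics behind SSF, the sketch below builds a minimal Security Event Token: per RFC 8417, a SET is a JWT whose payload carries an `events` claim. Everything here is a simplified, hypothetical example using only the Python standard library; the issuer, audience, and event type are made up, and real SSF transmitters and receivers involve considerably more (key management, delivery streams, validation).

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_set(secret: bytes) -> str:
    """Build a minimal HS256-signed Security Event Token (RFC 8417):
    a JWT whose payload carries an 'events' claim with a risk signal."""
    header = {"typ": "secevent+jwt", "alg": "HS256"}
    payload = {
        "iss": "https://transmitter.example",   # hypothetical transmitter
        "aud": "https://receiver.example",      # hypothetical receiver
        "iat": int(time.time()),
        "jti": "756E69717565206964",
        # Hypothetical event type and body: a user's risk level changed.
        "events": {
            "https://schemas.example/event-type/risk-level-change": {
                "subject": {"format": "email", "email": "user@example.com"},
                "new_level": "high",
            }
        },
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_set(b"shared-secret")
header_b64, payload_b64, _sig = token.split(".")
pad = "=" * (-len(payload_b64) % 4)
decoded = json.loads(base64.urlsafe_b64decode(payload_b64 + pad))
print("events" in decoded)  # -> True
```

A receiver would verify the signature, check `iss`/`aud`, and then feed the event body into its own policy evaluation, which is how risk signals propagate between providers in an SSF deployment.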

By leveraging Cloudflare's Zero Trust platform, enterprises can simplify and enhance their ZTA implementation, securing diverse environments and endpoints while ensuring scalability and ease of deployment. This approach ensures that all access requests—regardless of where the Subjects or Resources are located—adhere to robust security policies, reducing risks and improving compliance with modern security standards.

Support for agencies and enterprises running towards Zero Trust Architecture

Cloudflare works with multiple enterprises, and federal and state agencies that rely on NIST guidelines to secure their networks.  So we take a brief detour to describe some unique features of Cloudflare’s Zero Trust platform that we’ve found to be valuable to these enterprises.

  • FedRAMP data centers. Many government agencies and commercial enterprises have FedRAMP requirements, and Cloudflare is well-equipped to support them. FedRAMP requirements sometimes require organizations to self-host software and services inside their own network perimeter, which can result in higher latency, degraded performance, and increased cost. At Cloudflare, we take a different approach: organizations can still benefit from Cloudflare’s global network and unparalleled performance while remaining FedRAMP compliant. To support FedRAMP customers, Cloudflare’s dataplane (aka our PEP, or Policy Enforcement Point) consists of data centers in over 330 cities where customers can send their encrypted traffic, plus 32 FedRAMP data centers where traffic is sent when sensitive dataplane operations are required (e.g. TLS inspection). This architecture means that our customers do not need to self-host a PEP and incur the associated cost, latency, and performance degradation.

  • Post-quantum cryptography. NIST has announced that by 2030 all conventional cryptography (RSA and ECDSA) must be deprecated and upgraded to post-quantum cryptography. But upgrading cryptography is hard and takes time, so Cloudflare aims to take on the burden of managing cryptography upgrades for our customers. That’s why organizations can tunnel their corporate network traffic through Cloudflare’s Zero Trust platform, protecting it against quantum adversaries without the hassle of individually upgrading each and every corporate application, system, or network connection. End-to-end quantum safety is available for communications from end-user devices, via web browser (today) or Cloudflare’s WARP device client (mid-2025), to secure applications connected with Cloudflare Tunnel.

Run towards Zero Trust Architecture with Cloudflare 

NIST’s latest publication, SP 1800-35, provides a structured approach to implementing Zero Trust, emphasizing the importance of policy enforcement, continuous authentication, and secure access management. Cloudflare’s Zero Trust platform simplifies this complex framework by delivering a scalable, globally distributed solution that is FedRAMP-compliant and integrates with industry-leading providers like Okta, Microsoft, Ping, CrowdStrike, and SentinelOne to ensure comprehensive protection.

A key differentiator of Cloudflare’s Zero Trust solution is our global anycast network, one of the world’s largest and most interconnected networks. Spanning 330+ cities across 120+ countries, this network provides unparalleled performance, resilience, and scalability for enforcing Zero Trust policies without negatively impacting the end user experience. By leveraging Cloudflare’s network-level enforcement of security controls, organizations can ensure that access control, data protection, and security analytics operate at the speed of the Internet — without backhauling traffic through centralized choke points. This architecture enables low-latency, highly available enforcement of security policies, allowing enterprises to seamlessly protect users, devices, and applications across on-prem, cloud, and hybrid environments.

Now is the time to take action. You can start implementing Zero Trust today by leveraging Cloudflare’s platform in alignment with NIST’s reference architecture. Whether you are beginning your Zero Trust journey or enhancing an existing framework, Cloudflare provides the tools, network, and integrations to help you succeed. Sign up for Cloudflare Zero Trust, explore our integrations, and secure your organization with a modern, globally distributed approach to cybersecurity.
