
New Linux FIRESTARTER Backdoor Targets Cisco Firepower Devices

CISA and NCSC warn that FIRESTARTER, a Linux-based backdoor, targets Cisco Firepower devices, evades patches, and enables persistent access even after firmware updates.
Security Architecture for Hybrid Work: Enterprise Guide

With 52% of U.S. employers adopting hybrid models, traditional perimeters are failing. Discover how to build a robust hybrid work security architecture using Secure SD-WAN, SASE, Zero Trust Network Access (ZTNA), and automated threat detection (SIEM/SOAR) to protect a dispersed workforce in 2026.

The post Security Architecture for Hybrid Work: Enterprise Guide appeared first on Security Boulevard.

Is Wix Secure Enough? Understanding the Next Layer of Protection for Growing Websites

March 18, 2026, 04:11

You click “Publish” on your Wix site and breathe easy. HTTPS? Check. Automatic updates? Check. Hosting handled? Check. Your website feels bulletproof. But here is the catch: security is not.

The post Is Wix Secure Enough? Understanding the Next Layer of Protection for Growing Websites appeared first on Indusface.

The post Is Wix Secure Enough? Understanding the Next Layer of Protection for Growing Websites appeared first on Security Boulevard.

Vietnam Announces National Cybersecurity Firewall Plan Under New Digital Governance Law

Vietnam has announced plans to focus on building a cybersecurity firewall. The statement was delivered by Public Security Minister Lương Tam Quang on Feb. 7, following the closing session of the Communist Party of Vietnam’s 14th National Congress.  It was the first time a senior official explicitly used the term “cybersecurity firewall” to describe the country’s direction in digital governance. While Vietnam has long been regarded internationally as operating one of the most tightly controlled online environments, authorities had not previously declared an intention to construct what they now describe as a national cybersecurity firewall.  The announcement coincides with sweeping reforms to the country’s cybersecurity law framework. 

A New Cybersecurity Law Anchors the Digital Governance Strategy 

On Dec. 10, 2025, the 15th National Assembly passed a new Cybersecurity Law that will take effect on July 1, 2026. Drafted by the Ministry of Public Security (MPS), the legislation replaces both the 2018 Cybersecurity Law and the 2015 Law on Information Security.  The 2025 cybersecurity law introduces new language into Vietnam’s digital governance architecture. Notably, Point d, Clause 2, Article 10 states that authorities will “study the development of a national firewall system.” This is the first time such terminology has appeared in Vietnamese legislation, formally embedding the concept of a cybersecurity firewall within statutory law.  The inclusion of this provision represents a structural shift in how cybersecurity law is framed in the country, elevating technical filtering and monitoring infrastructure to the level of national policy objectives, as reported by The Vietnamese Magazine.

Draft Technical Standards Outline Cybersecurity Firewall Requirements 

Approximately two months after the law’s passage, the Ministry of Public Security released a draft regulation for public comment titled “National Technical Standard on Cybersecurity—Firewall—Basic Technical Requirements.” The document provides insight into the proposed technical architecture of the cybersecurity firewall.  According to the draft, firewall systems meeting national standards would be mandatory infrastructure for monitoring and filtering internet activity. These devices would be capable of filtering traffic and conducting deep packet inspection (DPI).  The proposal also includes SSL/TLS inspection capabilities. SSL/TLS protocols—indicated by the “https” prefix in web addresses—are commonly used to encrypt communications between users and websites. Under the draft framework, firewall systems would be able to decrypt encrypted communications, inspect their contents, and then re-encrypt them before forwarding the data.   In addition, the draft calls for integrating user identity data into individualized control policies. Web-filtering mechanisms would rely on blacklists containing at least 100,000 domain names. These blacklists are defined as collections of IP addresses, domains, and URLs subject to restriction under information security policies, aimed at blocking content or activity considered “undesirable.” 
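As a rough illustration of the blacklist mechanism the draft describes, the sketch below checks a request's domain, IP address, and URL against a blocklist, including parent-domain matches. All names, entries, and the class itself are invented for illustration and do not come from the draft standard.

```python
# Hypothetical sketch of blacklist-based web filtering: a blocklist of
# domains, IPs, and URLs is consulted for each request. Per the draft,
# a real deployment would carry at least 100,000 domain entries.

class Blocklist:
    def __init__(self, domains, ips, urls):
        self.domains = set(domains)
        self.ips = set(ips)
        self.urls = set(urls)

    def is_blocked(self, host, ip, url):
        # A request is restricted if any identifier matches the blocklist.
        # Parent domains are checked, so "bad.example.com" matches "example.com".
        parts = host.split(".")
        domain_match = any(".".join(parts[i:]) in self.domains
                           for i in range(len(parts) - 1))
        return domain_match or ip in self.ips or url in self.urls

bl = Blocklist(domains={"blocked.example"}, ips={"203.0.113.7"}, urls=set())
print(bl.is_blocked("news.blocked.example", "198.51.100.1", "/index"))  # True
print(bl.is_blocked("allowed.example", "198.51.100.1", "/index"))       # False
```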

Data Logging, Risk Assessment, and Centralized Oversight 

Beyond filtering capabilities, the proposed cybersecurity firewall would require network devices to log detailed information for every user session. Logged data would include time stamps, source and destination addresses, protocols used, and system responses.  User activity would then be assessed and assigned a “risk level.” If defined thresholds are exceeded, automated controls or alerts would be triggered and transmitted to cybersecurity authorities. This risk-based monitoring model adds another layer to the country’s digital governance structure, combining surveillance mechanisms with automated enforcement tools.  Separate draft regulations implementing the 2025 cybersecurity law would further obligate telecommunications and internet service providers to retain IP address identification data linked to subscriber information for a minimum of 12 months. Companies would also be required to establish direct technical connections enabling the transfer of IP data to the Ministry’s specialized cybersecurity force.  Under the proposed rules, user information must be provided within 24 hours upon request, or within three hours in urgent cases. All user data would be stored domestically at the MPS’s National Data Center. 
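The logging-and-risk model could be sketched roughly as follows; the threshold value, field names, and scoring are illustrative assumptions, not values taken from the draft regulations.

```python
# Illustrative sketch of the risk-based logging model: every session is
# logged with timestamp, source/destination, and protocol, assigned a risk
# score, and an alert is emitted when a (hypothetical) threshold is crossed.
import time
from dataclasses import dataclass, field

RISK_THRESHOLD = 50  # invented threshold for illustration

@dataclass
class SessionLog:
    records: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def log(self, src, dst, protocol, risk):
        record = {"ts": time.time(), "src": src, "dst": dst,
                  "protocol": protocol, "risk": risk}
        self.records.append(record)
        if risk > RISK_THRESHOLD:
            # In the proposed model, this alert would be transmitted
            # to the cybersecurity authorities.
            self.alerts.append(record)

log = SessionLog()
log.log("10.0.0.5", "198.51.100.9", "https", risk=12)
log.log("10.0.0.5", "203.0.113.7", "https", risk=80)
print(len(log.records), len(log.alerts))  # 2 1
```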

Vault rooms return to companies' radar amid the growth of hybrid threats

December 8, 2025, 11:19

In recent years, cyberattacks in Brazil have grown not only in volume but also in sophistication. In 2024, the country recorded a staggering 356 billion attempted digital intrusions, according to data from FortiGuard Labs. That figure represents a significant increase over previous years, reflecting a shift in strategy by cybercriminals, who now prioritize more targeted and complex attacks. This evolution in attack tactics requires companies to adopt more robust and integrated security measures to protect their digital assets.

And the threats are not limited to the virtual space. Hybrid threats, which combine cyber and physical vectors, have also intensified. Companies and critical infrastructure face risks ranging from digital breaches to physical incidents such as fires, sabotage, or natural disasters.

In parallel, extreme weather events and tangible malicious actions continue to threaten data centers and corporate equipment. In other words, cybersecurity and physical infrastructure go hand in hand in confronting today's threats.

The return of the vault room as a strategic layer of protection

In this context of expanded threats, companies are rediscovering vault rooms (salas-cofre) as an important complementary layer in their digital security strategy. Historically conceived as a physical protection solution, literally a fortified room housing servers and media, the vault room has evolved to integrate into a holistic approach to information security.

Several factors explain why some companies have returned to these structures. First, the proliferation of ransomware and other digital threats has strengthened the recommendation to keep backups offline and isolated, and there is no safer isolation than a physical vault. Second, the costs of downtime and data loss have reached record levels, making investment in physical protection highly justifiable. When a vault room guarantees the integrity of the infrastructure even during severe incidents, the return on investment becomes fast and certain, avoiding operational and business losses. In shared buildings or hybrid IT environments, the vault room acts as an insurance policy: if the worst happens, the company will be able to recover quickly.

Moreover, the adoption of vault rooms reflects a cultural shift. Companies that once focused exclusively on the digital realm have come to recognize that complete resilience requires addressing both the virtual and the physical. This is not about turning back the clock, but about combining the best of both worlds: firewalls, encryption, and detection systems handle logical threats, while the vault room mitigates environmental and human risks (fire, water, dust, sabotage) that can also cause data loss or unavailability. In short, the vault room is resurfacing not as an extra expense but as an integral part of a modern defense-in-depth strategy, offering an additional layer of protection in times of complex threats.

New technical certifications and international requirements

Contemporary vault rooms differ greatly from the safes of the past. They incorporate a series of advanced technical requirements to counter today's threats. These rooms can withstand a severe fire for at least 60 minutes and are designed to keep the internal temperature from exceeding 75 °C and the relative humidity from exceeding 85%, even under extreme heat and pressure, so that the electronic equipment inside suffers no thermal or condensation damage while the fire is being fought.

Beyond the fire barrier, vault rooms must be true environmental fortresses. They need to keep out water and dust, whether from burst pipes, firefighters' hoses, or debris from the outside environment. To that end, they adopt high-grade sealing, with a protection level equivalent to IP-66 or similar, blocking water jets and contaminating particles. Their airtightness and insulation also ensure that smoke and corrosive gases cannot penetrate, protecting delicate circuits from chemical contamination.

These characteristics are essential because, in a real disaster, fire is not the only threat to the data center: explosions can hurl debris, sprinkler activation can cause flooding, and soot can ruin servers. Being hermetic and structurally reinforced, the vault room works as a safe within a safe, maintaining its integrity even if the surrounding building partially or totally collapses.

Depending on the sensitivity of the equipment and the criticality of the data, many modern vault rooms are designed with electromagnetic shielding, effectively acting as a Faraday cage. The internal environment thus becomes immune to external disturbances that could cause servers to malfunction or amplify transmission errors. As a bonus, this feature protects against signal-emission espionage (TEMPEST), preventing information leakage via radio-frequency waves.

Integrating the vault room into modern cybersecurity architecture

The adoption of vault rooms today does not happen in isolation but as part of the corporate cybersecurity architecture. Forward-looking companies see the vault room as one more component of their cyber-resilience strategy, aligning policies and technologies so that the whole is greater than the sum of its parts.

In practice, this means the security plan considers combined scenarios: for example, a serious cyberattack may be accompanied by attempts at physical sabotage, or a natural disaster may be exploited by criminals to steal data while systems are offline. In such cases, the vault room works together with the logical defenses. While firewalls, antivirus, and intrusion detection systems try to stop the digital intruder, the vault room prevents any physical aggression (fire, explosion, intruder) from destroying the servers or rendering them inaccessible. If, by some misfortune, hackers bypass every barrier and trigger destructive malware, the backups inside the vault room will remain safe, disconnected and isolated enough to enable a quick restoration.

One current architectural trend is the concept of "zero trust" applied not only to the network but also to the physical environment. That is, no perimeter is assumed to be 100% risk-free, including the company's own building. Truly critical information can therefore be duplicated on disconnected media and stored in a vault room, guaranteeing a last-resort recovery option in the event of ransomware or coordinated sabotage attacks.

Modern vault rooms are also designed to connect to central management and monitoring systems. Integrated with a SIEM (Security Information and Event Management) platform, they provide continuous telemetry: temperature, humidity, sensor status, door locking, everything is tracked in real time, allowing the IT team to act proactively.
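As a toy illustration of that telemetry integration, the snippet below flags sensor readings that exceed the room's design limits; the field names are assumptions, while the limit values are the 75 °C and 85% figures quoted earlier in the article.

```python
# Minimal sketch of vault-room telemetry feeding a central monitor.
# Field names are illustrative; limits come from the article's figures.
DESIGN_LIMITS = {"temperature_c": 75, "humidity_pct": 85}

def check_telemetry(event, limits=DESIGN_LIMITS):
    """Return the readings in `event` that exceed the room's design limits."""
    return [key for key in limits if event.get(key, 0) > limits[key]]

print(check_telemetry({"temperature_c": 22, "humidity_pct": 40}))  # []
print(check_telemetry({"temperature_c": 90, "humidity_pct": 40}))  # ['temperature_c']
```

In a real deployment these events would be shipped to the SIEM, where the same threshold logic can drive proactive alerts for the IT team.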

Naturally, investing in a vault room does not eliminate the need to maintain robust cyber defenses. It should be seen as part of an ecosystem, as important as encrypted backups, access policies, and user training. The vault room is the last trench, but the company must keep watch on every front. When integrated into a comprehensive security plan, however, its value is immense. Real-world cases show that, at the moment of truth, whether a devastating attack or an unexpected disaster, this infrastructure can make the difference between business continuity and total collapse.

The current landscape suggests that more and more companies will adopt this hybrid approach to security. Not out of fashion or technological nostalgia, but because reality demands multiple layers of protection. And if digital transformation has accelerated processes and expanded exposure, it also reminds us that for every virtual advance, a step of real-world fortification is prudent. The vault room has thus consolidated itself as a strategic pillar of corporate digital security, ensuring that even in the face of the unexpected, be it a hacker or a fire, the company remains standing and operational, whatever the cost.

TÜV Rheinland: making the world safer for 150 years

TÜV Rheinland is one of the world's leading providers of testing and inspection services, with annual revenue of more than 2.7 billion euros and around 26,000 employees in more than 50 countries. Its highly qualified experts test technical systems and products, enable innovation, and support companies in their transition to more sustainable operations. The company trains professionals in a wide range of fields and certifies management systems to international standards. With recognized expertise in areas such as mobility, energy supply, and infrastructure, TÜV Rheinland ensures independent quality at every stage, including in emerging technologies such as green hydrogen, artificial intelligence, and autonomous driving. In this way, it contributes to a safer and better future for all. Since 2006, TÜV Rheinland has been a signatory of the UN Global Compact, which promotes sustainability and fights corruption. Learn more at: https://tuv.com


AI Malware and LLM Abuse: The Next Wave of Cyber Threats

November 14, 2025, 11:26

AI-based threats are expected to grow exponentially. The main weakness on the defender side is no longer coming up with good detection ideas, but turning those ideas into production rules quickly enough and at a sufficient scale.

AI-Native Malware Will Outpace Traditional SIEMs Without Automated Rule Deployment

Future malware families are likely to embed small LLMs or similar models directly into their code. This enables behavior that is very hard for traditional defenses to handle:

  • Self-modifying code that keeps changing to avoid signatures.
  • Context-aware evasion, where the malware “looks” at local logs, running processes, and security tools and adapts its tactics on the fly.
  • Autonomous “AI ransomware agents” that call external platforms for instructions, fetch new payloads, negotiate ransom, and then redeploy in a different form.

Malware starts to behave less like a static binary and more like a flexible service that learns and iterates inside each victim environment.

Most SIEM setups are not designed for this world. Even leading platforms usually support at most a few hundred active rules. That is not enough to cover the volume and variety of AI-driven techniques across large, complex estates. In practice, serious coverage means thousands of active rules mapped to specific log sources, assets, and use cases.

Here, the hard limit is SOC capacity. Every rule has a cost: tuning, false positive handling, documentation, and long-term maintenance. And to keep the workload under control, teams disable or, more often, never onboard a significant part of the potential detection content.

Switching off a rule that is already in monitoring means explicitly taking responsibility for removing a layer of defense, so with limited capacity, it often feels safer to block new rules than to retire existing ones.

For years, the main concern has been alert fatigue – when there are too many alerts for too few analysts. In an AI-native threat landscape, another problem becomes more important: coverage gaps. The most dangerous attack is the one that never triggers an alert because the required rule was never written, never approved, or never deployed.

This shifts the role of SOC leadership. The focus moves from micromanaging individual rules to managing the overall detection portfolio:

  • Which behaviors and assets are covered?
  • Which blind spots are accepted, and why?
  • How fast can the rule set change when a new technique, exploit, or campaign appears?

Traditional processes make this even harder. Manual QA, slow change control, and ticket-driven deployments can stretch the time from “we know how to detect this” to “this rule is live in production” into days or weeks. AI-driven campaigns can adapt within hours.

To close this gap, SOC operations will need to become AI-assisted themselves:

  • AI-supported rule generation and conversion from threat reports, hunting queries, and research into ready-to-deploy rules across multiple query languages.
  • Automated coverage mapping against frameworks like MITRE ATT&CK and against real telemetry (streams, topics, indices, log sources) to see what is actually monitored.
  • Intelligent prioritization of which rules to enable, silence, or tune based on risk, business criticality, and observed impact.
  • Tight integration with real-time event streaming platforms, so new rules can be tested, rolled out, and rolled back safely across very large volumes of data.

Without this level of automation and streaming-first design, SIEM becomes a bottleneck. AI-native threats will not wait for weekly change windows; detection intelligence and rule deployment must operate at streaming speed.
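A minimal sketch of the coverage-mapping idea, with entirely invented rule and technique data, might look like this: rules tagged with MITRE ATT&CK technique IDs are compared against the log sources actually being ingested, and the difference is the blind-spot list.

```python
# Hedged sketch of automated coverage mapping: a technique is "covered" only
# if a rule exists for it AND its log source is actually flowing.
# All rules, techniques, and sources below are invented for illustration.
rules = [
    {"name": "susp_powershell", "technique": "T1059.001", "log_source": "windows"},
    {"name": "dns_tunnel", "technique": "T1071.004", "log_source": "dns"},
]
required_techniques = {"T1059.001", "T1071.004", "T1003"}  # e.g. from a threat report
monitored_sources = {"windows"}  # telemetry actually being ingested

covered = {r["technique"] for r in rules if r["log_source"] in monitored_sources}
gaps = required_techniques - covered
print(sorted(gaps))  # techniques with no live, fed detection rule
```

The point of the exercise is that `dns_tunnel` exists as a rule but still counts as a gap, because its log source is not monitored: a rule without telemetry detects nothing.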

AI-Native Detection Intelligence Will Become the New Standard

By 2026, cybersecurity vendors will be judged on how deeply AI is embedded into their detection lifecycle, not on whether they simply “use AI” as a marketing label. Enterprise buyers, especially at Fortune 100 scale, will treat AI-native detection intelligence as a requirement.

Concretely, large customers will demand:

  • Self-managed, private LLMs that do not leak proprietary telemetry or logic to public clouds.
  • GPU-efficient models optimized specifically for detection intelligence workloads, not generic chat or content tasks.
  • Clear guarantees that data stays within well-defined trust boundaries.

On the product side, AI will touch every part of the detection stack:

  • AI-generated detection rules aligned with frameworks like MITRE ATT&CK (already in place at SOC Prime).
  • At SOC Prime alone, the volume of AI-generated detection rules has been growing at roughly 2x month over month, increasing from about 60 rules in June 2025 to nearly 1,000 in October 2025. This growth is driven both by faster deployment of new rules and by emerging AI-powered malware that requires AI to fight AI.
  • AI-driven enrichment, tuning, and log-source adaptation so that rules stay relevant as telemetry changes.
  • AI-assisted retrospective investigations that can automatically replay new logic over historical data.
  • AI-based prioritization of threat content based on customer stack, geography, sector, and risk profile.

In other words, AI becomes part of the detection “factory”: how rules are produced, maintained, and retired across many environments. By 2026, AI-supported detection intelligence will no longer be a value-add feature; it will be the baseline expectation for serious security platforms.

Foundation Model Providers Will Own a New Security Layer – and Need LLM Firewalls

As large language models become part of the core infrastructure for software development, operations, and support, foundation model providers inevitably join the cybersecurity responsibility chain. When their models are used to generate phishing campaigns, malware, or exploit code at scale, pushing all responsibility to end-user organizations is no longer realistic.

Foundation model providers will be expected to detect and limit clearly malicious use cases and to control how their APIs are used, while still allowing legitimate security testing and research. This includes:

  • Screening prompts for obvious signs of malicious intent, such as step-by-step instructions for gaining initial access, escalating privileges, moving laterally, or exfiltrating data.
  • Watching for suspicious usage patterns across tenants such as automated loops, infrastructure-like behavior, or repeated generation of offensive security content.
  • Applying graduated responses: rate limiting, extra verification, human review, or hard blocking when abuse is obvious.
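The graduated-response idea can be sketched as a simple policy ladder; the score bands and action names below are purely illustrative assumptions, not any provider's actual thresholds.

```python
# Illustrative policy ladder for graduated responses to suspected API abuse.
# A real system would derive abuse_score from classifiers and usage patterns;
# the bands here are invented for the sketch.
def graduated_response(abuse_score):
    if abuse_score >= 0.9:
        return "block"
    if abuse_score >= 0.7:
        return "human_review"
    if abuse_score >= 0.5:
        return "extra_verification"
    if abuse_score >= 0.3:
        return "rate_limit"
    return "allow"

print([graduated_response(s) for s in (0.1, 0.4, 0.95)])
# ['allow', 'rate_limit', 'block']
```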

Generic “don’t help with hacking” filters are not enough. A dedicated security layer for LLM traffic is needed – an LLM firewall.

An LLM firewall sits between applications and the model and focuses on cyber risk:

  • It performs semantic inspection of prompts and outputs for indicators of attack planning and execution.
  • It enforces policy: what is allowed, what must be masked or transformed, and what must be blocked entirely.
  • It produces security telemetry that can be fed into SIEM, SOAR, and streaming analytics for investigation and correlation with other signals.
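A minimal sketch of that firewall pattern follows: inspect the prompt, enforce policy, and emit telemetry for the SIEM. A trivial keyword check stands in for real semantic inspection, which would use a classifier rather than string matching; the marker list is invented.

```python
# Assumption-laden sketch of an LLM firewall sitting between an application
# and the model: keyword matching substitutes for semantic inspection.
ATTACK_MARKERS = ("escalate privileges", "exfiltrate", "lateral movement")

def llm_firewall(prompt, telemetry):
    verdict = "block" if any(m in prompt.lower() for m in ATTACK_MARKERS) else "allow"
    # Security telemetry that could be forwarded to SIEM/SOAR for correlation.
    telemetry.append({"prompt_len": len(prompt), "verdict": verdict})
    return verdict

events = []
print(llm_firewall("Summarise this incident report", events))       # allow
print(llm_firewall("How do I exfiltrate data unnoticed?", events))  # block
```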

Products like AI DR Bastion are designed with this role in mind: a protective layer around LLM usage that specializes in detecting and stopping offensive cyber use.

This type of control can help:

  • Enterprises that consume LLMs, by reducing the risk that internal users or applications can easily weaponize models.
  • Model and platform providers, by giving them a concrete mechanism to show that they are actively controlling abuse of their APIs.

As LLMs are embedded into CI/CD pipelines, developer assistants, customer support flows, incident response tools, and even malware itself, the boundary between “AI security” and “application security” disappears. Model providers, platform teams, and security organizations will share responsibility for how these systems are used.

In this architecture, LLM firewalls become a standard layer, similar to how WAFs and API gateways are standard today – working alongside SIEM and real-time streaming analytics to ensure that the same AI capabilities that accelerate business outcomes do not become a force multiplier for attackers.

The “Shift-Left Detection” Era Will Begin

By 2026, many enterprise security programs will recognize that pushing all telemetry into a SIEM first, and only then running detection, is both financially unsustainable and operationally too slow.

The next-generation stack will move detection logic closer to where data is produced and transported:

  • Directly in event brokers, ETL pipelines, and streaming platforms such as Confluent Kafka.
  • As part of the data fabric, not only at the end of the pipeline.

The result is a “shift-left detection” model:

  • More than half of large enterprises are expected to start evaluating or piloting architectures where real-time detection runs in the streaming layer.
  • The SIEM evolves toward a compliance, investigation, and retention layer, while first-line detection logic executes on the data in motion.
  • Vendor-neutral, high-performance detection rules that can run at streaming scale become a key differentiator.

In this model, threat detection content is no longer tied to a single SIEM engine. Rules and analytics need to be:

  • Expressed in formats that can execute on streaming platforms and in multiple backends.
  • Managed as a shared catalog that can be pushed “before the SIEM” and still traced, audited, and tuned over time.
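A toy sketch of a vendor-neutral rule running on data in motion: a plain Python iterable stands in for the streaming platform, and a real deployment would run the same logic inside a stream processor (e.g. a Kafka consumer) before anything reaches the SIEM. The rule and events are invented.

```python
# "Shift-left" detection sketch: detection logic evaluated on events in
# motion, ahead of the SIEM. A list stands in for the event stream.
def rule_failed_logins(event):
    return event.get("type") == "auth" and event.get("result") == "fail"

def detect_in_stream(stream, rules):
    for event in stream:
        for rule in rules:
            if rule(event):
                yield {"rule": rule.__name__, "event": event}

stream = [{"type": "auth", "result": "fail", "user": "a"},
          {"type": "dns", "qname": "example.com"}]
hits = list(detect_in_stream(stream, [rule_failed_logins]))
print(len(hits))  # 1
```

Because the rule is an ordinary predicate over event dictionaries, the same logic can be re-expressed for multiple backends, which is the vendor-neutrality point above.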

SOC Prime’s product direction for 2026 is aligned with this shift: building a line-speed pipeline that runs before the SIEM and integrates directly with streaming platforms. This makes it possible to combine:

  • AI-native detection intelligence at scale,
  • Real-time execution on event streams, and
  • Downstream correlation, retention, and compliance in SIEM and data platforms.

Taken together, AI-native malware, LLM abuse, AI-driven detection intelligence, and shift-left detection architectures define the next wave of cyber threats – and the shape of the defenses needed to meet them.



The post AI Malware and LLM Abuse: The Next Wave of Cyber Threats appeared first on SOC Prime.


One IP address, many users: detecting CGNAT to reduce collateral effects

October 29, 2025, 10:00

IP addresses have historically been treated as stable identifiers for non-routing purposes such as for geolocation and security operations. Many operational and security mechanisms, such as blocklists, rate-limiting, and anomaly detection, rely on the assumption that a single IP address represents a cohesive, accountable entity or even, possibly, a specific user or device.

But the structure of the Internet has changed, and those assumptions can no longer be made. Today, a single IPv4 address may represent hundreds or even thousands of users due to widespread use of Carrier-Grade Network Address Translation (CGNAT), VPNs, and proxy middleboxes. This concentration of traffic can result in significant collateral damage – especially to users in developing regions of the world – when security mechanisms are applied without taking into account the multi-user nature of IPs.
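One way to reduce that collateral damage is to scale per-IP limits by an estimate of how many users share the address, so a CGNAT egress is not throttled as if it were one user. The sketch below uses invented budget values and is not Cloudflare's actual mechanism.

```python
# Hedged sketch of a shared-IP-aware rate limiter: the per-IP request budget
# scales with the estimated user count behind the address. Numbers invented.
BASE_BUDGET = 100  # requests per window for a single-user IP

def allowed(request_count, estimated_users_behind_ip):
    budget = BASE_BUDGET * max(1, estimated_users_behind_ip)
    return request_count <= budget

print(allowed(500, 1))    # False: looks abusive from a single-user IP
print(allowed(500, 200))  # True: plausibly benign from a CGNAT egress
```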

This blog post presents our approach to detecting large-scale IP sharing globally. We describe how we build reliable training data, and how detection can help avoid unintentional bias affecting users in regions where IP sharing is most prevalent. Arguably, it is those regional variations that motivate our efforts more than anything else.

Why this matters: Potential socioeconomic bias

Our work was initially motivated by a simple observation: CGNAT is a likely unseen source of bias on the Internet. Those biases would be more pronounced wherever there are more users and few addresses, such as in developing regions. And these biases can have profound implications for user experience, network operations, and digital equity.

The reasons are understandable, not least because of necessity. Countries in the developing world often have significantly fewer available IPs and more users. The disparity is a historical artifact of how the Internet grew: the largest blocks of IPv4 addresses were allocated decades ago, primarily to organizations in North America and Europe, leaving a much smaller pool for regions where Internet adoption expanded later.

To visualize the IPv4 allocation gap, we plot country-level ratios of users to IP addresses in the figure below. We take online user estimates from the World Bank Group and the number of IP addresses in a country from Regional Internet Registry (RIR) records. The colour-coded map that emerges shows that the usage of each IP address is more concentrated in regions that generally have poor Internet penetration. For example, large portions of Africa and South Asia appear with the highest user-to-IP ratios. Conversely, the lowest user-to-IP ratios appear in Australia, Canada, Europe, and the USA — the very countries that otherwise have the highest Internet user penetration numbers.
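The figure's underlying computation is simple: divide each country's estimated online users by its allocated IP addresses. The toy numbers below are invented, not the World Bank or RIR values.

```python
# Toy reconstruction of the figure's computation: country-level users per IP.
# The post uses World Bank user estimates and RIR allocation records;
# these example figures are made up.
users = {"NG": 100_000_000, "DE": 75_000_000}
ips = {"NG": 10_000_000, "DE": 120_000_000}

ratios = {c: users[c] / ips[c] for c in users}
print({c: round(r, 2) for c, r in ratios.items()})
# {'NG': 10.0, 'DE': 0.62}
```

A high ratio means many users must share each address, which is exactly where CGNAT pressure, and the collateral damage from IP-based security actions, concentrates.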

The scarcity of IPv4 address space means that regional differences can only worsen as Internet penetration rates increase. A natural consequence of increased demand in developing regions is that ISPs would rely even more heavily on CGNAT; this is compounded by the fact that CGNAT is common in the mobile networks on which users in developing regions so heavily depend. All of this means that actions based on IP reputation or behaviour would disproportionately affect developing economies.

Cloudflare is a global network in a global Internet. We are sharing our methodology so that others might benefit from our experience and help to mitigate unintended effects. First, let’s better understand CGNAT.

When one IP address serves multiple users

Large-scale IP address sharing is primarily achieved through two distinct methods. The first, and more familiar, involves services like VPNs and proxies. These tools emerge from a need to secure corporate networks or improve users' privacy, but can be used to circumvent censorship or even improve performance. Their deployment also tends to concentrate traffic from many users onto a small set of exit IPs. Typically, individuals are aware they are using such a service, whether for personal use or as part of a corporate network.

Separately, another form of large-scale IP sharing often goes unnoticed by users: Carrier-Grade NAT (CGNAT). One way to explain CGNAT is to start with a much smaller version of network address translation (NAT) that very likely exists in your home broadband router (formally, the Customer Premises Equipment, or CPE), which translates unseen private addresses in the home to visible, routable addresses in the ISP's network. Once traffic leaves the home, an ISP may add an additional, carrier-scale layer of address translation that causes many households or unrelated devices to appear behind a single IP address.

The crucial difference between these two forms of large-scale IP sharing is user choice: carrier-grade address sharing is not something users opt into, but is configured directly by Internet Service Providers (ISPs) within their access networks. Users are typically unaware that CGNAT is in use.

The primary driver for this technology, understandably, is the exhaustion of the IPv4 address space. IPv4's 32-bit architecture supports only 4.3 billion unique addresses — a capacity that, while once seemingly vast, has been completely outpaced by the Internet's explosive growth. By the early 2010s, Regional Internet Registries (RIRs) had depleted their pools of unallocated IPv4 addresses. This left ISPs unable to easily acquire new address blocks, forcing them to maximize the use of their existing allocations.

While the long-term solution is the transition to IPv6, CGNAT emerged as the immediate, practical workaround. Instead of assigning a unique public IP address to each customer, ISPs use CGNAT to place multiple subscribers behind a single, shared IP address. This practice solves the problem of IP address scarcity. Since translated addresses are not publicly routable, CGNATs have also had the positive side effect of protecting many home devices that might be vulnerable to compromise. 

However, CGNATs also create significant operational fallout, because hundreds or even thousands of clients can appear to originate from a single IP address. This means an IP-based security system may inadvertently block or throttle large groups of users as a result of a single user behind the CGNAT engaging in malicious activity.

This isn't a new or niche issue. It has been recognized for years by the Internet Engineering Task Force (IETF), the organization that develops the core technical standards for the Internet. These standards, known as Requests for Comments (RFCs), act as the official blueprints for how the Internet should operate. RFC 6269, for example, discusses the challenges of IP address sharing, while RFC 7021 examines the impact of CGNAT on network applications. Both explain that traditional abuse-mitigation techniques, such as blocklisting or rate-limiting, assume a one-to-one relationship between IP addresses and users: when malicious activity is detected, the offending IP address can be blocked to prevent further abuse.

In shared IPv4 environments, such as those using CGNAT or other address-sharing techniques, this assumption breaks down because multiple subscribers can appear under the same public IP. Blocking the shared IP therefore penalizes many innocent users along with the abuser. In 2015 Ofcom, the UK's telecommunications regulator, reiterated these concerns in a report on the implications of CGNAT where they noted that, “In the event that an IPv4 address is blocked or blacklisted as a source of spam, the impact on a CGNAT would be greater, potentially affecting an entire subscriber base.” 

The hope was that CGNAT would be only a temporary solution until the eventual switch to IPv6, but as the old proverb says, nothing is more permanent than a temporary solution. While IPv6 deployment continues to lag, CGNAT deployments have become increasingly common, and so have the related problems.

CGNAT detection at Cloudflare

To enable a fairer treatment of users behind CGNAT IPs by security techniques that rely on IP reputation, our goal is to identify large-scale IP sharing. This allows traffic filtering to be better calibrated and collateral damage minimized. Additionally, we want to distinguish CGNAT IPs from other large-scale sharing (LSS) IP technologies, such as VPNs and proxies, because we may need to take different approaches to different kinds of IP-sharing technologies.

To do this, we decided to take advantage of Cloudflare’s extensive view of the active IP clients, and build a supervised learning classifier that would distinguish CGNAT and VPN/proxy IPs from IPs that are allocated to a single subscriber (non-LSS IPs), based on behavioural characteristics. The figure below shows an overview of our supervised classifier: 

While our classification approach is straightforward, a significant challenge is the lack of a reliable, comprehensive, and labeled dataset of CGNAT IPs for our training dataset.

Detecting CGNAT using public data sources 

Detection begins by building an initial dataset of IPs believed to be associated with CGNAT. Cloudflare has vast HTTP and traffic logs but, unfortunately, no request carries a signal or label indicating whether its source IP is a CGNAT.

To build an extensive labelled dataset to train our ML classifier, we employ a combination of network measurement techniques, as described below. We rely on public data sources to help disambiguate an initial set of large-scale shared IP addresses from others in Cloudflare’s logs.   

Distributed Traceroutes

The presence of a client behind CGNAT can often be inferred through traceroute analysis. CGNAT requires ISPs to insert a NAT step that typically uses the Shared Address Space (RFC 6598) after the customer premises equipment (CPE). By running a traceroute from the client to its own public IP and examining the hop sequence, the appearance of an address within 100.64.0.0/10 between the first private hop (e.g., 192.168.1.1) and the public IP is a strong indicator of CGNAT.

Traceroute can also reveal multi-level NAT, which CGNAT requires, as shown in the diagram below. If the ISP assigns the CPE a private RFC 1918 address that appears right after the local hop, this indicates at least two NAT layers. While ISPs sometimes use private addresses internally without CGNAT, observing private or shared ranges immediately downstream combined with multiple hops before the public IP strongly suggests CGNAT or equivalent multi-layer NAT.

Although traceroute accuracy depends on router configurations, detecting private and shared IP ranges is a reliable way to identify large-scale IP sharing. We apply this method to distributed traceroutes from over 9,000 RIPE Atlas probes to classify hosts as behind CGNAT, single-layer NAT, or no NAT.
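The hop-sequence heuristic described above can be sketched in a few lines with Python's ipaddress module. This is a simplified illustration with hypothetical hop lists, not the production measurement pipeline:

```python
import ipaddress

# RFC 1918 private ranges and the RFC 6598 Shared Address Space.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
SHARED = ipaddress.ip_network("100.64.0.0/10")

def classify_path(hops):
    """Classify a traceroute path (hop IPs from client to its own public IP).

    Returns 'cgnat' if a Shared Address Space hop appears after a private
    hop, 'multi-nat' if two or more private hops are seen, else
    'single-nat' or 'no-nat'.
    """
    seen_private = False
    private_hops = 0
    for hop in hops:
        addr = ipaddress.ip_address(hop)
        if addr in SHARED and seen_private:
            return "cgnat"
        if any(addr in net for net in RFC1918):
            seen_private = True
            private_hops += 1
    if private_hops >= 2:
        return "multi-nat"
    return "single-nat" if private_hops == 1 else "no-nat"

# Hypothetical path: home router, carrier NAT hop, then the public IP.
verdict = classify_path(["192.168.1.1", "100.64.12.7", "203.0.113.9"])
```

Applied to the RIPE Atlas traceroutes, the same logic (with more care around unresponsive hops) yields the CGNAT / single-layer NAT / no-NAT labels described above.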

Scraping WHOIS and PTR records

Many operators encode metadata about their IPs in the corresponding reverse DNS pointer (PTR) record that can signal administrative attributes and geographic information. We first query the DNS for PTR records for the full IPv4 space and then filter for a set of known keywords from the responses that indicate a CGNAT deployment. For example, each of the following three records matches a keyword (cgnat, cgn or lsn) used to detect CGNAT address space:

node-lsn.pool-1-0.dynamic.totinternet.net
103-246-52-9.gw1-cgnat.mobile.ufone.nz
cgn.gsw2.as64098.net
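The keyword filter itself is a straightforward pattern match over PTR names. A minimal sketch (in practice the records would be resolved via reverse DNS across the IPv4 space, and the keyword list would be larger):

```python
import re

# Small illustrative subset of keywords that commonly indicate CGNAT pools.
CGNAT_KEYWORDS = re.compile(r"\b(cgnat|cgn|lsn)\b", re.IGNORECASE)

def looks_like_cgnat(ptr_record):
    """Heuristic: does this reverse-DNS name advertise a CGNAT deployment?"""
    # Dots, hyphens and underscores all act as label separators in PTR
    # names, so normalize them to spaces before word-boundary matching.
    normalized = re.sub(r"[.\-_]", " ", ptr_record)
    return bool(CGNAT_KEYWORDS.search(normalized))

records = [
    "node-lsn.pool-1-0.dynamic.totinternet.net",
    "103-246-52-9.gw1-cgnat.mobile.ufone.nz",
    "cgn.gsw2.as64098.net",
    "static-business-203-0-113-9.example.net",  # hypothetical non-match
]
matches = [r for r in records if looks_like_cgnat(r)]
```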

WHOIS and Internet Routing Registry (IRR) records may also contain organizational names, remarks, or allocation details that reveal whether a block is used for CGNAT pools or residential assignments. 

Given that both PTR and WHOIS records may be manually maintained and therefore stale, we sanitize the extracted data by validating, against customer and market reports, that the corresponding ISPs indeed use CGNAT.

Collecting VPN and proxy IPs 

Compiling a list of VPN and proxy IPs is more straightforward, as we can directly find such IPs in public service directories for anonymizers. We also subscribe to multiple VPN providers, and we collect the IPs allocated to our clients by connecting to a unique HTTP endpoint under our control. 

Modeling CGNAT with machine learning

By combining the above techniques, we accumulated a labeled dataset of more than 200K CGNAT IPs, 180K VPN and proxy IPs, and close to 900K non-LSS IPs (addresses allocated to a single subscriber). These were the entry points to modeling with machine learning.

Feature selection

Our hypothesis was that aggregated activity from CGNAT IPs is distinguishable from activity generated from other non-CGNAT IP addresses. Our feature extraction is an evaluation of that hypothesis — since networks do not disclose CGNAT and other uses of IPs, the quality of our inference is strictly dependent on our confidence in the training data. We claim the key discriminator is diversity, not just volume. For example, VM-hosted scanners may generate high numbers of requests, but with low information diversity. Similarly, globally routable CPEs may have individually unique characteristics, but with volumes that are less likely to be caught at lower sampling rates.

In our feature extraction, we parse a 1% sample of HTTP request logs for distinguishing features of the IPs compiled in our reference set, and the same features for the corresponding /24 prefix (namely, IPs with the same first 24 bits in common). We analyse these features for each VPN, proxy, CGNAT, and non-LSS IP. We find that features from the following broad categories are key discriminators for the different types of IPs in our training dataset:

  • Client-side signals: We analyze the aggregate properties of clients connecting from an IP. A large, diverse user base (like on a CGNAT) naturally presents a much wider statistical variety of client behaviors and connection parameters than a single-tenant server or a small business proxy.

  • Network and transport-level behaviors: We examine traffic at the network and transport layers. The way a large-scale network appliance (like a CGNAT) manages and routes connections often leaves subtle, measurable artifacts in its traffic patterns, such as in port allocation and observed network timing.

  • Traffic volume and destination diversity: We also model the volume and "shape" of the traffic. An IP representing thousands of independent users will, on average, generate a higher volume of requests and target a much wider, less correlated set of destinations than an IP representing a single user.

Crucially, to distinguish CGNAT from VPNs and proxies (which is absolutely necessary for calibrated security filtering), we had to aggregate these features at two different scopes: per IP and per /24 prefix. CGNAT IPs are typically allocated in large blocks, whereas VPN IPs are more scattered across different prefixes.
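The two-scope aggregation can be sketched as follows. The record fields and feature names here are illustrative stand-ins for the sampled HTTP log features, showing request volume and user-agent/destination diversity counted at both per-IP and per-/24 scope:

```python
from collections import defaultdict

def prefix_24(ip):
    """Return the /24 prefix of a dotted-quad IPv4 address."""
    return ".".join(ip.split(".")[:3]) + ".0/24"

def aggregate(requests):
    """Aggregate sampled request records at per-IP and per-/24 scope.

    `requests` is an iterable of (client_ip, user_agent, destination)
    tuples standing in for sampled HTTP log fields. We track request
    volume and the diversity of user agents and destinations, two of
    the broad feature families described above.
    """
    new_stats = lambda: {"requests": 0, "uas": set(), "dests": set()}
    per_ip = defaultdict(new_stats)
    per_prefix = defaultdict(new_stats)
    for ip, ua, dest in requests:
        for scope in (per_ip[ip], per_prefix[prefix_24(ip)]):
            scope["requests"] += 1
            scope["uas"].add(ua)
            scope["dests"].add(dest)
    return per_ip, per_prefix

# Hypothetical sampled log entries.
log = [
    ("198.51.100.7", "ua-a", "site1.example"),
    ("198.51.100.7", "ua-b", "site2.example"),
    ("198.51.100.9", "ua-c", "site1.example"),
]
per_ip, per_prefix = aggregate(log)
```

A CGNAT prefix would show high diversity at both scopes, while scattered VPN exit IPs tend to look diverse per IP but not across a whole /24.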

Classification results

We compute the above features from HTTP logs over 24-hour intervals to increase data volume and reduce noise due to DHCP IP reallocation. The dataset is split into 70% training and 30% testing sets with disjoint /24 prefixes, and VPN and proxy labels are merged due to their similarity and lower operational importance compared to CGNAT detection.

Then we train a multi-class XGBoost model with class weighting to address imbalance, assigning each IP to the class with the highest predicted probability. XGBoost is well-suited for this task because it efficiently handles large feature sets, offers strong regularization to prevent overfitting, and delivers high accuracy with limited parameter tuning. The classifier achieves 0.98 accuracy, 0.97 weighted F1, and 0.04 log loss. The figure below shows the confusion matrix of the classification.
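The model itself is trained with XGBoost's standard multi-class interface; the subtle step worth illustrating is the prefix-disjoint split. Keeping all IPs from a /24 on the same side of the split prevents near-duplicate addresses from the same block from appearing in both training and testing, which would inflate measured accuracy. A minimal sketch (the helper name and data are hypothetical):

```python
import random

def split_by_prefix(labeled_ips, test_fraction=0.3, seed=7):
    """Split labeled IPs into train/test sets with disjoint /24 prefixes.

    `labeled_ips` maps each IPv4 address to its class label
    (e.g. 'cgnat', 'vpn_proxy', 'non_lss').
    """
    # Group addresses by their /24 prefix.
    by_prefix = {}
    for ip in labeled_ips:
        by_prefix.setdefault(".".join(ip.split(".")[:3]), []).append(ip)
    # Shuffle prefixes (not individual IPs) and cut at the split point.
    prefixes = sorted(by_prefix)
    random.Random(seed).shuffle(prefixes)
    cut = int(len(prefixes) * (1 - test_fraction))
    train = [ip for p in prefixes[:cut] for ip in by_prefix[p]]
    test = [ip for p in prefixes[cut:] for ip in by_prefix[p]]
    return train, test
```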

Our model is accurate for all three labels. The errors observed are mainly misclassifications of VPN/proxy IPs as CGNATs, mostly for VPN/proxy IPs within a /24 prefix that is also shared by broadband users outside of the proxy service. We also evaluate prediction accuracy using k-fold cross validation, which provides a more reliable estimate of performance by training and validating on multiple data splits, reducing variance and overfitting compared to a single train–test split. We use 10 folds and evaluate the Area Under the ROC Curve (AUC) and the multi-class log loss, achieving a macro-average AUC of 0.9946 (σ=0.0069) and log loss of 0.0429 (σ=0.0115). Prefix-level features are the most important contributors to classification performance.

Users behind CGNAT are more likely to be rate limited

The figure below shows the daily number of CGNAT IP inferences generated by our CDN-deployed detection service between December 17, 2024 and January 9, 2025. The number of inferences remains largely stable, with noticeable dips during weekends and holidays such as Christmas and New Year’s Day. This pattern reflects expected seasonal variations, as lower traffic volumes during these periods lead to fewer active IP ranges and reduced request activity.

Next, recall that actions that rely on IP reputation or behaviour may be unduly influenced by CGNATs. One such example is bot detection. In an evaluation of our systems, we find that bot detection is resilient to those biases. However, we also learned that customers are more likely to rate limit IPs that we find are CGNATs.

We evaluate bot labeling by measuring how often requests from CGNAT and non-CGNAT IPs are labeled as bots. Cloudflare assigns a bot score to each HTTP request using CatBoost models trained on various request features, and these scores are then exposed through the Web Application Firewall (WAF), allowing customers to apply filtering rules. The median bot rate is nearly identical for CGNAT (4.8%) and non-CGNAT (4.7%) IPs. However, the mean bot rate is notably lower for CGNATs (7%) than for non-CGNATs (13.1%), indicating different underlying distributions. Non-CGNAT IPs show a much wider spread, with some reaching 100% bot rates, while CGNAT IPs cluster mostly below 15%. This suggests that non-CGNAT IPs tend to be dominated by either human or bot activity, whereas CGNAT IPs reflect mixed behavior from many end users, with human traffic prevailing.

Interestingly, despite bot scores that indicate traffic is more likely to be from human users, CGNAT IPs are subject to rate limiting three times more often than non-CGNAT IPs. This is likely because multiple users share the same public IP, increasing the chances that legitimate traffic gets caught by customers’ bot mitigation and firewall rules.

This tells us that users behind CGNAT IPs are indeed susceptible to collateral effects, and identifying those IPs allows us to tune mitigation strategies to disrupt malicious traffic quickly while reducing collateral impact on benign users behind the same address.

A global view of the CGNAT ecosystem

One of the early motivations of this work was to understand if our knowledge about IP addresses might hide a bias along socio-economic boundaries—and in particular if an action on an IP address may disproportionately affect populations in developing nations, often referred to as the Global South. Identifying where different IPs exist is a necessary first step.

The map below shows the fraction of a country’s inferred CGNAT IPs over all IPs observed in the country. Regions with a greater reliance on CGNAT appear darker on the map. This view highlights the geodiversity of CGNATs in terms of importance; for example, much of Africa and Central and Southeast Asia rely on CGNATs. 

As further evidence of continental differences, the boxplot below shows the distribution of distinct user agents per IP across /24 prefixes inferred to be part of a CGNAT deployment in each continent. 

Notably, Africa has a much higher ratio of user agents to IP addresses than other regions, suggesting more clients share the same IP in African ASNs. So, not only do African ISPs rely more extensively on CGNAT, but the number of clients behind each CGNAT IP is higher. 

While the deployment rate of CGNAT per country is consistent with the users-per-IP ratio per country, the ratio is not by itself sufficient to confirm deployment. The scatterplot below shows the number of users (according to APNIC user estimates) and the number of IPs per ASN for ASNs where we detect CGNAT. ASNs that have fewer available IP addresses than their user base appear below the diagonal. Interestingly, the scatterplot indicates that many ASNs with more addresses than users still choose to deploy CGNAT. Presumably, these ASNs provide additional services beyond broadband, preventing them from dedicating their entire address pool to subscribers.

What this means for everyday Internet users

Accurate detection of CGNAT IPs is crucial for minimizing collateral effects in network operations and for ensuring fair and effective application of security measures. Our findings underscore the potential socio-economic and geographical variations in the use of CGNATs, revealing significant disparities in how IP addresses are shared across different regions. 

At Cloudflare we are going beyond just using these insights to evaluate policies and practices. We are using the detection system to improve features across our application security suite, and working with customers to understand how they might use these insights to improve the protections they configure.

Our work is ongoing and we’ll share details as we go. In the meantime, if you’re an ISP or network operator that runs CGNAT and want to help, get in touch at ask-research@cloudflare.com. Sharing knowledge and working together helps create a better and more equitable user experience for subscribers, while preserving web service safety and security.
