AI models weight trust signals differently in cybersecurity. A comprehensive framework for building entity authority as a security vendor, covering third-party corroboration, author entities, community presence, research credibility, & authority flywheel that compounds citation share.
The post Building Entity Authority in Cybersecurity: The Trust Signals AI Models Actually Weight for Security Vendors appeared first on Security Boulevard.
A cybersecurity incident has raised concerns after it was revealed that sensitive data associated with the Jurong Region Line (JRL) MRT stations and the Changi NEWater Factory 3 were compromised. The contractor responsible for both critical infrastructure projects, Shanghai Tunnel Engineering Co (Singapore), is currently facing scrutiny as authorities investigate the breach.
Data Compromise Involving Shanghai Tunnel Engineering Co
The breach primarily affects the civil engineering firm Shanghai Tunnel Engineering Co, which has been engaged in the construction of three key stations along the JRL and the new Changi NEWater Factory 3. While the exact timing of the incident remains unclear, the compromised data has since been identified as tender documents for the projects. These documents, however, are available on the government’s GeBIZ procurement portal, which mitigates concerns over the theft of sensitive information.
On April 27, the Land Transport Authority (LTA) responded to public queries by confirming that it was aware of the cybersecurity breach and had reported the matter to the police and other relevant authorities. In an effort to minimize potential risks, the LTA temporarily suspended the contractor’s access to its digital systems, although the breach has not been reported to have disrupted the ongoing construction of the JRL MRT stations.
Impact on Changi NEWater Factory 3
While the data breach raises alarms, the national water agency PUB (Public Utilities Board) has reassured the public that there has been no access to its digital systems by Shanghai Tunnel Engineering Co. Following an internal investigation, PUB concluded that no sensitive data related to the Changi NEWater Factory 3 had been stolen. The only data compromised were the project tender documents, which, as mentioned, are publicly accessible on GeBIZ.
A PUB spokesperson emphasized that the agency maintains a "serious view" of cybersecurity and has advised the contractor to review its security protocols. Despite extensive checks on known ransomware portals and hacker forums, no evidence of leaked data related to the breach has surfaced, alleviating some concerns among stakeholders.
Company’s Response to Cybersecurity Incident
In a statement issued on April 28, Shanghai Tunnel Engineering Co (Singapore) acknowledged the cybersecurity incident, confirming that it had taken immediate steps to contain the situation. While the company did not specify when the breach occurred, it assured the public that it was cooperating fully with the authorities. Furthermore, the company has enlisted an external cybersecurity specialist to aid in the investigation.
"We are cooperating fully with the relevant authorities and kindly request that all parties allow the investigation to proceed without interference," a company representative said.
Shanghai Tunnel Engineering Co, established in 1996, is a well-established contractor with significant experience in MRT projects across Singapore. The firm has previously worked on various stations for the Circle, Downtown, and Thomson-East Coast lines. Its latest projects involve critical infrastructure, including the JRL stations and the Changi NEWater Factory 3.
Contract Details and Future Expectations
In 2019, Shanghai Tunnel Engineering Co was awarded a $465.2 million contract to design and build three JRL stations (Choa Chu Kang, Choa Chu Kang West, and Tengah) along with a 4.3km viaduct connecting them. This work includes integrating the existing Choa Chu Kang MRT station on the North-South Line into the JRL network.
In addition to the JRL projects, Shanghai Tunnel Engineering Co is also involved in the construction of the Changi NEWater Factory 3. In November 2025, a $205 million contract was awarded to Sanli M&E Engineering, which formed a joint venture with Shanghai Tunnel Engineering Co in February 2026. The joint venture will be responsible for several key aspects of the factory’s construction, including civil, structural, and architectural works, as well as external and building services.
The Changi NEWater Factory 3 is expected to be operational by 2028 and will replace the existing Bedok facility. Once completed, the factory will be capable of producing up to 50 million gallons of NEWater daily, contributing significantly to Singapore's water sustainability efforts.
The Italian Data Protection Authority fine against Poste Italiane and Postepay has reached over €12.5 million, after regulators found unlawful processing of personal data affecting millions of users.
The Italian Data Protection Authority imposed a €6.6 million penalty on Poste Italiane and €5.8 million on Postepay. The action follows an investigation launched in April 2024 after multiple complaints from users regarding how their data was being handled through mobile applications.
Italian Data Protection Authority Fine Linked to Intrusive App Monitoring
The Italian Data Protection Authority fine centers on how BancoPosta and Postepay apps collected user data. Customers were required to allow monitoring of information stored on their devices, including details about installed and active applications.
According to the companies, this access was necessary to detect malware and prevent fraud in line with payment security requirements. However, the regulator found that the scope of monitoring went too far.
Authorities stated that the data collection methods were not proportionate and resulted in excessive intrusion into users’ private lives. The ruling emphasized that fraud prevention cannot justify blanket access to personal device data.
Multiple Compliance Failures Identified
The investigation behind the Italian Data Protection Authority fine also revealed broader compliance failures. Regulators flagged insufficient transparency in how users were informed about data collection practices.
The companies were also found not to have conducted an adequate Data Protection Impact Assessment. Such assessments are required when processing activities pose high risks to individual privacy.
Further issues included weak security measures, unclear policies on how long data was stored, and irregularities in defining data controller responsibilities. These gaps raised concerns about how user data was governed internally.
As part of the enforcement action, both companies have been ordered to stop the disputed data processing practices if still ongoing. They must also align their data retention policies with regulatory requirements and report compliance to the Authority.
Italian Regulator Steps Up Enforcement
The action reinforces a broader trend of stricter enforcement by the Italian Data Protection Authority across the financial sector. The Poste Italiane and Postepay case follows another high-profile enforcement action earlier this year involving Intesa Sanpaolo.
In March 2026, the regulator imposed a €31.8 million penalty on the bank after uncovering serious lapses in how customer data was protected. The case involved unauthorized access to sensitive information of more than 3,500 customers over a period of more than two years.
Investigators found that a single employee had accessed customer records more than 6,600 times without any legitimate business reason. The breach went undetected for months, exposing weaknesses in the bank’s internal monitoring systems.
Insider Risks and Monitoring Gaps under Focus
The Intesa Sanpaolo case highlighted a different but equally critical issue. While Poste Italiane and Postepay were penalized for excessive data collection, the bank was fined for failing to detect misuse of legitimate access.
According to the Authority, the bank’s monitoring systems were not designed to identify slow, repeated misuse of access over time. This allowed the unauthorized activity to continue without triggering alerts, even when it involved high-risk individuals such as public figures.
Regulators concluded that the controls in place were not aligned with the risks associated with broad internal access to sensitive financial data. The case has since raised concerns about insider threats and the effectiveness of existing detection mechanisms within financial institutions.
Growing Pressure on Financial Services
Together, these cases reflect a tightening regulatory environment in Italy, where financial institutions are being held accountable for both overreach and underperformance in data protection.
The decision against Poste Italiane and Postepay highlights the importance of balancing fraud prevention measures with user privacy. Security controls must be proportionate, transparent, and supported by proper risk assessments.
At the same time, the Intesa Sanpaolo breach demonstrates that insufficient monitoring can be just as damaging, particularly when insider threats go unnoticed for extended periods.
With enforcement actions increasing in scale and frequency, organizations operating in the financial sector are facing mounting pressure to reassess their data governance frameworks. The regulator’s recent decisions make it clear that both excessive data collection and weak oversight can lead to significant financial and reputational consequences.
The Intesa Sanpaolo data breach was not just the result of unauthorized access; it was a failure of detection that lasted for more than two years. In an exclusive response to The Cyber Express, Italy’s data protection authority has now clarified that the bank’s monitoring systems were not equipped to identify repeated, low-volume misuse of access over time.
The Intesa Sanpaolo data breach, which has already led to a €31.8 million fine, involved a single employee accessing the data of over 3,500 customers without any valid business reason.
While earlier findings established the scale of the incident, the latest response explains why it continued undetected for so long.
Intesa Sanpaolo Data Breach: Monitoring Failed to Catch Slow, Repeated Access
At the center of the Intesa Sanpaolo data breach is a critical gap in how internal activity was monitored. In response to queries from The Cyber Express, Secretary General of the Italian Data Protection Authority, Luigi Montuori, said:
“The Authority found that the employee carried out unauthorized access over a period of more than two years without the bank’s alert systems detecting any anomaly. According to the decision, the controls adopted by the bank proved inadequate in light of the specific risks connected with its operating model, which allowed broad internal access to customer data.”
He further added:
“In particular, the Authority considered that the thresholds and monitoring mechanisms in place were not sufficient to promptly detect repeated but time-distributed improper access, including access involving politically exposed or otherwise high-profile individuals.”
This clarification is significant. It shows that the Intesa Sanpaolo data breach was not missed because of a lack of controls, but because those controls were not designed to detect how insider threats actually behave.
Rather than triggering alerts through large or unusual spikes, the access remained under the radar by being spread out over time. This exposes a common blind spot in enterprise monitoring: systems often focus on volume, not patterns.
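A toy sketch makes the blind spot concrete. All thresholds, names, and figures below are invented for illustration and are not taken from the case: a per-day spike detector never fires on roughly 6,600 queries spread over two years, while a cumulative, long-window counter flags the same activity.

```python
from collections import defaultdict

DAILY_SPIKE_THRESHOLD = 50   # hypothetical: alert if one employee exceeds this in a day
CUMULATIVE_THRESHOLD = 500   # hypothetical: alert if an employee's long-window total exceeds this

def daily_spike_alerts(accesses):
    """Volume-based detection: alerts only on single-day spikes."""
    per_day = defaultdict(int)
    for employee, day in accesses:
        per_day[(employee, day)] += 1
    return {key for key, count in per_day.items() if count > DAILY_SPIKE_THRESHOLD}

def cumulative_alerts(accesses):
    """Pattern-based detection: counts an employee's accesses across the whole window."""
    totals = defaultdict(int)
    for employee, day in accesses:
        totals[employee] += 1
    return {emp for emp, count in totals.items() if count > CUMULATIVE_THRESHOLD}

# Simulate ~6,400 queries spread over ~800 days: only 8 per day, never spiking.
accesses = [("emp_42", day) for day in range(800) for _ in range(8)]

print(daily_spike_alerts(accesses))  # empty: no single day ever crosses the spike threshold
print(cumulative_alerts(accesses))   # flags emp_42: the long-window total is far over the line
```

The design point is the window, not the threshold: slow, distributed misuse only becomes visible when access counts are aggregated per identity over months, not per day.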
No Confirmed Misuse, But Regulator Flags High Risk
Another key question in the Intesa Sanpaolo data breach has been whether the accessed data was misused beyond internal viewing. Montuori clarified in his response:
“The decision does not state that there is confirmed evidence of data exfiltration or further misuse of the data outside the unauthorized access itself. However, the Authority found that the unlawful access, its scale, its duration, and the categories of persons affected were sufficient to create a high risk for the rights and freedoms of the individuals concerned. Beyond the conclusions set out in our decision, the case is also under investigation by the judicial authority in criminal proceedings.”
Even without confirmed data exfiltration, the Intesa Sanpaolo data breach was treated as a serious violation. The regulator’s position is clear: prolonged unauthorized access, especially involving sensitive and high-profile individuals, creates inherent risk.
This reflects a broader shift in enforcement, where exposure itself, not just proven misuse, is enough to trigger regulatory action.
Post-Breach Fixes Highlight Earlier Gaps
Following the Intesa Sanpaolo data breach, the bank introduced several measures to strengthen its controls. The authority noted:
“The decision notes that, after the incident, the bank adopted a number of measures to strengthen its safeguards, including:
stronger protections for certain particularly sensitive or high-profile customers;
enhanced ex ante authorization mechanisms and ex post controls on access;
strengthened alerting and monitoring systems for anomalous access;
a dedicated task force for analysis and decision support;
the introduction of additional data masking measures;
broader governance improvements in the management of personal data breaches.
As stated in the decision, the Authority also took these remedial measures into account in its overall assessment.”
While these steps address key weaknesses, they also underline a larger issue. In the Intesa Sanpaolo data breach, the most critical safeguards (effective monitoring, stricter access control, and risk-based oversight) were strengthened only after the breach had already persisted for years.
A Broader Warning on Insider Risk
The Intesa Sanpaolo data breach offers a clear lesson for the banking sector and beyond.
Internal access remains one of the most difficult risks to control. Systems are often designed to enable efficiency, giving employees broad visibility across customer data. But without monitoring that reflects real user behavior, that access can be misused without detection.
What stands out in this case is that even access involving politically exposed and high-profile individuals did not trigger alerts. That points to a deeper issue: not just in the tools, but in how risk is defined and monitored.
As Montuori concluded:
“At this stage, we have no further comment beyond the contents of the adopted measure”.
The case may be closed from a regulatory standpoint, but its implications are not. The Intesa Sanpaolo data breach shows that insider threats do not always appear as obvious anomalies; they often build quietly over time. Without systems designed to catch that, similar incidents are likely to happen again.
The Intesa Sanpaolo data breach has resulted in a €31.8 million fine from Italy’s data protection authority, after an investigation found serious lapses in how the bank protected customer data. The case centers on unauthorized access to the banking information of more than 3,500 customers over a period of more than two years, raising fresh concerns around internal threats in the financial sector.
The Intesa Sanpaolo data breach, first reported by the bank in July 2024, turned out to be far more extensive than initially disclosed. Regulators found that a single employee had accessed sensitive banking data of 3,573 customers without any professional justification, making over 6,600 queries between February 2022 and April 2024.
Internal Access, No Early Detection
What stands out in the Intesa Sanpaolo data breach is not just the unauthorized access, but how long it went unnoticed.
According to the Italian Data Protection Authority, the bank’s internal monitoring systems failed to detect repeated anomalous access. The activity continued for months, exposing a clear gap in how employee actions were being tracked.
The access also involved individuals considered high-risk, including public figures and politically exposed persons. These profiles typically require stricter oversight, but the investigation found that enhanced controls were either not applied or were ineffective.
Regulator Flags GDPR Violations
The authority concluded that the Intesa Sanpaolo data breach violated key provisions of the GDPR, particularly around data integrity, confidentiality, and accountability.
At the core of the issue was the bank’s access model. Employees were able to query customer data across the system without sufficient restrictions. While such systems are often designed for operational flexibility, regulators noted that they must be backed by strong controls—which were lacking in this case.
The findings pointed to broader weaknesses in both technical safeguards and organizational oversight.
Delays in Intesa Sanpaolo Data Breach Notification
The bank’s response to the incident has also come under scrutiny. Authorities found that the breach notification was incomplete and delayed, falling short of legal requirements.
Customer communication was another weak point. Many affected individuals were informed only after the regulator intervened in November 2024, months after the issue had come to light.
This delay limited the ability of customers to take timely action, a factor that weighed into the final penalty.
Scale of Exposure Raises Concerns
The Intesa Sanpaolo data breach was not limited to a small set of accounts. The investigation showed that the employee accessed data linked to politicians, public figures, bank staff, and thousands of ordinary customers.
The information viewed included personal identification details as well as financial data such as account activity and payment card information.
While the bank stated there was no evidence of data being extracted or misused, regulators emphasized that unauthorized access alone constitutes a serious breach under GDPR.
Bank Responds, Tightens Controls
Intesa Sanpaolo has since taken corrective steps following the data breach. The bank said it dismissed the employee involved and has introduced stricter controls on data access.
New measures include requiring justification for accessing customer data outside assigned portfolios, enhanced alert systems to detect unusual activity, and additional layers of authorization.
The bank also argued during proceedings that not all breaches can be prevented and that its systems did eventually detect anomalies. However, regulators maintained that the delay and scale of the breach pointed to deeper issues.
A Broader Signal to the Banking Sector
The Intesa Sanpaolo data breach highlights a persistent challenge for financial institutions: insider risk.
Even with existing safeguards, employees with system access can misuse data if controls are not tight enough or actively monitored. The case shows that compliance is not just about having systems in place, but ensuring they work in practice.
For the wider banking sector, the message is clear. Monitoring cannot be passive, and access cannot be overly broad. Without that balance, even established institutions risk facing similar regulatory action.
Letter from group published by MPs blames 12 March glitch on software update to its mobile banking apps
Lloyds Banking Group exposed the personal data of nearly 500,000 customers in an IT glitch that left people’s payments, account details and national insurance numbers visible to other users, a committee of MPs has revealed.
A letter from Lloyds, published by MPs on the Treasury select committee on Friday, blamed the glitch on a software defect introduced during an IT update to its Lloyds, Halifax and Bank of Scotland mobile banking apps overnight into 12 March.
The Ministry of Finance cyberattack in the Netherlands has once again highlighted a growing concern: even critical government systems are struggling to stay ahead of increasingly advanced threats. While officials moved quickly to contain the Ministry of Finance data breach, the incident points to deeper structural challenges in public-sector cybersecurity.
According to an official release, “The Ministry of Finance's ICT security detected unauthorized access to systems for a number of primary processes within the policy department on Thursday, March 19.”
What makes this Ministry of Finance cyberattack particularly concerning is not just the breach itself, but the fact that it affected systems tied to “primary processes”—a term that signals operational significance rather than peripheral infrastructure.
Ministry of Finance Cyberattack: What Happened
The Ministry of Finance cyberattack came to light after a third party flagged suspicious activity, prompting an internal investigation. Security teams confirmed unauthorized access to several internal systems within a policy department. In response, authorities acted swiftly, blocking access and taking compromised systems offline.
While this rapid containment is commendable, it also raises a critical question: why was external notification required in the first place? In mature cybersecurity environments, internal detection mechanisms are expected to identify anomalies before third parties do.
The ministry clarified that services provided to citizens and businesses—particularly those linked to taxation, customs, and benefits—remain unaffected. However, the disruption to internal operations has impacted some employees, though the scale remains undisclosed.
At this stage, officials have not confirmed whether sensitive data was accessed or exfiltrated. No threat actor has claimed responsibility, and investigators are still working to determine the entry point and intent behind the intrusion.
A Pattern of Cyber Incidents in the Netherlands
The Ministry of Finance cyberattack does not exist in isolation. It is part of a broader pattern of cybersecurity incidents affecting Dutch government institutions in recent months.
A notable case involved the Dutch Custodial Institutions Agency (DJI), where a data breach exposed employee information, including email addresses, phone numbers, and security certificates. Reports suggest attackers may have maintained access to DJI’s internal systems for up to five months—a duration that points to gaps in detection and response capabilities.
The breach was linked to a vulnerability in Ivanti Endpoint Manager Mobile, a widely used platform for managing enterprise devices. The same flaw also impacted other institutions, including the Dutch Data Protection Authority and the judiciary.
In that case, attackers reportedly had the ability not only to access data but also to remotely control or wipe devices, an escalation that moves beyond data theft into operational disruption.
Why the Ministry of Finance Cyberattack Matters
The significance of the Ministry of Finance cyberattack goes beyond immediate disruption. It highlights three critical issues:
Detection Gaps: The reliance on third-party alerts suggests that internal monitoring systems may not be fully optimized.
Attack Surface Complexity: Government systems, often layered and legacy-heavy, present attractive targets with multiple entry points.
Persistent Threat Actors: The DJI case shows attackers are willing—and able—to maintain long-term access without detection.
These factors combined indicate that cybersecurity is no longer just a technical issue but a governance challenge.
Government Response and the Road Ahead
Authorities have stated, “We will update this message when we can share more information.” While this cautious communication is understandable, transparency will be key in maintaining public trust—especially if sensitive data exposure is later confirmed.
State Secretary Claudia van Bruggen acknowledged the seriousness of recent incidents, emphasizing the government’s responsibility to protect its workforce. At the same time, officials have reassured that there is no immediate danger to affected personnel.
Still, reassurance alone is not enough. The Ministry of Finance cyberattack should serve as a catalyst for systemic improvements, ranging from stronger endpoint security to real-time threat detection and zero-trust architecture adoption.
Over the past few days Cloudflare has been notified through our vulnerability disclosure program and the certificate transparency mailing list that unauthorized certificates were issued by Fina CA for 1.1.1.1, one of the IP addresses used by our public DNS resolver service. From February 2024 to August 2025, Fina CA issued twelve certificates for 1.1.1.1 without our permission. We did not observe unauthorized issuance for any properties managed by Cloudflare other than 1.1.1.1.
We have no evidence that bad actors took advantage of this error. To impersonate Cloudflare's public DNS resolver 1.1.1.1, an attacker would not only require an unauthorized certificate and its corresponding private key, but attacked users would also need to trust the Fina CA. Furthermore, traffic between the client and 1.1.1.1 would have to be intercepted.
While this unauthorized issuance is an unacceptable lapse in security by Fina CA, we should have caught and responded to it earlier. After speaking with Fina CA, it appears that they issued these certificates for the purposes of internal testing. However, no CA should issue certificates for domains and IP addresses without verifying control of them. At present, all certificates have been revoked. We are awaiting a full post-mortem from Fina.
While we regret this situation, we believe it is a useful opportunity to walk through how trust works on the Internet between networks like ourselves, destinations like 1.1.1.1, CAs like Fina, and devices like the one you are using to read this. To learn more about the mechanics, please keep reading.
Background
Cloudflare operates 1.1.1.1, a public DNS resolver service that millions of devices use to resolve domain names from a human-readable format such as example.com to an IP address like 192.0.2.42 or 2001:db8::2a.
The 1.1.1.1 service is accessible using various methods, across multiple domain names, such as cloudflare-dns.com and one.one.one.one, and also using various IP addresses, such as 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, and 2606:4700:4700::1001. 1.1.1.1 for Families also provides public DNS resolver services and is hosted on different IP addresses — 1.1.1.2, 1.1.1.3, 1.0.0.2, 1.0.0.3, 2606:4700:4700::1112, 2606:4700:4700::1113, 2606:4700:4700::1002, 2606:4700:4700::1003.
As originally specified in RFC 1034 and RFC 1035, the DNS protocol includes no privacy or authenticity protections. DNS queries and responses are exchanged between client and server in plain text over UDP or TCP; such plain-text queries represent around 60% of the queries received by the Cloudflare 1.1.1.1 service. The lack of privacy or authenticity protection means that any intermediary can potentially read the DNS query and response and modify them without the client or the server being aware.
To address these shortcomings, we have helped develop and deploy multiple solutions at the IETF. The two of interest to this post are DNS over TLS (DoT, RFC 7858) and DNS over HTTPS (DoH, RFC 8484). In both cases the DNS protocol itself is mainly unchanged, and the desirable security properties are implemented in a lower layer, replacing the simple use of plain text in UDP and TCP in the original specification. Both DoH and DoT use TLS to establish an authenticated, private, and encrypted channel over which DNS messages can be exchanged. To learn more you can read DNS Encryption Explained.
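The point that the DNS message itself is unchanged under DoT and DoH can be seen by building one. This is a minimal, simplified encoder for an RFC 1035 query (one question, A record, class IN; no EDNS or compression); the same bytes would travel inside TLS for DoT or as an HTTP body for DoH:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Encode a minimal DNS query message per RFC 1035.

    Header: ID, flags (only RD=1 set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    Question: QNAME as length-prefixed labels, then QTYPE and QCLASS=IN.
    """
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"  # root label terminator
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

msg = build_dns_query("example.com")
# 12-byte header + 13-byte QNAME (\x07example\x03com\x00) + 4 bytes QTYPE/QCLASS = 29 bytes
```

Sent in plain UDP, these 29 bytes are readable and modifiable by any intermediary; wrapped in a TLS channel (DoT) or an HTTPS request (DoH), the identical message gains confidentiality and integrity from the layer below.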
During the TLS handshake, the server proves its identity to the client by presenting a certificate. The client validates this certificate by verifying that it is signed by a Certification Authority that it already trusts. Only then does it establish a connection with the server. Once connected, TLS provides encryption and integrity for the DNS messages exchanged between client and server. This protects DoH and DoT against eavesdropping and tampering between the client and server.
The TLS certificates used in DoT and DoH are the same kinds of certificates HTTPS websites serve. Most website certificates are issued for domain names like example.com. When a client connects to that website, it first resolves the name example.com to an IP address like 192.0.2.42, then connects to that address. The server responds with a TLS certificate containing example.com, which the client validates.
However, DNS server certificates are used slightly differently. Certificates used for DoT and DoH have to contain the service IP addresses, not just domain names, because a client cannot resolve a domain name such as cloudflare-dns.com before it has a working resolver to contact. Instead, devices are first set up by connecting to their resolver via a known IP address, such as 1.1.1.1 in the case of the Cloudflare public DNS resolver. When this connection uses DoT or DoH, the resolver responds with a TLS certificate issued for that IP address, which the client validates. If the certificate is valid, the client believes that it is talking to the owner of 1.1.1.1 and starts sending DNS queries.
You can see that the IP addresses are included in the certificate Cloudflare’s public resolver uses for DoT/DoH:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
02:7d:c8:c5:e1:72:94:ae:c9:ed:3f:67:72:8e:8a:08
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
Validity
Not Before: Jan 2 00:00:00 2025 GMT
Not After : Jan 21 23:59:59 2026 GMT
Subject: C=US, ST=California, L=San Francisco, O=Cloudflare, Inc., CN=cloudflare-dns.com
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:cloudflare-dns.com, DNS:*.cloudflare-dns.com, DNS:one.one.one.one, IP Address:1.0.0.1, IP Address:1.1.1.1, IP Address:162.159.36.1, IP Address:162.159.46.1, IP Address:2606:4700:4700:0:0:0:0:1001, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:64, IP Address:2606:4700:4700:0:0:0:0:6400
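As an illustration of the matching logic described above, the sketch below (simplified Python; real clients follow RFC 6125, and `san_covers` is a hypothetical helper, not a Cloudflare API) checks a target against an abridged version of the SAN list in that certificate:

```python
import ipaddress

# Abridged SAN entries from the certificate shown above
SAN_DNS = ["cloudflare-dns.com", "*.cloudflare-dns.com", "one.one.one.one"]
SAN_IPS = ["1.0.0.1", "1.1.1.1", "2606:4700:4700::1001", "2606:4700:4700::1111"]

def san_covers(target: str) -> bool:
    """Simplified check of whether the SAN list covers `target`.

    Real clients implement RFC 6125 matching; this sketch handles only
    exact dNSName entries, single-label wildcards, and iPAddress entries.
    """
    try:
        ip = ipaddress.ip_address(target)
    except ValueError:
        ip = None
    if ip is not None:
        # An IP target must match an iPAddress SAN; dNSName entries don't count.
        return any(ip == ipaddress.ip_address(s) for s in SAN_IPS)
    for pattern in SAN_DNS:
        if pattern.startswith("*."):
            # A wildcard matches exactly one leftmost label.
            head, dot, rest = target.partition(".")
            if head and dot and rest == pattern[2:]:
                return True
        elif target.lower() == pattern.lower():
            return True
    return False
```

A client configured with `1.1.1.1` accepts this certificate because the IP appears as an iPAddress SAN; a certificate listing only domain names would fail that check.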
Rogue certificate issuance
The section above describes normal, expected use of the Cloudflare public DNS resolver 1.1.1.1 service, using certificates managed by Cloudflare. However, Cloudflare has been made aware of other, unauthorized certificates being issued for 1.1.1.1. Since certificate validation is the mechanism by which DoH and DoT clients establish the authenticity of a DNS resolver, this is a concern. Let's now dive a little further into the security model provided by DoH and DoT.
Consider a client that is preconfigured to use the 1.1.1.1 resolver service using DoT. The client must establish a TLS session with the configured server before it can send any DNS queries. To be trusted, the server needs to present a certificate issued by a CA that the client trusts. The collection of certificates trusted by the client is also called the root store.
A Certification Authority (CA) is an organisation, such as DigiCert in the section above, whose role is to receive requests to sign certificates and verify that the requester has control of the domain. In this incident, Fina CA issued certificates for 1.1.1.1 without Cloudflare's involvement. This means that Fina CA did not properly check whether the requestor had legitimate control over 1.1.1.1. According to Fina CA:
“They were issued for the purpose of internal testing of certificate issuance in the production environment. An error occurred during the issuance of the test certificates when entering the IP addresses and as such they were published on Certificate Transparency log servers.”
Although it’s not clear whether Fina CA sees it as an error, we emphasize that it is not an error to publish test certificates on Certificate Transparency (more about what that is later on). Instead, the error at hand is Fina CA using their production keys to sign a certificate for an IP address without permission of the controller. We have talked about misuse of 1.1.1.1 in documentation, lab, and testing environments at length. Instead of the Cloudflare public DNS resolver 1.1.1.1 IP address, Fina should have used an IP address it controls itself.
Unauthorized certificates are unfortunately not uncommon, whether due to negligence — such as IdenTrust in November 2024 — or compromise. Famously in 2011, the Dutch CA DigiNotar was hacked, and its keys were used to issue hundreds of certificates. This hack was a wake-up call and motivated the introduction of Certificate Transparency (CT), later formalised in RFC 6962. The goal of Certificate Transparency is not to directly prevent misissuance, but to be able to detect any misissuance once it has happened, by making sure every certificate issued by a CA is publicly available for inspection.
In Certificate Transparency, several independent parties, including Cloudflare, operate public logs of issued certificates. Many modern browsers do not accept certificates unless they provide proof, in the form of signed certificate timestamps (SCTs), that the certificate has been logged in at least two logs. Domain owners can therefore monitor all public CT logs for any certificate containing domains they care about. If they see a certificate for their domains that they did not authorize, they can raise the alarm. CT is also the data source for public services such as crt.sh and Cloudflare Radar's certificate transparency page.
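In essence, a domain owner's CT monitor is a filter: flag any log entry that names a watched identifier but was signed by a CA the owner never uses. The sketch below assumes entries shaped like crt.sh's JSON output (`issuer_name`, `name_value`); the sample records and the authorized-issuer list are illustrative, not Cloudflare's actual configuration:

```python
WATCHED = {"1.1.1.1", "cloudflare-dns.com"}   # identifiers we monitor
AUTHORIZED_ISSUERS = {"DigiCert Inc"}         # CAs we actually use (illustrative)

def flag_unexpected(entries):
    """Return CT entries that name a watched identifier but were signed
    by a CA we never authorized. Fields mirror crt.sh's JSON output."""
    alerts = []
    for entry in entries:
        names = set(entry["name_value"].split("\n"))
        if names & WATCHED and entry["issuer_name"] not in AUTHORIZED_ISSUERS:
            alerts.append(entry)
    return alerts

sample = [
    {"issuer_name": "DigiCert Inc",
     "name_value": "cloudflare-dns.com\n1.1.1.1"},    # expected issuance
    {"issuer_name": "Financijska agencija",
     "name_value": "testssl.finatest.hr\n1.1.1.1"},   # would raise the alarm
]
```

In production the entries would be polled from CT log monitors or crt.sh rather than hard-coded, but the alerting decision is this simple set intersection.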
Not all clients require proof of inclusion in certificate transparency. Browsers do, but most DNS clients don’t. We were fortunate that Fina CA did submit the unauthorized certificates to the CT logs, which allowed them to be discovered.
Investigation into potential malicious use
Our immediate concern was that someone had maliciously used the certificates to impersonate the 1.1.1.1 service. Such an attack would require all the following:
An attacker would require a rogue certificate and its corresponding private key.
Attacked clients would need to trust the Fina CA.
Traffic between the client and 1.1.1.1 would have to be intercepted.
In light of this incident, we have reviewed these requirements one by one:
1. We know that a certificate was issued without Cloudflare's involvement. We must assume that a corresponding private key exists, which is not under Cloudflare's control and could be used by an attacker. Fina CA wrote to us that the private keys were exclusively in Fina's controlled environment and were immediately destroyed, even before the certificates were revoked. As we have no way to verify this, we have taken, and continue to take, steps to detect malicious use, as described in point 3.
2. For these certificates to be used nefariously, the client's root store must include the Certification Authority (CA) that issued them, and some clients do trust Fina CA: it is included by default in Microsoft's root store and is recognised as an EU Trust Service provider. We can exclude other clients, as the CA certificate is not included by default in the root stores of Android, Apple, Mozilla, or Chrome; users of those platforms cannot have been affected under default settings. Upon discovering the problem, we immediately reached out to Fina CA, Microsoft, and the EU Trust Service provider. Microsoft responded quickly and started rolling out an update to its disallowed list, which should cause clients that use it to stop trusting the certificates.
3. Finally, we have launched an investigation into possible interception between users and 1.1.1.1. The first way this could happen is when the attacker is on the path of the client request. Such man-in-the-middle attacks are likely to be invisible to us: clients would get responses from their on-path middlebox, and we have no reliable way of telling that this is happening. On-path interference has been a persistent problem for 1.1.1.1, which we have been working on ever since we announced 1.1.1.1.
A second scenario can occur when a malicious actor is off-path, but is able to hijack 1.1.1.1 routing via BGP. These are scenarios we have discussed in a previous blog post, and increasing adoption of RPKI route origin validation (ROV) makes BGP hijacks with high penetration harder. We looked at the historical BGP announcements involving 1.1.1.1, and have found no evidence that such routing hijacks took place.
Although we cannot be certain, so far we have seen no evidence that these certificates have been used to impersonate Cloudflare public DNS resolver 1.1.1.1 traffic. In later sections we discuss the steps we have taken to prevent such impersonation in the future, as well as concrete actions you can take to protect your own systems and users.
A closer look at the unauthorized certificates' attributes
All unauthorized certificates for 1.1.1.1 were valid for exactly one year and included other domain names. Most of these domain names are not registered, which indicates that the certificates were issued without proper domain control validation. This violates sections 3.2.2.4 and 3.2.2.5 of the CA/Browser Forum’s Baseline Requirements, and sections 3.2.2.3 and 3.2.2.4 of the Fina CA Certificate Policy.
The full list of domain names we identified on the unauthorized certificates is as follows:
It’s also worth noting that the Subject attribute points to a fictional organisation TEST D.D., as can be seen on this unauthorized certificate:
Serial Number:
a5:30:a2:9c:c1:a5:da:40:00:00:00:00:56:71:f2:4c
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=HR, O=Financijska agencija, CN=Fina RDC 2015
Validity
Not Before: Nov 2 23:45:15 2024 GMT
Not After : Nov 2 23:45:15 2025 GMT
Subject: C=HR, O=TEST D.D., L=ZAGREB, CN=testssl.finatest.hr, serialNumber=VATHR-32343828408.306
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:testssl.finatest.hr, DNS:testssl2.finatest.hr, IP Address:1.1.1.1
Incident timeline and impact
All timestamps are UTC. All certificates are identified by their date of validity.
The first certificate was issued with validity starting February 2024 and was revoked 33 minutes later. Eleven certificate issuances with the common name 1.1.1.1 followed, from February 2024 to August 2025. Public reports were made on Hacker News and on the certificate-transparency mailing list in early September 2025, to which Cloudflare responded.
While responding to the incident, we identified the full list of misissued certificates, their revocation status, and which clients trust them.
First response by Cloudflare on the mailing list about starting the investigation
2025-09-03 12:08:00: Incident declared
2025-09-03 12:16:00: Notification of the unauthorised issuance sent to Fina CA, Microsoft Root Store, and the EU Trust Service provider
2025-09-03 12:23:00: Cloudflare identifies an initial list of nine rogue certificates
2025-09-03 12:24:00: Outreach to Fina CA to inform them about the unauthorized issuance, requesting revocation
2025-09-03 12:26:00: Identified the number of requests served on the 1.1.1.1 IP address, and the associated names/services
2025-09-03 12:42:00: As a precautionary measure, began an investigation to rule out the possibility of a BGP hijack of 1.1.1.1
2025-09-03 18:48:00: Second notification of the incident to Fina CA
2025-09-03 21:27:00: Microsoft Root Store notifies us that they are preventing further use of the identified unauthorized certificates using their quick-revocation mechanism
2025-09-04 06:13:27: Fina revoked all certificates
2025-09-04 12:44:00: Cloudflare receives a response from Fina indicating “an error occurred during the issuance of the test certificates when entering the IP addresses and as such they were published on Certificate Transparency log servers. [...] Fina will eliminate the possibility of such an error recurring.”
It is disappointing that we failed to properly monitor certificates for our own domains. We failed three times. First, because 1.1.1.1 is an IP-address certificate and our system failed to alert on those. Second, because even if we had received certificate issuance alerts, as any of our customers can, we had not implemented sufficient filtering; with the sheer number of names and issuances we manage, it has not been possible for us to keep up with manual reviews. Finally, because of this noisy monitoring, we had not enabled alerting for all of our domains. We are addressing all three shortcomings.
We double-checked all certificates issued for our names, including but not limited to 1.1.1.1, using certificate transparency, and confirmed that as of 3 September, the Fina CA issued certificates are the only unauthorized issuances. We contacted Fina, and the root programs we know that trust them, to ask for revocation and investigation. The certificates have been revoked.
Despite no indication of usage of these certificates so far, we take this incident extremely seriously. We have identified several steps we can take to address the risk of these sorts of problems occurring in the future, and we plan to start working on them immediately:
Alerting: Cloudflare will improve alerting and escalation for the issuance of certificates covering Cloudflare-owned domains and IP addresses, including 1.1.1.1.
Transparency: The issuance of these unauthorised 1.1.1.1 certificates was detected because Fina CA logged them in Certificate Transparency. CT inclusion is not enforced by most DNS clients, so this detection was fortunate. We are working on bringing transparency to non-browser clients, in particular DNS clients that rely on TLS.
Bug Bounty: Our procedure for triaging reports made through our vulnerability disclosure program caused a delayed response. We are revising our triage process to ensure such reports get the right visibility.
Monitoring: During this incident, our team relied on crt.sh for a convenient UI to explore CA-issued certificates. We'd like to give a shout-out to the Sectigo team for maintaining this tool. Given that Cloudflare is an active CT monitor, we have started to build a dedicated UI to explore our data in Radar. We are also looking to enable exploration of certificates with IP addresses as common names in Radar.
What steps should you take?
This incident demonstrates the disproportionate impact that the current root store model can have: a single certification authority going rogue is enough to put everyone at risk.
If you are an IT manager with a fleet of managed devices, you should consider whether you need to take direct action to revoke these unauthorized certificates. We provide the list in the timeline section above. As the certificates have since been revoked, it is possible that no direct intervention is required; however, system-wide revocation is neither instantaneous nor automatic, so we recommend checking.
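One way to run such a check is to compare certificate serial numbers observed in your environment against the rogue serials. The sketch below is illustrative Python, not a Cloudflare tool: `is_rogue` is a hypothetical helper, and only the single rogue serial printed earlier in this post is included; the rest of the list would be added the same way.

```python
def normalize_serial(serial: str) -> str:
    """Canonicalize a certificate serial: drop colons/spaces, lowercase,
    and strip leading zeros so display variants compare equal."""
    return serial.replace(":", "").replace(" ", "").lower().lstrip("0")

# Serial of the rogue certificate shown above; the remaining rogue serials
# from the incident would be added to this set as well.
ROGUE_SERIALS = {
    normalize_serial("a5:30:a2:9c:c1:a5:da:40:00:00:00:00:56:71:f2:4c"),
}

def is_rogue(serial: str) -> bool:
    """True if a serial (in any common display format) is on the rogue list."""
    return normalize_serial(serial) in ROGUE_SERIALS
```

The normalization step matters because different tools print serials with different casing, separators, and zero-padding; comparing raw strings would miss matches.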
If you are tasked with reviewing the policy of a root store that includes Fina CA, you should take immediate action to review its inclusion in your program. The issues identified through the course of this investigation raise concerns and require a clear report and follow-up from the CA. In addition, to make it possible to detect future incidents of this kind, you should consider requiring all CAs in your root store to participate in Certificate Transparency. Without CT logs, problems such as the one we describe here are impossible to address before they result in impact to end users.
We are not suggesting that you should stop using DoH or DoT. DNS over UDP and TCP are unencrypted, which puts every single query and response at risk of tampering and unauthorised surveillance. However, we believe that DoH and DoT client security could be improved if clients required that server certificates be included in a certificate transparency log.
Conclusion
This event is the first time we have observed a rogue issuance of a certificate for our public DNS resolver 1.1.1.1 service. While we have no evidence this issuance was malicious, we know that future attempts might be.
We plan to accelerate how quickly we discover and alert on these types of issues ourselves. We know that we can catch these earlier, and we plan to do so.
The identification of these kinds of issues relies on an ecosystem of partners working together to support Certificate Transparency. We are grateful to the monitors who noticed and reported this issue.
Cybercriminal groups peddling sophisticated phishing kits that convert stolen card data into mobile wallets have recently shifted their focus to targeting customers of brokerage services, new research shows. Undeterred by security controls at these trading platforms that block users from wiring funds directly out of accounts, the phishers have pivoted to using multiple compromised brokerage accounts in unison to manipulate the prices of foreign stocks.
Image: Shutterstock, WhataWin.
This so-called “ramp and dump” scheme borrows its name from age-old “pump and dump” scams, wherein fraudsters purchase a large number of shares in some penny stock, and then promote the company in a frenzied social media blitz to build up interest from other investors. The fraudsters dump their shares after the price of the penny stock increases to some degree, which usually then causes a sharp drop in the value of the shares for legitimate investors.
With ramp and dump, the scammers do not need to rely on ginning up interest in the targeted stock on social media. Rather, they will preposition themselves in the stock that they wish to inflate, using compromised accounts to purchase large volumes of it and then dumping the shares after the stock price reaches a certain value. In February 2025, the FBI said it was seeking information from victims of this scheme.
“In this variation, the price manipulation is primarily the result of controlled trading activity conducted by the bad actors behind the scam,” reads an advisory from the Financial Industry Regulatory Authority (FINRA), a private, non-profit organization that regulates member brokerage firms. “Ultimately, the outcome for unsuspecting investors is the same—a catastrophic collapse in share price that leaves investors with unrecoverable losses.”
“They will often coordinate with other actors and will wait until a certain time to buy a particular Chinese IPO [initial public offering] stock or penny stock,” said Merrill, who has been chronicling the rapid maturation and growth of the China-based phishing community over the past three years.
“They’ll use all these victim brokerage accounts, and if needed they’ll liquidate the account’s current positions, and will preposition themselves in that instrument in some account they control, and then sell everything when the price goes up,” he said. “The victim will be left with worthless shares of that equity in their account, and the brokerage may not be happy either.”
Merrill said the early days of these phishing groups — between 2022 and 2024 — were typified by phishing kits that used text messages to spoof the U.S. Postal Service or some local toll road operator, warning about a delinquent shipping or toll fee that needed paying. Recipients who clicked the link and provided their payment information at a fake USPS or toll operator site were then asked to verify the transaction by sharing a one-time code sent via text message.
In reality, the victim’s bank is sending that code to the mobile number on file for their customer because the fraudsters have just attempted to enroll that victim’s card details into a mobile wallet. If the visitor supplies that one-time code, their payment card is then added to a new mobile wallet on an Apple or Google device that is physically controlled by the phishers.
An image from the Telegram channel for a popular Chinese mobile phishing kit vendor shows 10 mobile phones for sale, each loaded with 4-6 digital wallets from different financial institutions.
This China-based phishing collective exposed a major weakness common to many U.S.-based financial institutions that already require multi-factor authentication: The reliance on a single, phishable one-time token for provisioning mobile wallets. Happily, Merrill said many financial institutions that were caught flat-footed on this scam two years ago have since strengthened authentication requirements for onboarding new mobile wallets (such as requiring the card to be enrolled via the bank’s mobile app).
But just as squeezing one part of a balloon merely forces the air trapped inside to bulge into another area, fraudsters don’t go away when you make their current enterprise less profitable: They just shift their focus to a less-guarded area. And lately, that gaze has settled squarely on customers of the major brokerage platforms, Merrill said.
THE OUTSIDER
Merrill pointed to several Telegram channels operated by some of the more accomplished phishing kit sellers, which are full of videos demonstrating how every feature in their kits can be tailored to the attacker’s target. The video snippet below comes from the Telegram channel of “Outsider,” a popular Mandarin-speaking phishing kit vendor whose latest offering includes a number of ready-made templates for using text messages to phish brokerage account credentials and one-time codes.
According to Merrill, Outsider is a woman who previously went by the handle “Chenlun.” KrebsOnSecurity profiled Chenlun’s phishing empire in an October 2023 story about a China-based group that was phishing mobile customers of more than a dozen postal services around the globe. In that case, the phishing sites were using a Telegram bot that sent stolen credentials to the “@chenlun” Telegram account.
Chenlun’s phishing lures are sent via Apple’s iMessage and Google’s RCS service and spoof one of the major brokerage platforms, warning that the account has been suspended for suspicious activity and that recipients should log in and verify some information. The missives include a link to a phishing page that collects the customer’s username and password, and then asks the user to enter a one-time code that will arrive via SMS.
The new phish kit videos on Outsider’s Telegram channel only feature templates for Schwab customers, but Merrill said the kit can easily be adapted to target other brokerage platforms. One reason the fraudsters are picking on brokerage firms, he said, has to do with the way they handle multi-factor authentication.
Schwab clients are presented with two options for second factor authentication when they open an account. Users who select the option to only prompt for a code on untrusted devices can choose to receive it via text message, an automated inbound phone call, or an outbound call to Schwab. With the “always at login” option selected, users can choose to receive the code through the Schwab app, a text message, or a Symantec VIP mobile app.
In response to questions, Schwab said it regularly updates clients on emerging fraud trends, including this specific type, which the company addressed in communications sent to clients earlier this year.
The 2FA text message from Schwab warns recipients against giving away their one-time code.
“That message focused on trading-related fraud, highlighting both account intrusions and scams conducted through social media or messaging apps that deceive individuals into executing trades themselves,” Schwab said in a written statement. “We are aware and tracking this trend across several channels, as well as others like it, which attempt to exploit SMS-based verification with stolen credentials. We actively monitor for suspicious patterns and take steps to disrupt them. This activity is part of a broader, industry-wide threat, and we take a multi-layered approach to address and mitigate it.”
Other popular brokerage platforms allow similar methods for multi-factor authentication. Fidelity requires a username and password on initial login, and offers the ability to receive a one-time token via SMS, an automated phone call, or by approving a push notification sent through the Fidelity mobile app. However, all three of these methods for sending one-time tokens are phishable; even with the brokerage firm’s app, the phishers could prompt the user to approve a login request that they initiated in the app with the phished credentials.
Vanguard offers customers a range of multi-factor authentication choices, including the option to require a physical security key in addition to one’s credentials on each login. A security key implements a robust form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by connecting an enrolled USB or Bluetooth device and pressing a button. The key works without the need for any special software drivers, and the nice thing about it is your second factor cannot be phished.
THE PERFECT CRIME?
Merrill said that in many ways the ramp-and-dump scheme is the perfect crime because it leaves precious few connections between the victim brokerage accounts and the fraudsters.
“It’s really genius because it decouples so many things,” he said. “They can buy shares [in the stock to be pumped] in their personal account on the Chinese exchanges, and the price happens to go up. The Chinese or Hong Kong brokerages aren’t going to see anything funky.”
Merrill said it’s unclear exactly how those perpetrating these ramp-and-dump schemes coordinate their activities, such as whether the accounts are phished well in advance or shortly before being used to inflate the stock price of Chinese companies. The latter possibility would fit nicely with the existing human infrastructure these criminal groups already have in place.
For example, KrebsOnSecurity recently wrote about research from Merrill and other researchers showing the phishers behind these slick mobile phishing kits employed people to sit for hours at a time in front of large banks of mobile phones being used to send the text message lures. These technicians were needed to respond in real time to victims who were supplying the one-time code sent from their financial institution.
The ashtray says: You’ve been phishing all night.
“You can get access to a victim’s brokerage with a one-time passcode, but then you sort of have to use it right away if you can’t set new security settings so you can come back to that account later,” Merrill said.
The rapid pace of innovations produced by these China-based phishing vendors is due in part to their use of artificial intelligence and large language models to help develop the mobile phishing kits, he added.
“These guys are vibe coding stuff together and using LLMs to translate things or help put the user interface together,” Merrill said. “It’s only a matter of time before they start to integrate the LLMs into their development cycle to make it more rapid. The technologies they are building definitely have helped lower the barrier of entry for everyone.”
The GitHub Advisory Database (Advisory DB) is a vital resource for developers, providing a comprehensive list of known security vulnerabilities and malware affecting open source packages. This post analyzes trends in the Advisory DB, highlighting the growth in reviewed advisories, ecosystem coverage, and source contributions in 2024. We’ll delve into how GitHub provides actionable data to secure software projects.
Advisories
The GitHub Advisory Database contains a list of known security vulnerabilities and malware, grouped in three categories:
GitHub-reviewed advisories: Manually reviewed advisories in software packages that GitHub supports.
Unreviewed advisories: These are automatically pulled from the National Vulnerability Database (NVD) and are either in the process of being reviewed, do not affect a supported package, or do not discuss a valid vulnerability.
Malware advisories: These are specific to malware threats identified by the npm security team.
Reviewed advisories
GitHub-reviewed advisories are security vulnerabilities that have been mapped to packages in ecosystems we support. We carefully review each advisory for validity and ensure that they have a full description, and contain both ecosystem and package information.
Every year, GitHub increases the number of advisories we publish. We have been able to do this due to the increase in advisories coming from our sources (see Sources section below), expanding our ecosystem coverage (also described below), and review campaigns of advisories published before we started the database.
In the past five years, the database has gone from fewer than 400 reviewed advisories to over 20,000 reviewed advisories in October of 2024.
Unreviewed advisories
Unreviewed advisories are security vulnerabilities that we publish automatically into the GitHub Advisory Database directly from the National Vulnerability Database feed. The name is a bit of a misnomer, as many of these advisories have actually been reviewed by a GitHub analyst. They fall into this category because they are not found in a package in one of the supported ecosystems or do not discuss a valid vulnerability, and they have been reviewed by GitHub analysts outside the GitHub Security Lab. Even though most of these advisories will never become reviewed advisories, we still publish them so that you do not have to search multiple databases at once.
Malware
Malware advisories relate to vulnerabilities caused by malware, and are security advisories that GitHub publishes automatically into the GitHub Advisory Database directly from information provided by the npm security team. Malware advisories are currently exclusive to the npm ecosystem. GitHub doesn’t edit or accept community contributions on these advisories.
Ecosystem coverage
GitHub-reviewed advisories include security vulnerabilities that have been mapped to packages in ecosystems we support. Generally, we name our supported ecosystems after the software programming language’s associated package registry. We review advisories if they are for a vulnerability in a package that comes from a supported registry.
Vulnerabilities in Maven and Composer packages account for nearly half of the advisories in the database. npm, pip, and Go make up much of the rest, while the other ecosystems have a much smaller footprint.
This has not always been the case. When the database was initially launched, npm advisories dominated it, but as we have expanded our coverage and added support for new ecosystems, the distribution mix has changed.
Sources: Where do the advisories come from?
We add advisories to the GitHub Advisory Database from the following sources:
NVD: This is a huge source of vulnerabilities covering all types of software. We publish all NVD advisories but only review those relevant to our supported ecosystems, which reduces noise for our users.
GitHub Repository Advisories: The second largest source is made up of advisories published through GitHub’s repository security advisory feature. Similar to NVD, these aren’t restricted to our supported ecosystems. However, we provide better coverage of the repository advisories because they focus exclusively on open source software.
Community Contributions: These are reports from the community that are almost exclusively requesting updates to existing advisories.
Other Specialized Sources: Sources like PyPA Advisories (for Python) and Go Vulncheck (for Go) that focus on specific ecosystems. Because they only cover packages within our supported ecosystems, most of their advisories are relevant to us and get reviewed.
If you add up the number of reviewed advisories from each source, you will find that the total is more than the total number of reviewed advisories. This is because multiple sources can publish an advisory for the same vulnerability. In fact, over half of our advisories have more than one source.
Of the advisories with a single source, nearly all of them come from NVD/CVE. This justifies NVD/CVE as a source, even though it is by far the noisiest.
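The source-overlap effect described above can be illustrated with a short sketch. The advisory IDs, source names, and data shape below are made up for illustration; they are not the database's actual export format:

```python
from collections import defaultdict

# Hypothetical mapping of advisory IDs to the sources that reported them.
advisory_sources = {
    "GHSA-aaaa": ["nvd", "repository-advisory"],
    "GHSA-bbbb": ["nvd"],
    "GHSA-cccc": ["nvd", "pypa", "community"],
    "GHSA-dddd": ["go-vulncheck"],
}

def source_overlap(advisories):
    """Count advisories by how many distinct sources reported them."""
    counts = defaultdict(int)
    for sources in advisories.values():
        counts[len(set(sources))] += 1
    return dict(counts)

# Summing per-source totals double-counts multi-source advisories,
# which is why that sum exceeds the number of distinct advisories.
per_source_total = sum(len(set(s)) for s in advisory_sources.values())
print(source_overlap(advisory_sources))  # {2: 1, 1: 2, 3: 1}
```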
2024 saw a significant increase (39%) in the number of advisories imported from our sources, largely driven by an increase in the number of CVE records published.
CVE Numbering Authority
In addition to publishing advisories in the GitHub Advisory Database, we are also a CVE Numbering Authority (CNA) for any repository on GitHub. This means that we issue CVE IDs for vulnerabilities reported to us by maintainers, and we publish the vulnerabilities to the CVE database once the corresponding repository advisory is published.
GitHub published over 2,000 CVE records in 2024, making us the fifth-largest CNA in the CVE Program.
The GitHub CNA is open to all repositories on GitHub, not just ones in a supported ecosystem.
Advisory prioritization
Given the constant deluge of reported vulnerabilities, you’ll want tools that can help you prioritize your remediation efforts. To that end, GitHub provides additional data in each advisory to help readers prioritize their vulnerabilities. In particular, each advisory includes:
Severity Rating/CVSS: A low to critical rating for how severe the vulnerability is likely to be, along with a corresponding CVSS score and vector.
CWE: CWE identifiers provide a programmatic method for determining the type of vulnerability.
EPSS: The Exploit Prediction Scoring System, or EPSS, is a system devised by the global Forum of Incident Response and Security Teams (FIRST) for quantifying the likelihood a vulnerability will be attacked in the next 30 days.
Using these ratings, half of all vulnerabilities (15% are Critical and 35% are High) warrant immediate or near-term attention. By focusing remediation efforts on these, you can significantly reduce risk exposure while managing workload more efficiently.
The CVSS specification says the base score we provide “reflects the severity of a vulnerability according to its intrinsic characteristics which are constant over time and assumes the reasonable worst-case impact across different deployed environments.” However, the worst-case scenario for your deployment may not be the same as CVSS’s. After all, a crash in a word processor is not as severe as a crash in a server. To give more context to your prioritization, GitHub allows you to filter alerts based on the type of vulnerability or weakness using CWE identifiers. So you can choose to never see another regular expression denial of service (CWE-1333) vulnerability again, or to always see SQL injection (CWE-89) vulnerabilities.
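As a rough illustration of CWE-based filtering, consider the sketch below. The advisory records, IDs, and filter sets are made up; this is not the shape of GitHub's actual alert API, just the set logic the filters imply:

```python
# Illustrative advisory records tagged with CWE identifiers.
advisories = [
    {"id": "GHSA-1111", "cwes": ["CWE-1333"], "summary": "ReDoS in parser"},
    {"id": "GHSA-2222", "cwes": ["CWE-89"], "summary": "SQL injection"},
    {"id": "GHSA-3333", "cwes": ["CWE-79"], "summary": "XSS in template"},
]

IGNORED_CWES = {"CWE-1333"}   # e.g. never show regex denial of service
PRIORITY_CWES = {"CWE-89"}    # e.g. always surface SQL injection

# Hide advisories whose CWEs intersect the ignore set...
visible = [a for a in advisories if not IGNORED_CWES & set(a["cwes"])]
# ...and flag the ones that match a priority weakness class.
urgent = [a for a in advisories if PRIORITY_CWES & set(a["cwes"])]
```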
Still drowning in vulnerabilities? Try using EPSS to focus on vulnerabilities likely to be attacked in the next 30 days. EPSS uses data from a variety of sources to create a probability of whether exploitation attempts will be seen in the next 30 days for a given vulnerability. As you can see from the chart below, if you focus on vulnerabilities with EPSS scores of 10% or higher (approx. 7% of all vulnerabilities in the Advisory DB), you can cover nearly all of the vulnerabilities that are likely to see exploit activity.
| EPSS probability | Vulnerabilities in range | Percentage of overall vulnerabilities | Expected vulnerabilities in range attacked within the next 30 days | Percentage of total attacked vulnerabilities |
| --- | --- | --- | --- | --- |
| High (>= 10%) | 1,440 | 7.17% | 741 | 85.96% |
| Moderate (>= 1%, < 10%) | 2,687 | 13.37% | 84 | 9.74% |
| Low (>= 0.1%, < 1%) | 10,264 | 51.09% | 35 | 4.06% |
| Very Low (< 0.1%) | 5,701 | 28.37% | 2 | 0.23% |
Important caveats to remember when using EPSS:
Low probability events occur.
EPSS does not tell you whether a vulnerability has been exploited; it only estimates how likely exploitation is.
EPSS scores are updated daily and will change as new information comes in, so a low-probability vulnerability today may become high probability tomorrow.
For more details on how to use CVSS and EPSS for prioritization, see our blog on prioritizing Dependabot alerts.
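A minimal triage sketch using those EPSS thresholds is shown below. The CVE IDs and scores are invented for illustration; the bucketing itself follows the ranges in the table:

```python
def epss_bucket(score):
    """Bucket an EPSS probability (0.0-1.0) into the ranges used above."""
    if score >= 0.10:
        return "High"
    if score >= 0.01:
        return "Moderate"
    if score >= 0.001:
        return "Low"
    return "Very Low"

# Hypothetical alerts: review High first, then Moderate, batch the rest.
alerts = [("CVE-A", 0.42), ("CVE-B", 0.003), ("CVE-C", 0.02), ("CVE-D", 0.0004)]
by_bucket = {}
for cve, score in alerts:
    by_bucket.setdefault(epss_bucket(score), []).append(cve)
```

Because EPSS scores change daily, a real pipeline would re-bucket alerts on each refresh rather than caching the result.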
Actionable data
The GitHub Advisory DB isn’t just a repository of vulnerabilities. It powers tools that help developers secure their projects. Services like Dependabot use the Advisory DB to:
Identify vulnerabilities: It checks if your projects use any software packages with known vulnerabilities.
Suggest fixes: It recommends updated versions of packages that fix those vulnerabilities when available.
Reduce noise: You’ll only get notified about vulnerabilities that affect the version of the package you are using.
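Conceptually, the "reduce noise" step boils down to checking whether your installed version falls inside the advisory's vulnerable range. The sketch below uses a naive dotted-integer comparison and a hypothetical range string; it is not Dependabot's actual matching logic, which must handle each ecosystem's full versioning scheme:

```python
import operator
import re

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "=": operator.eq}

def parse(version):
    # Naive dotted-integer parsing; real ecosystems need semver/PEP 440 etc.
    return tuple(int(p) for p in version.split("."))

def is_vulnerable(installed, vulnerable_range):
    """Check an installed version against a range like '>= 4.0.0, < 4.17.21'."""
    for clause in vulnerable_range.split(","):
        op, ver = re.match(r"\s*(<=|>=|<|>|=)\s*([\d.]+)", clause).groups()
        if not OPS[op](parse(installed), parse(ver)):
            return False  # one clause failed, so the version is outside the range
    return True

print(is_vulnerable("4.17.20", ">= 4.0.0, < 4.17.21"))  # True
print(is_vulnerable("4.17.21", ">= 4.0.0, < 4.17.21"))  # False
```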
Take this with you
The GitHub Advisory Database is a powerful resource for tracking open source software vulnerabilities, with over 22,000 reviewed advisories to date. By focusing on popular package registries, GitHub allows you to definitively connect vulnerabilities to the packages you are using. Additional data such as CVSS and EPSS scores help you properly prioritize your mitigation efforts.
GitHub’s role as a CVE Numbering Authority extends beyond the Advisory Database, ensuring that thousands of vulnerabilities each year reach the broader CVE community. Want to ensure your vulnerability fix reaches your users? Create a GitHub security advisory in your repository to take advantage of both the GitHub Advisory Database and GitHub’s CNA services.
Ever come across a Common Vulnerabilities and Exposures (CVE) ID affecting software you use or maintain and thought the information could be better?
CVE IDs are a widely-used system for tracking software vulnerabilities. When a vulnerable dependency affects your software, you can create a repository security advisory to alert others. But if you want your insight to reach the most upstream data source possible, you’ll need to contact the CVE Numbering Authority (CNA) that issued the vulnerability’s CVE ID.
GitHub, as part of a community of over 400 CNAs, can help in cases when GitHub issued the CVE (such as with this community contribution). And with just a few key details, you can identify the right CNA and reach out with the necessary context. This guide shows you how.
Every CVE record contains an entry that includes the name of the CNA that issued the CVE ID. The CNA is responsible for updating the CVE record after its initial publication, so any requests should be directed to them.
On cve.org, the CNA is listed as the first piece of information under the “Required CVE Record Information” header. The information is also available on the right side of the page.
On nvd.nist.gov, information about the issuing CNA is available in the “QUICK INFO” box. The issuing CNA is called “Source”.
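For CVE records in the CVE JSON 5.x format (as served by the cve.org API), the issuing CNA's short name lives in the `cveMetadata` object. Here is a sketch using a trimmed, illustrative record rather than a live API call:

```python
# Trimmed, illustrative CVE JSON 5.x record; not real data.
record = {
    "cveMetadata": {
        "cveId": "CVE-2024-0001",
        "assignerShortName": "GitHub_M",
        "state": "PUBLISHED",
    },
}

def issuing_cna(cve_record):
    """Return the short name of the CNA that issued this CVE ID, if present."""
    return cve_record.get("cveMetadata", {}).get("assignerShortName")

print(issuing_cna(record))  # GitHub_M
```

That short name is what you would search for on the CNA partners list to find contact details.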
After identifying the CNA from the CVE record, locate their official contact information to request updates or changes. That information is available on the CNA partners website at https://www.cve.org/PartnerInformation/ListofPartners.
Search for the CNA’s name in the search bar. Some organizations operate more than one CNA, so make sure the CVE you are asking about corresponds to the correct one.
The left column, under “Partner,” has the name of the CNA that links to a profile page with its scope and contact information.
Most CNAs have an email address for CVE-related communications. Click the link under “Step 2: Contact” that says Email to find the CNA’s email address.
The most notable exception to the general preference for email communication among CNAs is the MITRE Corporation, the world’s most prolific CVE Numbering Authority. MITRE uses a webform at https://cveform.mitre.org/ for submitting requests to create, update, dispute, or reject CVEs.
Whatever the contact method, your request should include:
The information you want to add, remove, or change within the CVE record
Why you want to change the information
Supporting evidence, usually in the form of a reference link
Including publicly available reference links is important, as they justify the changes. Examples of reference links include:
A publicly available vulnerability report, advisory, or proof-of-concept
A fix commit or release notes that describe a patch
An issue in the affected repository in which the maintainer discusses the vulnerability in their software with the community
A community contribution pull request that suggests a change to the CVE’s corresponding GitHub Security Advisory
When submitting changes, keep in mind that the CNA isn’t your only audience. Clear context around disclosure decisions and vulnerability details helps the broader developer and security community understand the risks and make informed decisions about mitigation.
The CVE Program’s CNA rules set expectations for how quickly CNAs must respond:
“3.2.4.1 Subject to their respective CNA Scope Definitions, CNAs MUST respond in a timely manner to CVE ID assignment requests submitted through the CNA’s public POC.
3.2.4.2 CNAs SHOULD document their expected response times, including those for the public POC.”
The CNA rules establish firm timelines for assignment of CVE IDs to vulnerabilities that are already public knowledge. For CVE ID assignment or record publication in particular, section 4.2 and section 4.5 of the CVE CNA rules establish 72 hours as the time limit in which CNAs should issue CVE IDs or publish CVE records for publicly-known vulnerabilities. However, no such guidance exists for changing a CVE record.
If the CNA doesn’t respond or you cannot reach an agreement about the content of the CVE record, the next step is to engage in the dispute process.
The CVE Program Policy and Procedure for Disputing a CVE Record provides details on how you may go about disputing a CVE record and escalating a dispute. The details of that process are beyond the scope of this post. However, if you end up disputing a CVE record, it’s good to know the CNA’s root or top-level root, since that is who reviews the dispute.
When viewing a CNA’s partner page linked from https://www.cve.org/PartnerInformation/ListofPartners, you can find the CNA’s root under the column “Top-Level Root.” For most CNAs, their root is the Top-Level Root, MITRE.