
Why maintaining data cleanliness is essential to cybersecurity

Data, in all its shapes and forms, is one of the most critical assets a business possesses. Not only does it provide organizations with critical information regarding their systems and processes, but it also fuels growth and enables better decision-making on all levels.

However, like any other piece of company equipment, data can degrade over time and become less valuable if organizations aren’t careful. What’s even more dangerous is that neglecting data hygiene can expose organizations to a number of security threats and regulatory compliance issues.

Understanding data cleanliness

Data cleanliness, also called data hygiene, is the process of ensuring all organizational data maintains accuracy and consistency regardless of where it’s stored and how it’s used. To achieve this, organizations need to ensure their data is regularly checked against six core characteristics:

  • Accuracy: Free from errors
  • Completeness: No missing values or incomplete records
  • Consistency: Maintains format across different systems and platforms
  • Validity: Follows pre-defined rules or standards
  • Uniformity: Uses correct data inputs, measurements and naming conventions across all datasets
  • Timeliness: Up-to-date and relevant

To effectively manage each of these components, organizations can use a variety of data management tools and solutions. These automated systems leverage data profiling and cleansing processes to help detect anomalies as they appear and help organizations resolve them.
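As a rough illustration of what such profiling does, the sketch below checks records against completeness and validity rules. The field names, formats and rule set are assumptions for the example, not any particular product's behavior.

```python
import re

# Hypothetical validation rules for automated data profiling; the fields
# and formats are assumptions made for this sketch.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),  # ISO 8601 dates
}

def profile(records):
    """Return simple quality metrics for a list of record dicts."""
    issues = []
    for i, rec in enumerate(records):
        for field, pattern in RULES.items():
            value = rec.get(field)
            if value is None or value == "":
                issues.append((i, field, "missing"))   # completeness check
            elif not pattern.match(value):
                issues.append((i, field, "invalid"))   # validity check
    total_checks = len(records) * len(RULES)
    clean = total_checks - len(issues)
    return {"issues": issues, "clean_ratio": clean / total_checks}

records = [
    {"email": "ana@example.com", "date": "2024-05-01"},
    {"email": "not-an-email",    "date": "2024-05-02"},
    {"email": "bo@example.com",  "date": ""},
]
report = profile(records)
# 6 checks, 2 issues found (one invalid email, one missing date)
```

A real data quality tool would add consistency and timeliness checks across systems; the point here is that each of the six characteristics can be reduced to a measurable rule.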

Why maintaining clean data is so important

Ensuring organizational data remains free from errors and can serve as a trusted source of critical business information is essential to both operational efficiency and resilience. Considering the number of digital sources most organizations rely on today, there are several ways businesses can lose sight of how their data is collected, stored and accessed.

“Organizations today are challenged with a number of issues when trying to maintain the integrity of their critical data,” says Evelyn Kim, a program director with IBM Security. “Data is growing exponentially in more formats and locations, causing organizations to lose visibility and control over their sensitive data. We see organizations grappling with shadow data (undiscovered or unknown data) that poses significant risks. Generative AI also presents new risks to data — both from a need to have enough of the right data for gen AI use and from ensuring data is not tampered with.”

Security risks associated with unclean data

While the importance of data integrity may seem limited to supporting smoother business operations, it is actually a core element of a strong cybersecurity posture. Below are some of the security risks organizations face if good data hygiene is neglected over time:

Cybersecurity threats

With the proliferation of data, data classification — especially of sensitive data — is more critical to security than ever. Understanding where sensitive data resides is a key step in monitoring data stores and databases to prevent breaches and detect cyberattacks, reducing the impact and damage across critical networks and connected systems.

The effectiveness of modern security tools and technologies also relies on accurate data. Without a reliable baseline for normal business activity, these security solutions lose their ability to identify suspicious user patterns, leading to false positives and inadequate threat detection.
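A baseline-driven detection of the kind described above can be sketched as a simple z-score test. The login counts and the three-sigma threshold below are invented for illustration; production tools use far richer behavioral models.

```python
from statistics import mean, stdev

def anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Daily login counts for one account; values are made up for this sketch.
baseline = [42, 38, 45, 40, 44, 39, 41]
print(anomalous(baseline, 43))   # within the normal range
print(anomalous(baseline, 220))  # far outside the baseline
```

If the baseline data itself is dirty (stale, incomplete, inconsistent), `mean` and `stdev` shift, and the same test starts producing exactly the false positives and missed detections the text describes.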

Compliance failures

Data cleanliness plays a crucial role in helping organizations meet various regulatory requirements. “Highly regulated industries tend to have significant data governance/security concerns. We typically see financial services, healthcare, manufacturing and utility/energy sectors leaning heavily on data security investments to assist with their compliance efforts,” states Kim.

Without accurate and complete compliance reporting data, organizations open themselves up to significant compliance violations and associated financial penalties. This can also lead to long-term legal repercussions that can damage a business’s reputation and impact customer loyalty.

Maintaining a clean data environment

Data cleansing isn’t something that organizations schedule throughout the year or complete as a one-time project. It requires an ongoing commitment and the ability to integrate data quality practices into every stage of the data lifecycle. From initial data collection and entry to storage, processing and analysis, organizations should follow numerous proactive data maintenance steps, including:

  • Establishing clear data governance policies: Businesses should establish clear roles and accountabilities in their organization when it comes to data entry, validation and updating procedures. This also includes following strict compliance guidelines on how to properly handle data at rest and in transit.

  • Investing in data quality solutions: Organizations should research and implement next-generation tools that provide automated data cleansing activities in real-time while handling deduplication and validation processes systematically. These tools help identify and address data quality issues proactively, freeing up time and resources for internal teams.

  • Adopting a security-first culture: Establishing a business culture that prioritizes data security and integrity is essential. This involves initiating training sessions for employees on the importance of following strict data management standards as well as implementing strict access controls, data encryption and monitoring solutions.
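As a minimal illustration of the deduplication such tools automate (one of the steps listed above), the sketch below merges records on a normalized email key. Matching on email alone is an assumption for the example; real deduplication engines use fuzzier, multi-field matching.

```python
# Illustrative automated deduplication; keying on a normalized email
# address is an assumption made for this sketch.
def deduplicate(records):
    """Keep the first record seen for each normalized email address."""
    seen, unique = set(), []
    for rec in records:
        key = rec["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"email": "Ana@Example.com"},
    {"email": "ana@example.com "},   # same person, different formatting
    {"email": "bo@example.com"},
]
deduped = deduplicate(records)
print(len(deduped))  # 2
```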

Keep your data healthy

Data is what keeps modern organizations running. However, if you’re not careful, the value of this asset will diminish over time and lead to a number of business consequences. By prioritizing data cleanliness, organizations can uncover the true potential of their critical data, allowing them to make better decisions while creating more resilience in their security and compliance initiatives.

The post Why maintaining data cleanliness is essential to cybersecurity appeared first on Security Intelligence.

How CTEM is providing better cybersecurity resilience for organizations

Organizations today face a constant stream of fast-moving cyber threats that challenge the effectiveness of their cybersecurity defenses. To keep pace, businesses need a proactive and adaptive approach to their security planning and execution.

Continuous threat exposure management (CTEM) is an effective way to achieve this goal. It provides organizations with a reliable framework for identifying, assessing and mitigating new cyber risks as they materialize.

The importance of developing cybersecurity resilience

Regardless of the industry, all organizations are subject to certain security risks. While various tools and solutions can help to reduce this risk, the only real way of maintaining a strong security posture is by developing a certain amount of cybersecurity resilience.

Cybersecurity resilience is the ability of a business to maintain its core operational state regardless of an attempted or even successful cyberattack. The key components of cybersecurity resilience include:

  • Proactive risk management: It’s important to be able to identify and mitigate any potential threats before they have the opportunity to exploit known vulnerabilities. This requires regular risk assessments and strict security policies.

  • Continuous monitoring and improvement: Monitoring systems and networks is critical to identifying suspicious network activity and informing the necessary stakeholders for remediation. Regularly reviewing logs and threat reports also allows organizations to improve their security efforts going forward.

  • Incident response and recovery: In the event of a successful breach, organizations must be prepared to handle all necessary protocols for threat containment while executing critical recovery efforts to minimize operational disruption.

  • Maintaining a progressive cybersecurity culture: While security tools and solutions are important, organizations looking to establish more cybersecurity resilience need to also build awareness with their employees on relevant threats and how they can help protect themselves and the business.

What is CTEM?

While establishing cybersecurity resilience on its own is important, the prevalence and severity of modern-day security threats mean organizations need to look for a more comprehensive approach to threat management.

CTEM relies on automated routines spread across an organization’s entire infrastructure, designed to identify and assess any security gaps present. Unlike traditional vulnerability assessments, which are typically scheduled at fixed points throughout the year, CTEM solutions deliver continuous, real-time threat intelligence.

When integrated across all of an organization’s IT assets, including on-premise and cloud networks, systems, applications and databases, CTEM solutions provide a much more proactive approach to strengthening an organization’s security posture.


Key components of CTEM

CTEM frameworks operate by incorporating several key components across an organization’s entire infrastructure. These components include:

Threat intelligence

Leveraging real-time threat intelligence, CTEM references an organization’s location, industry type and digital structure to benchmark against similar organizations while recognizing and prioritizing likely threats. This helps businesses place their mitigation efforts in the right places while always being one step ahead of malicious attackers.

Vulnerability management

CTEM makes use of active vulnerability scanning and assessment tools to look for common vulnerabilities and exposures (CVEs) as well as misconfigurations in systems and networks that could lead to exploitation. Using automated routines, CTEM solutions will run continuous scans for these vulnerabilities and then prioritize them based on the most critical risks.
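The prioritization step can be sketched as severity weighted by asset criticality. The tier weights and findings below are illustrative assumptions, not values from any CTEM product.

```python
# Assumed asset-criticality weights for this sketch only.
ASSET_WEIGHT = {"critical": 1.5, "standard": 1.0, "low": 0.5}

def prioritize(findings):
    """Sort vulnerability findings by weighted risk, highest first."""
    def risk(f):
        return f["cvss"] * ASSET_WEIGHT[f["asset_tier"]]
    return sorted(findings, key=risk, reverse=True)

findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset_tier": "low"},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset_tier": "critical"},
    {"cve": "CVE-2024-0003", "cvss": 6.0, "asset_tier": "standard"},
]
ranked = prioritize(findings)
# 7.5 * 1.5 = 11.25 outranks 9.8 * 0.5 = 4.9, so the finding on the
# critical asset comes first despite its lower raw CVSS score.
```

This is the essence of risk-based prioritization: the most critical risk is not necessarily the highest CVSS number, but the exposure that matters most in context.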

Security testing

Applying CTEM frameworks across an organization can often include making use of penetration testing services and establishing red teams to help simulate real-world attack scenarios. This helps organizations validate the effectiveness of their current cybersecurity solutions and helps to “stress-test” response capabilities.

Risk assessment

CTEM solutions apply various risk assessment methodologies to help evaluate the potential impact of discovered vulnerabilities. This includes considering various factors that can impact remediation efforts, including the types of assets at risk, how financially sensitive each asset is and the potential impact a successful breach could have on the long-term viability of an organization.

Breaking down the five stages of CTEM

CTEM deployment is an iterative process that involves continuous improvement and refinement. The five stages of CTEM are:

  1. Scoping: The initial stage of CTEM involves establishing certain boundaries within which the solution will operate. This requires organizations to identify the relevant systems, applications or key data the solution will actively monitor. Another element of this stage is to outline any specific goals or objectives that need to be achieved to ensure the solution is properly calibrated.

  2. Discovery: The discovery stage is when all digital assets are cataloged within the defined scope. While many assets may already be defined during initial scoping stages, the CTEM discovery process may also identify unknown assets, including SaaS solutions or other shadow IT elements that may have been missed. This stage is completed using a series of automated tools that scan and catalog new assets as they’re discovered.

  3. Prioritization: After all assets are properly cataloged, the next step is to assess and prioritize all risks associated with each of them. To achieve this, CTEM solutions will apply risk assessment protocols and active threat intelligence to determine the most critical risks.

  4. Validation: The validation stage makes sure that any identified vulnerabilities are legitimate and require an actual remediation process. This is designed to minimize or eliminate any false positives.

  5. Mobilization: The final stage of CTEM is mobilization, which is any action necessary to remediate vulnerabilities and mitigate risks. This can include coordinated efforts between security teams, IT operations and business stakeholders to ensure that vulnerabilities are addressed effectively.
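The five stages above can be sketched as one pass of an iterative cycle. Everything in this toy version — which assets count as in scope, the simulated shadow asset, the false-positive cutoff — is a placeholder assumption.

```python
def run_ctem_cycle(known_assets):
    """One illustrative pass through the five CTEM stages."""
    # 1. Scoping: restrict monitoring to assets marked in scope
    in_scope = [a for a in known_assets if a["monitored"]]
    # 2. Discovery: automated scans may surface shadow IT (simulated here)
    discovered = in_scope + [{"name": "shadow-saas", "monitored": True, "cvss": 8.1}]
    # 3. Prioritization: rank exposures by severity score
    ranked = sorted(discovered, key=lambda a: a["cvss"], reverse=True)
    # 4. Validation: drop findings below an assumed false-positive cutoff
    confirmed = [a for a in ranked if a["cvss"] >= 4.0]
    # 5. Mobilization: hand the confirmed exposures to remediation
    return [a["name"] for a in confirmed]

known_assets = [
    {"name": "web-app",   "monitored": True,  "cvss": 9.1},
    {"name": "legacy-db", "monitored": False, "cvss": 7.0},  # out of scope
    {"name": "intranet",  "monitored": True,  "cvss": 2.0},
]
confirmed = run_ctem_cycle(known_assets)
```

In practice this cycle repeats continuously rather than running once, which is what distinguishes CTEM from a point-in-time assessment.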

Start implementing CTEM in your organization

Implementing CTEM is a crucial step towards improving an organization’s cybersecurity resilience. Here are some steps your organization can follow to start benefiting from CTEM integrations:

  1. Begin with a cybersecurity risk assessment: Take the time to conduct a comprehensive cybersecurity risk assessment with the help of a security services partner to identify any potential vulnerabilities in your organization.

  2. Embrace automation: Leveraging automation tools to streamline various aspects of your CTEM program is critical to enable real-time threat mitigation. This can help to reduce manual security efforts, improve the accuracy of risk remediation efforts and accelerate incident response times.

  3. Prioritize and validate: Prioritize any discovered vulnerabilities based on their potential impact on your organization and validate any potential attack vectors using techniques like penetration testing and red team simulations.

  4. Establish clear communication channels: It’s important to ensure that security information is shared effectively between different teams and stakeholders. Regardless of the type of CTEM solution your organization chooses to implement, establishing clear communication channels and protocols is essential to ensure that security information is disseminated effectively and acted on in a timely manner.

Keep your business ready

Implementing a CTEM program is a critical step for organizations given today’s increasing cyber threats. By taking a proactive and continuous approach to your risk management strategy, you can significantly minimize your digital attack surface while achieving a more resilient cybersecurity posture.


Cloud threat report: Why have SaaS platforms on dark web marketplaces decreased?

IBM’s X-Force team recently released the latest edition of the Cloud Threat Landscape Report for 2024, providing a comprehensive outlook on the rise of cloud infrastructure adoption and its associated risks.

One of the key takeaways of this year’s report was focused on the gradual decrease in Software-as-a-Service (SaaS) platforms being mentioned across dark web marketplaces. While this trend potentially points to more cloud platforms increasing their defensive posture and limiting the number of exploits or compromised credentials that are surfacing, there are a few other factors to consider.

Sudden decrease in SaaS mentions across the dark web

In a recent collaboration with Cybersixgill, a leading dark web intelligence firm, IBM’s X-Force provided updated statistics in the Cloud Threat Landscape Report on the number of SaaS solutions mentioned across the dark web.

Surprisingly, even though compromised cloud solutions remain highly relevant and valuable commodities on dark web marketplaces, the number of SaaS platforms being mentioned dropped by an average of 20.4% year-over-year.

Among some of the highest reductions was WordPress-Admin, declining nearly 98% between 2023 and 2024, followed by Microsoft Active Directory and ServiceNow, which saw a 44% and 38% decline, respectively.

While the majority of SaaS platforms saw fewer mentions year-over-year, TeamViewer was an outlier. Even though the platform represented only 1.8% of all mentioned SaaS solutions, it still saw a 9% increase between 2023 and 2024.
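The year-over-year figures above come down to a simple percentage-change calculation. The raw mention counts below are invented purely to demonstrate the math; they are not report data.

```python
# Percentage change between two yearly counts; inputs are hypothetical.
def yoy_change(prev, curr):
    """Percentage change from prev to curr."""
    return (curr - prev) / prev * 100

print(round(yoy_change(1000, 20), 1))   # a ~98% drop, WordPress-Admin-style
print(round(yoy_change(500, 545), 1))   # a 9% rise, TeamViewer-style
```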


What are the potential contributors to fewer SaaS mentions?

The decreased activity in SaaS mentions initially points to a potentially emerging trend in the sophistication of modern-day cybersecurity solutions. However, as with any single-year statistical shift, it’s important to consider all calculation variables and contributing factors.

To help shed some more light on these figures, Colin Connor, a member of IBM’s X-Force team, was interviewed to provide additional perspective. When asked to comment on the potential driver of this dark web trend shift, Connor states, “These statistics appear to be an overall trend that was also referenced in the decrease in total compromised credentials sold during the same reporting period. This also coincides with the takedown of Raccoon Stealer, which caused a prolonged decrease in credential sales from July 2023 onward.”

Raccoon Stealer was one of the most widely used infostealer malware families, dominating the majority of the dark web market share for credential stealers from 2022 until it was taken down by the FBI in August 2023.

Commenting on the overall impact Raccoon Stealer had on the report’s year-over-year statistics, Connor says, “During its peak in March 2023, [Raccoon] was nearly 87% of the source of stolen logs and accounted for almost 50% of the stolen credentials in our 2023 collection. It’s also important to remember that the majority of dark web credentials sold are stolen from infostealer malware. So, this takedown of Raccoon had a dramatic effect. The marketplace continues to recover — from 192,000 credential sets overall for sale in July 2023 to 721,000 in July 2024. It also has yet to recover from the peak in March 2023 — which equated to 1.2 million credential sets for sale.”
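As a quick check on the marketplace figures Connor cites:

```python
# Marketplace figures quoted above: July 2023 trough, July 2024 level
# and the March 2023 peak, in credential sets for sale.
trough, recent, peak = 192_000, 721_000, 1_200_000

growth = recent / trough        # roughly 3.8x growth since the trough
share_of_peak = recent / peak   # still only about 60% of the peak
```

So the market has more than tripled since the takedown yet remains well below its pre-takedown high, consistent with Connor's "recovering but not recovered" framing.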

Will there be a resurgence of compromised SaaS platforms in the near future?

According to IBM’s X-Force team, while the year-over-year decline of SaaS mentions on the dark web is positive — pointing to increased law enforcement actions against major dark web marketplaces and enhanced security measures being taken by large enterprises — it’s critical that organizations don’t let their guard down.

When asked what the recent Raccoon Stealer takedown means for shifting dark web market dynamics, Connor states, “[Raccoon’s] ability to recover in 2024 was limited, but what we’re seeing is that the relatively smaller players are starting to grow… We saw that Luma, RisePro and Stealc have now become major players… Luma especially took a huge step up, showing a 241% [increase] in popularity in Q3.”

It’s still too early to know whether these previously smaller players will have the stamina to create disruption on the scale of Raccoon Stealer across the dark web in the next couple of years. There is also the possibility that Raccoon Stealer itself will see some form of recovery in the future.

The important thing is that organizations don’t become complacent in their proactive security planning. IBM’s X-Force team recommends that all organizations continue to conduct comprehensive security testing across their on-premise and cloud infrastructure while regularly strengthening their incident response capabilities. This helps to ensure that even when trends begin to shift, organizations can mitigate their risks of having systems or networks compromised.


How to craft a comprehensive data cleanliness policy

Practicing good data hygiene is critical for today’s businesses. With everything from operational efficiency to cybersecurity readiness relying on the integrity of stored data, having confidence in your organization’s data cleanliness policy is essential.

But what does this involve, and how can you ensure your data cleanliness policy checks the right boxes? Luckily, there are practical steps you can follow to ensure data accuracy while mitigating the security and compliance risks that come with poor data hygiene.

Understanding the 6 dimensions of data cleanliness

It doesn’t matter where your company data is sourced — without addressing its quality and accuracy, you won’t be able to rely on it. To create the right data cleanliness policy, you’ll need to understand its different dimensions. These include:

  • Accuracy: Identifies to what extent data can be trusted and is free from errors. This requires specific validation protocols and compliance with data collection standards.
  • Completeness: Signifies whether or not collected data provides clear answers to certain questions. It involves evaluating any missing data attributes and recognizing any apparent gaps.
  • Consistency: Checks that data is properly mirrored when stored in multiple databases, typically measured as a percentage of matched values.
  • Validity: Refers to data adherence against predefined rules or formats. It helps eliminate the violation of logical constraints or data type restrictions.
  • Uniqueness: Makes sure all data sets reference the same units of measure and supported formats, removing the possibility of information overlapping or being duplicated across data sets.
  • Timeliness: Represents the degree to which data remains up-to-date. This ensures data is accessible when it’s required so it can be used properly.

Once you have a grasp on these six core elements, you’re ready to move forward with crafting your data cleanliness policy.


Step 1: Define policy scope and objectives

The first step to take when creating a data cleanliness policy is to define all appropriate business objectives. Any specific data sets or systems and the intended use of the information within them should be clearly outlined.

This step also involves considering often-overlooked data, including unused software logs, outdated emails and former customer records. If this information is forgotten and left in unsecured locations, it can lead to security issues down the road.

Step 2: Classify data assets

With your policy scope defined, you’ll need to take inventory of all relevant data sources. Data assets can include various databases spread across multi-cloud environments, locally stored spreadsheets or any other areas where data is stored.

Classifying all data assets also helps prevent forgotten data from accumulating and becoming a high-value target for cyber criminals. During this process, you’ll also want to categorize data based on its relative sensitivity or regulatory requirements. This will make it easier to implement the right access controls and data retention policies.

Step 3: Establish data quality standards

The data quality standards you develop for your policy should be measurable and easy to understand. To achieve this, you’ll need to lay out specific criteria for each data type, including the acceptable formats data should be in and any validation rules you have in place.

With your metrics in place, you’ll be able to monitor performance against them over time. Regulatory requirements often stipulate that data must meet certain accuracy and completeness benchmarks, and having trackable metrics in place provides the transparency needed to show these regulations are continuously being met.
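A measurable standard of this kind can be expressed as per-field rules with benchmark pass rates. The fields, formats and thresholds below are example assumptions, not requirements from any specific regulation.

```python
import re

# Hypothetical per-field standards: a validation rule plus the pass-rate
# benchmark each field must meet.
STANDARDS = {
    "email": {"rule": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$").match,
              "min_pass_rate": 0.95},
    "zip":   {"rule": re.compile(r"^\d{5}$").match,
              "min_pass_rate": 0.90},
}

def audit(records):
    """Return the pass rate per field and whether it meets its benchmark."""
    report = {}
    for field, spec in STANDARDS.items():
        values = [r.get(field, "") for r in records]
        passed = sum(1 for v in values if v and spec["rule"](v))
        rate = passed / len(values)
        report[field] = {"rate": rate, "meets": rate >= spec["min_pass_rate"]}
    return report

records = [
    {"email": "a@b.co", "zip": "12345"},
    {"email": "bad",    "zip": "99999"},
]
report = audit(records)
# email pass rate 0.5 misses its 0.95 benchmark; zip meets its 0.90
```

Running such an audit on a schedule is one way to turn the policy's standards into the trackable metrics described above.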

Step 4: Assign roles and responsibilities

Establishing clear accountabilities is essential when managing organizational data. Your data cleanliness policy should define the various roles in your organization, including specifying who can access data and what levels of permission they have.

Controlling the number of individuals who can access, modify or delete data is one of the most important elements of ensuring data integrity over the long term. It helps you mitigate the danger of insider threats and establish clear lines of accountability if and when anomalies are found in data sets.

It is also common to establish a data governance team that helps implement and enforce policy initiatives. These teams can reduce the likelihood of data inconsistency and help support the data security protocols in place.
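The role definitions such a policy describes can be sketched as a simple permission lookup. The role names and permission sets below are illustrative assumptions.

```python
# Hypothetical role-to-permission mapping for a data cleanliness policy.
PERMISSIONS = {
    "viewer":       {"read"},
    "data_steward": {"read", "modify"},
    "governance":   {"read", "modify", "delete"},
}

def can(role, action):
    """Check whether a role is permitted to perform an action on data."""
    return action in PERMISSIONS.get(role, set())

print(can("data_steward", "modify"))  # permitted
print(can("viewer", "delete"))        # denied
```

Keeping the destructive `delete` permission confined to a small governance role is what creates the clear lines of accountability the policy calls for.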

Step 5: Implement data cleansing procedures

In the event that data issues are discovered, your policy should also cover necessary data correction procedures. This can include standardization, normalization or deduplication of data stored across systems.

Another supporting element of this process is having clear data retention and disposal policies in place. This reinforces best practices in data lifecycle management. It also minimizes the digital attack surface, making it less likely that sensitive information is left in a vulnerable storage state, and helps to limit damages in the event of a successful cyberattack.
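A minimal standardization pass might look like the following. The target conventions (lowercase emails, digits-only phone numbers) are example choices, not universal standards.

```python
import re

# Standardization/normalization sketch; the chosen target formats are
# assumptions made for this example.
def standardize(record):
    """Return a copy of the record with fields in canonical form."""
    out = dict(record)
    out["email"] = out["email"].strip().lower()
    out["phone"] = re.sub(r"\D", "", out["phone"])  # keep digits only
    return out

raw = {"email": "  Ana@Example.COM ", "phone": "(555) 010-4477"}
clean = standardize(raw)
# -> {'email': 'ana@example.com', 'phone': '5550104477'}
```

Normalizing records into one canonical form like this is also what makes downstream deduplication reliable, since differently formatted duplicates collapse to the same value.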

Maintain healthier organizational data

Being able to rely on the accuracy and consistency of your company data is critical. Not only does data integrity play an important factor in improving the value of your technology investments, but it also helps to strengthen your cybersecurity posture.

By following the steps above, you’ll be able to draft a data cleanliness policy that allows you to maintain healthier organizational data while extracting its full value.


2024 roundup: Top data breach stories and industry trends

With 2025 on the horizon, it’s important to reflect on the developments and various setbacks that happened in cybersecurity this past year. While there have been many improvements in security technologies and growing awareness of emerging cybersecurity threats, 2024 was also a hard reminder that the ongoing fight against cyber criminals is far from over.

We’ve summarized this past year’s top five data breach stories and industry trends, with key takeaways from each that organizations should note going into the following year.

Billions of US citizens have private data exposed

On April 8, 2024, one of the largest personal data breaches in history came to light, with nearly 3 billion records on US citizens leaked on the dark web. Even more shocking was that all of this information came from a single source — National Public Data, a background check and fraud prevention service located in Coral Springs, Florida.

The stolen information contained names, Social Security numbers, home addresses and known relatives, and was listed for sale on the dark web for $3.5 million. Many victims were still unaware of the breach several months later, and class action lawsuits have been filed in a dozen U.S. states. National Public Data has since filed for bankruptcy.

Third-party breaches impact top 48 energy companies

A SecurityScorecard report revealed this year that 90% of the world’s top energy companies experienced data breaches stemming from third-party compromises. Many of these attacks were a direct result of increased reliance on cloud services and third-party integrations to manage networked systems.

Of the 264 individual breaches linked to third-party compromises, the MOVEit vulnerability was confirmed as one of the major contributors. With critical infrastructure organizations playing a significant role in the health and well-being of citizens, these types of breaches continue to threaten public safety. The energy sector as a whole has since begun implementing stricter vendor assessments, continuous system and threat monitoring solutions and more secure data transfer protocols.


Financial firms face the highest data breach costs since the pandemic

According to the IBM Cost of a Data Breach 2024 report, the financial sector has seen a surge in data breach costs since the pandemic, reaching an average of $6.08 million per incident. While various attack types account for this increase, IT failures and simple human error account for a significant portion of the problem.

While certain improvements have been made in threat detection and containment timelines, many financial firms still face an uphill battle. Larger-scale financial service breaches are now estimated to reach hundreds of millions of dollars in damages, leading many organizations to invest more in comprehensive identity and access management (IAM) solutions, AI-powered security solutions and dedicated incident response teams.

Average data breach cost increases 10% year-over-year

The global average cost of a data breach jumped 10% year-over-year between 2023 and 2024, reaching an alarming $4.88 million. This average is driven by a number of factors, including lost business revenue, recovery costs and regulatory fines.

Complicating this trend, 40% of recorded breaches now involve data spread across multiple environments, including public clouds and on-premises systems. These larger digital footprints average over $5 million in recovery costs, with an average containment timeline of 283 days. Encouragingly, organizations that leverage AI-driven security workflows report a significantly lower average of $2.2 million per breach, pointing to the promise of next-generation security measures.
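Reproducing the report's year-over-year arithmetic: a 10% jump to a $4.88 million average implies a prior-year average of roughly $4.44 million.

```python
# Back out the prior-year average from the current figure and growth rate.
current, growth = 4.88, 0.10       # $M average, 10% year-over-year increase

previous = current / (1 + growth)  # about 4.44 ($M)
increase = current - previous      # about 0.44 ($M) added in one year
```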

50% of data breaches tied to security staffing shortages

The cybersecurity skills gap widened over the last few years, with 50% of breached organizations reporting that their incidents stemmed from staffing shortages. These shortages span a wide range of critical areas, including cloud security, incident response, data analysis and compliance expertise. Impacted organizations also increasingly need proficiency in security information and event management (SIEM) tools and active threat hunting.

To fill key personnel gaps, it’s now recommended that organizations put a stronger focus on upskilling their existing workforce. Modern businesses can also leverage professional soft skills such as communication and adaptability to supplement and strengthen their security teams.

Moving into 2025

The past year has shown that while modern cybersecurity tools and solutions provide protection against a broader range of threats, very few industries and organizations are immune to cyber crime’s evolving nature.

As we move into 2025, enterprises should prioritize a proactive approach to cybersecurity planning. This includes optimizing their access restriction policies when operating with both in-house and remote teams, working to address any critical staffing shortages, and creating a stronger culture of security awareness within their organization.


Cloud Threat Landscape Report: AI-generated attacks low for the cloud

For the last couple of years, a lot of attention has been placed on the evolutionary state of artificial intelligence (AI) technology and its impact on cybersecurity. In many industries, the risks associated with AI-generated attacks are still present and concerning, especially with the global average of data breach costs increasing by 10% from last year.

However, according to the most recent Cloud Threat Landscape Report released by IBM’s X-Force team, the near-term threat of an AI-generated attack targeting cloud computing environments is actually moderately low. Still, projections from X-Force reveal that an increase in these sophisticated attack methods could be on the horizon.

Current status of the cloud computing market

The cloud computing market continues to grow exponentially, with experts expecting its value to reach more than $675 billion by the end of 2024. As more organizations expand their operational capabilities beyond on-premise restrictions and leverage public and private cloud infrastructure and services, adoption of AI technology is steadily increasing across multiple industry sectors.

Generative AI’s rapid integration into cloud computing platforms has created many opportunities for businesses, especially when enabling better automation and efficiency in the deployment, provisioning and scalability of IT services and SaaS applications.

However, as more businesses rely on new disruptive technologies to help them maximize the value of their cloud investments, the potential security danger that generative AI poses is something closely monitored by various cybersecurity organizations.


Why are AI-generated attacks in the cloud currently considered lower risk?

Although a recent Gartner report lists AI-generated attacks among the top emerging risks for senior risk and assurance executives, X-Force’s research finds that the current threat of AI technologies being exploited and leveraged in cloud infrastructure attacks is still moderately low.

This isn’t to say that AI technology isn’t regularly used in the development and distribution of highly sophisticated phishing schemes at scale. This behavior has already been observed with active malware distributors like Hive0137, a group that makes use of large language models (LLMs) when scripting new dark web tools. Rather, the lower risk projections relate to the likelihood of AI platforms themselves being directly targeted in both cloud and on-premise environments.

One of the primary reasons for this lower risk is the complexity of successfully breaching and manipulating the underlying infrastructure of AI deployments. Even if attackers put considerable resources into the effort, the still relatively low market saturation of cloud-based AI tools and solutions would likely yield a poor return on the time, resources and risk involved in carrying out these attacks.

Preparing for an inevitable increase in AI-driven cloud threats

While the immediate risks of AI-driven cloud threats may be lower today, this isn’t to say that organizations shouldn’t prepare for this to change in the near future.

IBM’s X-Force team has recognized correlations between the percentage of market share new technologies have across various markets and the trigger points related to their associated cybersecurity risks. According to the recent X-Force analysis, once generative AI matures and approaches 50% market saturation, it’s likely that its attack surface will become a larger target for cyber criminals.

For organizations currently utilizing AI technologies and proceeding with cloud adoption, designing more secure AI strategies is essential. This includes developing stronger identity security postures, integrating security throughout their cloud development processes and safeguarding the integrity of their data and AI models.

