Visualização normal

Antes de ontemStream principal
  • ✇Security Intelligence
  • How to calculate your AI-powered cybersecurity’s ROI Mike Elgan
    Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company’s internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes. The organization’s AI-powered cybersecurity solution, which continuously monitors network traffic
     

How to calculate your AI-powered cybersecurity’s ROI

17 de Janeiro de 2025, 11:00

Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company’s internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes.

The organization’s AI-powered cybersecurity solution, which continuously monitors network traffic and user behavior, detects several anomalies associated with the attack, blocks access to the suspicious domains across the network, quarantines the phishing emails, resets passwords for all potentially compromised accounts and sends real-time alerts to the security operations center, providing detailed information about the attack vector and affected systems.

Using predictive analytics, the AI suggests potential next steps the attackers might take, allowing the security team to strengthen defenses in those areas proactively.

The good guys won. But was the AI solution worth the price? What’s the value in dollars of that victory? It’s easy to measure the investment in AI. But how do you measure the return on that investment? Specifically, how do you measure the value of data never stolen, unknown reputational damage that never happened, customer trust never lost or reduced operational risks never incurred?

The rise of AI cybersecurity

To be sure, cybersecurity AI spending is set to increase dramatically. Organizations spent $24 billion in 2023, with an expected rise to $133 billion by 2030. Cybersecurity professionals and the companies they work for will increasingly rely on advanced AI solutions as threats grow and the cost of data breaches also rises.

The challenging nature of cybersecurity ROI is compounded by many other factors — dozens, hundreds or thousands of attempted cyberattacks per year per organization; the lack of universally accepted metrics or calculations for cybersecurity ROI; the long payback period for investments in cybersecurity AI; the fast-changing nature of the threat landscape; the fact that cybersecurity investments also touch areas like operational efficiency, regulatory compliance and others.

Historically, organizations calculated ROI in cybersecurity investments by estimating money saved in the absence of security incidents. But that fails to account for proactive security measures, efficiency gains in operations and the overall security posture. With the integration of AI, cybersecurity has fundamentally changed, offering enhanced threat detection and prevention capabilities beyond simply measuring the absence of incidents.

A proactive approach and improved operational efficiency through task automation provide tangible benefits not captured in traditional ROI calculations.

Explore AI cybersecurity solutions

New metrics for ROI calculation

The use of AI tools has transformed the typical cybersecurity ROI calculation, introducing several quantifiable metrics:

These metrics offer a more comprehensive view of the value derived from AI-powered cybersecurity investments, enabling organizations to make more informed decisions about resource allocation and strategic planning.

Cost savings can also be measured in the aggregate. According to the IBM 2024 Cost of a Data Breach report, organizations extensively using security AI and automation in prevention workflows saved an average of $2.2 million in breach costs compared to those without such technologies.

Still, measuring AI cybersecurity ROI comes with challenges, including difficulty attributing prevented incidents directly to AI, the constantly evolving threat landscape and balancing initial investment costs with long-term benefits.

Taking a holistic approach to cybersecurity AI ROI

Organizations can leverage established frameworks, such as the NIST Cybersecurity Framework, to effectively measure and communicate AI’s ROI in cybersecurity. By aligning AI initiatives with these functions, organizations can more accurately measure their impact on overall cybersecurity performance.

To effectively measure the impact of AI on cybersecurity ROI, organizations should focus on specific Key Performance Indicators (KPIs):

  • Mean time to detect
  • Mean time to respond
  • Security operational efficiency
  • Threat intelligence accuracy
  • Compliance adherence rate

The best approach is to adopt a more comprehensive approach that uses risk assessment frameworks, measures risk reduction, considers and estimates intangible benefits and regularly reviews and updates calculations.

Organizations must adopt a holistic approach that considers the proactive capabilities, efficiency gains and quantifiable metrics provided by AI-powered solutions. This comprehensive evaluation allows a more accurate assessment of cybersecurity investments’ true value and impact in today’s complex threat landscape.

Of course, cyberattacks don’t happen randomly or in a vacuum. Take the follow-on consequences of the ongoing cybersecurity skills gap, which can be self-enlarging, according to Sam Hector, senior strategy leader of IBM Security.

“When you don’t have enough skilled experts in monitoring and defending your infrastructure, a few things happen,” Hector said. “The time to triage alerts grows as the queue of incidents to review becomes longer, meaning you’re more likely to be breached, and attackers dwell times increase (when they are in your environment undetected) as you’re less likely to find the needle in the haystack. The time to detect increasing directly leads to higher breach costs on average.”

And the problem keeps growing: “Teams that are stretched too thin don’t have the time to devote to improving cybersecurity processes, integration and efficiency,” Hector said. “They’re unable to drill exercises and embark on further training as they’re too focused on keeping the lights on. This means over time, they’re less effective comparable to the threat landscape, and misconfigurations and gaps develop that attackers can exploit.”

Hector said persistent attackers are unlikely to go unnoticed by these weakening defenses: “If there’s a specific industry, region or even organization that is known to be struggling to acquire cybersecurity skills, this puts them at increased risk of being targeted by attackers who will be anticipating weaker defenses.”

An ongoing shift in cybersecurity investment

The integration of AI in cybersecurity has fundamentally changed how organizations approach and measure their security investments. By providing more tangible and comprehensive ROI metrics, AI enables organizations to make data-driven decisions about their cybersecurity strategies. As cyber threats continue to evolve, the role of AI in cybersecurity will only grow more critical, making it essential for organizations to invest in — and effectively measure — the impact of these technologies.

The post How to calculate your AI-powered cybersecurity’s ROI appeared first on Security Intelligence.

  • ✇Security Intelligence
  • ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers Sue Poremba
    AI has made an impact everywhere else across the tech world, so it should surprise no one that the 2024 ISC2 Cybersecurity Workforce Study saw artificial intelligence (AI) jump into the top five list of security skills. It’s not just the need for workers with security-related AI skills. The Workforce Study also takes a deep dive into how the 16,000 respondents think AI will impact cybersecurity and job roles overall, from changing skills approaches to creating generative AI (gen AI) strategies.
     

ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers

15 de Janeiro de 2025, 11:00

AI has made an impact everywhere else across the tech world, so it should surprise no one that the 2024 ISC2 Cybersecurity Workforce Study saw artificial intelligence (AI) jump into the top five list of security skills.

It’s not just the need for workers with security-related AI skills. The Workforce Study also takes a deep dive into how the 16,000 respondents think AI will impact cybersecurity and job roles overall, from changing skills approaches to creating generative AI (gen AI) strategies.

Budgets and the skills gap

According to the study, two-thirds of respondents think that their expertise in cybersecurity will augment AI technology; on the flip side, a third are concerned their jobs could be eliminated in an AI-focused world.

That, of course, is not going to happen immediately. Not even half the respondents have implemented gen AI into their tools. The more immediate concern for cybersecurity professionals is budgets.

“In 2024, 25% of respondents reported layoffs in their cybersecurity departments, a 3% rise from 2023, while 37% faced budget cuts, a 7% rise from 2023,” the report stated.

These budget cuts have impacted the skills gap, as two-thirds of the respondents said not only have the budget cuts led to current staffing shortages but they are expected to make closing the skills gap even more difficult in the next few years.

Many of the respondents pointed out that the skills gap has had a more negative effect on organizational security than the decrease in on-site staff. In part because the funding isn’t available for training and because those with skills in high demand are moving on to better-paying positions, many security teams struggle to address the threats and risks in today’s cybersecurity landscape.

Explore IBM SkillsBuild

The role of AI in the skills gap

Two years ago, AI wasn’t even considered a required skill set for cybersecurity jobs, but now it is a top five skill, said Jon France, CISO with ISC2.

“And we suspect that probably next year, it will be the number one in-demand skill set around security,” France said in a conversation at ISC2’s Security Congress in Las Vegas.

(If you’re wondering, the other skills in the top five are cloud, zero trust architecture, forensics, incident response and application security — all areas that have been at the top of the skills need list for a long time.)

AI’s role in cybersecurity is changing because of the exponential increase in data and the need to gather good intelligence on the data being generated.

“AI is one of the tools that can obviously consider large data sets very quickly,” said France. Still, human eyes are necessary to validate the results generated from AI models. This is where AI security skills will be most needed to advance the changes in how analysts and incident responders analyze data.

France also believes that AI will change the scope of entry-level security positions. “I think if you’re coming into the profession, and if you’ve got to pick up one thing to learn, you’ll get the most favorable opportunities if you have experience of using generative AI coding.”

Right now, however, there is a bit of a disconnect between the technical skills that hiring managers think are needed and what non-hiring managers want. Both types of managers list cloud computing security skills at the top of the list, but when asked about AI/ML skills, only 24% of hiring managers said it was a skill they want right now, ranking last on the skills-need list. When non-hiring managers are asked about the skills most in demand to advance careers, 37% said AI/ML, higher than every other listed skill but cloud security.

AI is reinventing cybersecurity skills

In its study AI in Cyber 2024, ISC2 found that 82% of respondents are optimistic that AI will improve work efficiency, and 88% thought it would impact their job role in some way. Relying more on AI in the cyber world has a lot of positive points, but there are also issues around the technology causing stress. Four in ten respondents said they aren’t prepared for the explosion of AI, according to the AI study, and 65% said their organization needs more regulations around the safe use of gen AI, according to the Workforce study.

But there are also a lot of question marks surrounding what skills will be needed. “While study participants speculated on what skills may be automated or streamlined, they cannot yet predict what activities, if any, AI will replace,” the study reported. Perhaps this is why hiring managers are showing some reluctance to hire cybersecurity professionals who have AI technical expertise.

With AI, many anticipate an uptick in the need for non-technical skills. Cybersecurity has been more open to finding potential professionals outside of the traditional technical areas and training them for their new roles, so it isn’t too surprising that, because hiring managers aren’t certain of the type of skills that will be required for using gen AI as a security tool (or for securing gen AI, for that matter), there is a greater willingness to default to non-tech skills that are seen as more transferable as the technology evolves. Overall, strong communication skills were listed as the most in-demand skill set across all of cybersecurity, followed closely by strong problem-solving skills and teamwork/collaboration skills.

The cyber workforce in the world of AI

Looking at the overall picture of how AI skills will fit into the cybersecurity workforce going forward, it is likely that the issues that hamper hiring today will have a similar impact on AI expertise. Budget cuts will decrease the workforce, as already mentioned. France pointed to the human resources gap as well, where entry-level positions are posted with requirements such as certifications that require five years of work experience.

“We also need to blow this myth: New entrance into the cybersecurity workforce doesn’t mean young. It can be a career change. In fact, career changes bring a lot of different viewpoints and experiences,” said France.

Hire for the skills the employee is bringing to the table, even if they aren’t what you need right now. “The rest,” said France, “can be taught.”

The post ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Preparing for the future of data privacy Jennifer Gregory
    The focus on data privacy started to quickly shift beyond compliance in recent years and is expected to move even faster in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. However, the majority of organizations noticed a recent shift: that their organization has been moving from compliance as a “check the box” task to a strategic function. With th
     

Preparing for the future of data privacy

2 de Janeiro de 2025, 11:00

The focus on data privacy started to quickly shift beyond compliance in recent years and is expected to move even faster in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. However, the majority of organizations noticed a recent shift: that their organization has been moving from compliance as a “check the box” task to a strategic function.

With this evolution in data privacy, many organizations find that they need to proactively make changes to their approach to set themselves up for the future. Here are five key considerations to get ready for the future of data privacy.

1. Create a process for staying up to date on new and evolving regulations

While data privacy is more than simply compliance, your organization must comply with all regulations first and foremost — or else risk fines and reputational damage. However, regulations are constantly being passed and changed, making it exceptionally challenging to stay up to date. As of September 2024, 20 states had consumer data privacy laws, with legislation pending in numerous other states. While the U.S. does not currently have a federal data privacy law, the American Privacy Rights Act is in the first stage of legislation.

As the data privacy regulation landscape continues to change, organizations must create a process to manage all pertinent regulations, which can be challenging for global companies. Because organizations must comply with the regulations of their customer locations, not the company’s locations, global businesses often find themselves bound by many different regulations. Organizations are increasingly turning to artificial intelligence (AI) with tools that monitor all relevant regulations and ensure compliance, which saves time and reduces fines.

2. Focus on balancing data privacy with analytics and AI goals

AI at the University of Pennsylvania’s Wharton School found that the percentage of employees who used AI weekly increased from 37% in 2023 to 73% in 2024. However, this significant and rapid increase in AI adoption has created significant data privacy issues. Top concerns include a lack of data transparency, new endpoints for vulnerabilities, third-party vendors and potential regulatory gaps. At the same time, businesses not using AI will likely quickly fall behind competitors in productivity and personalization.

Because not using AI is rarely the right business decision, organizations must take a strategic approach to creating a balance between business value and data security. While technology is part of the solution, platforms and systems cannot solve the challenges without a balanced approach. By creating processes and a framework that helps organizations evaluate risks and benefits, businesses can make smart business decisions with regard to data privacy. For example, a company may adopt automation throughout their organization using AI except in use cases that involve sensitive customer and employee data.

Explore data privacy solutions

3. Consider privacy-preserving machine learning (PPML)

By using specific techniques in AI and analytics, organizations can reduce data privacy risks. Many organizations are turning to PPML, which is an initiative started by Microsoft to protect data privacy when training large-capacity language models. Here are the three components of PPML defined by Microsoft:

  1. Understand: Organizations should conduct threat modeling and attack research while also identifying properties and guarantees. Additionally, leaders need to understand regulatory requirements.
  2. Measure: To determine the current status of data privacy, leaders should capture vulnerabilities quantitatively. Next, teams should develop and apply frameworks to monitor risks and mitigation success.
  3. Mitigate: After gaining a full picture of data privacy, teams must develop and apply techniques to reduce privacy risks. Lastly, leaders must meet all legal and compliance regulations.

4. Focus on data minimization

In the past, many businesses defaulted to keeping all — or at least most of — their data for a lengthy period of time. However, all data stored and saved must follow compliance regulations, causing many organizations to use a strategy referred to as data minimization.

Deloitte defines data minimization as taking steps to determine what information is needed, how it’s protected and used and how long to keep it. By taking this measured approach and determining which data to keep, organizations can reduce costs, make it easier to find the right data and improve compliance. Additionally, it’s easier and takes fewer resources to secure a smaller volume of data.

5. Create a culture of data privacy

Just like cybersecurity, data privacy is not simply the job of specific employees. Instead, organizations need to instill the mindset that every employee is responsible for data privacy. Creating a data privacy culture doesn’t happen overnight or with a single meeting. Instead, leaders must work to instill the values and focus over time. The first step is for leaders to become champions, express the shift in responsibility and “walk the walk” in terms of data privacy.

Because data privacy depends on team members following the processes and requirements specified, organizations must not simply dictate the rules but instead must explain the importance of data privacy. When employees understand the risks of not following the processes as well as the consequences to the organization and its consumers, they are more likely to comply.

Additionally, leaders should measure compliance with the processes to determine the current state and then the goal. By then offering incentives, organizations can help encourage compliance as well as stress its overall importance.

Start crafting your data privacy approach now

As your team focuses on planning for 2025 and beyond, now is the time to pause to make sure that your approach and goals align with where the industry is moving. Organizations that understand where data privacy is likely headed and take the steps needed to align their goals with the future of data privacy can be better prepared to more effectively gain business value from their data while still ensuring compliance.

The post Preparing for the future of data privacy appeared first on Security Intelligence.

  • ✇News – Security Intelligence
  • Apple Intelligence raises stakes in privacy and security Jonathan Reed
    Apple’s latest innovation, Apple Intelligence, is redefining what’s possible in consumer technology. Integrated into iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1, this milestone puts advanced artificial intelligence (AI) tools directly in the hands of millions. Beyond being a breakthrough for personal convenience, it represents an enormous economic opportunity. But the bold step into accessible AI comes with critical questions about security, privacy and the risks of real-time decision-making i
     

Apple Intelligence raises stakes in privacy and security

26 de Dezembro de 2024, 11:00

Apple’s latest innovation, Apple Intelligence, is redefining what’s possible in consumer technology. Integrated into iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1, this milestone puts advanced artificial intelligence (AI) tools directly in the hands of millions. Beyond being a breakthrough for personal convenience, it represents an enormous economic opportunity. But the bold step into accessible AI comes with critical questions about security, privacy and the risks of real-time decision-making in users’ most private digital spaces.

AI in every pocket

Having sophisticated AI at your fingertips isn’t just a leap in personal technology; it’s a seismic shift in how industries will evolve. By enabling real-time decision-making, mobile artificial intelligence can streamline everything from personalized notifications to productivity tools, making AI a ubiquitous companion in daily life. But what happens when AI that draws from “personal context” is compromised? Could this create a bonanza of social engineering and malicious exploits?

The risks of real-time AI processing

Apple Intelligence thrives on real-time personalization — analyzing user interactions to refine notifications, messaging and decision-making. While this enhances the user experience, it’s a double-edged sword. If attackers compromise these systems, the AI’s ability to customize notifications or prioritize messages could become a weapon. Malicious actors could manipulate AI to inject fraudulent messages or notifications, potentially duping users into disclosing sensitive information.

These risks aren’t hypothetical. For example, security researchers have exposed how hidden data in images can deceive AI into taking unintended actions — a stark reminder of how intelligent systems remain susceptible to creative exploitation.

In the new, real-time AI age, AI cybersecurity must address several risks, such as:

  1. Privacy concerns: Continuous data collection and analysis can lead to unauthorized access or misuse of personal information. For instance, AI-powered virtual assistants that capture frequent screenshots to personalize user experiences have raised significant privacy issues.
  2. Security vulnerabilities: Real-time AI systems can be susceptible to cyberattacks, especially if they process sensitive data without robust security measures. The rapid evolution of AI introduces new vulnerabilities, necessitating strong data protection mechanisms.
  3. Bias and discrimination: AI models trained on biased data can perpetuate or even amplify existing prejudices, leading to unfair outcomes in real-time applications. Addressing these biases is crucial to ensure equitable AI deployment.
  4. Lack of transparency: Real-time decision-making by AI systems can be opaque, making it challenging to understand or challenge outcomes, especially in critical areas like healthcare or criminal justice. This opacity can undermine trust and accountability.
  5. Operational risks: Dependence on real-time AI can lead to overreliance on automated systems, potentially resulting in operational failures if the AI system malfunctions or provides incorrect outputs. Ensuring human oversight is essential to mitigate such risks.
Explore AI cybersecurity solutions

Privacy: Apple’s ace in the hole

Unlike many competitors, Apple processes much of its AI functionality on-device, leveraging its latest A18 and A18 Pro chips, specifically designed for high-performance, energy-efficient machine learning. For tasks requiring greater computational power, Apple employs Private Cloud Compute, a system that processes data securely without storing or exposing it to third parties.

Apple’s long-standing reputation for prioritizing privacy gives it a competitive edge. Yet, even with robust safeguards, no system is infallible. Compromised AI features — especially those tied to messaging and notifications — could become a goldmine for social engineering schemes, threatening the very trust that Apple has built its brand upon.

Economic upside vs. security downside

The economic scale of this innovation is staggering, as it pushes companies to adopt AI-driven solutions to stay competitive. However, this proliferation amplifies security challenges. The widespread adoption of real-time AI raises the stakes for all users, from everyday consumers to enterprise-level stakeholders.

To stay ahead of potential threats, Apple has expanded its Security Bounty Program, offering rewards of up to $1 million for identifying vulnerabilities in its AI systems. This proactive approach underscores the company’s commitment to evolving alongside emerging threats.

The AI double-edged sword

The arrival of Apple Intelligence is a watershed moment in consumer technology. It promises unparalleled convenience and personalization while also highlighting the inherent risks of entrusting critical processes to AI. Apple’s dedication to privacy offers a significant buffer against these risks, but the rapid evolution of AI demands constant vigilance.

The question isn’t whether AI will become an integral part of our lives — it already has. The real challenge lies in ensuring that this technology remains a force for good, safeguarding the trust and security of those who rely on it. As Apple paves the way for AI in the consumer market, the balance between innovation and protection has never been more critical.

The post Apple Intelligence raises stakes in privacy and security appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Testing the limits of generative AI: How red teaming exposes vulnerabilities in AI models Charles Owen-Jackson
    With generative artificial intelligence (gen AI) on the frontlines of information security, red teams play an essential role in identifying vulnerabilities that others can overlook. With the average cost of a data breach reaching an all-time high of $4.88 million in 2024, businesses need to know exactly where their vulnerabilities lie. Given the remarkable pace at which they’re adopting gen AI, there’s a good chance that some of those vulnerabilities lie in AI models themselves — or the data us
     

Testing the limits of generative AI: How red teaming exposes vulnerabilities in AI models

17 de Dezembro de 2024, 11:00

With generative artificial intelligence (gen AI) on the frontlines of information security, red teams play an essential role in identifying vulnerabilities that others can overlook.

With the average cost of a data breach reaching an all-time high of $4.88 million in 2024, businesses need to know exactly where their vulnerabilities lie. Given the remarkable pace at which they’re adopting gen AI, there’s a good chance that some of those vulnerabilities lie in AI models themselves — or the data used to train them.

That’s where AI-specific red teaming comes in. It’s a way to test the resilience of AI systems against dynamic threat scenarios. This involves simulating real-world attack scenarios to stress-test AI systems before and after they’re deployed in a production environment. Red teaming has become vitally important in ensuring that organizations can enjoy the benefits of gen AI without adding risk.

IBM’s X-Force Red Offensive Security service follows an iterative process with continuous testing to address vulnerabilities across four key areas:

  1. Model safety and security testing
  2. Gen AI application testing
  3. AI platform security testing
  4. MLSecOps pipeline security testing

In this article, we’ll focus on three types of adversarial attacks that target AI models and training data.

Prompt injection

Most mainstream gen AI models have safeguards built in to mitigate the risk of them producing harmful content. For example, under normal circumstances, you can’t ask ChatGPT or Copilot to write malicious code. However, methods such as prompt injection attacks and jailbreaking can make it possible to work around these safeguards.

One of the goals of AI red teaming is to deliberately make AI “misbehave” — just as attackers do. Jailbreaking is one such method that involves creative prompting to get a model to subvert its safety filters. However, while jailbreaking can theoretically help a user carry out an actual crime, most malicious actors use other attack vectors — simply because they’re far more effective.

Prompt injection attacks are much more severe. Rather than targeting the models themselves, they target the entire software supply chain by obfuscating malicious instructions in prompts that otherwise appear harmless. For instance, an attacker might use prompt injection to get an AI model to reveal sensitive information like an API key, potentially giving them back-door access to any other systems that are connected to it.

Red teams can also simulate evasion attacks, a type of adversarial attack whereby an attacker subtly modifies inputs to trick a model into classifying or misinterpreting an instruction. These modifications are usually imperceptible to humans. However, they can still manipulate an AI model into taking an undesired action. For example, this might include changing a single pixel in an input image to fool the classifier of a computer vision model, such as one intended for use in a self-driving vehicle.

Explore X-Force Red Offensive Security Services

Data poisoning

Attackers also target AI models during training and development, hence it’s essential that red teams simulate the same attacks to identify risks that could compromise the whole project. A data poisoning attack happens when an adversary introduces malicious data into the training set, thereby corrupting the learning process and embedding vulnerabilities into the model itself. The result is that the entire model becomes a potential entry point for further attacks. If training data is compromised, it’s usually necessary to retrain the model from scratch. That’s a highly resource-intensive and time-consuming operation.

Red team involvement is vital from the very beginning of the AI model development process to mitigate the risk of data poisoning. Red teams simulate real-world data poisoning attacks in a secure sandbox environment air-gapped from existing production systems. Doing so provides insights into how vulnerable the model is to data poisoning and how real threat actors might infiltrate or compromise the training process.

AI red teams can proactively identify weaknesses in data collection pipelines, too. Large language models (LLMs) often draw data from a huge number of different sources. ChatGPT, for example, was trained on a vast corpus of text data from millions of websites, books and other sources. When building a proprietary LLM, it’s crucial that organizations know exactly where they’re getting their training data from and how it’s vetted for quality. While that’s more of a job for security auditors and process reviewers, red teams can use penetration testing to assess a model’s ability to resist flaws in its data collection pipeline.

Model inversion

Proprietary AI models are usually trained, at least partially, on the organization’s own data. For instance, an LLM deployed in customer service might use the company’s customer data for training so that it can provide the most relevant outputs. Ideally, models should only be trained based on anonymized data that everyone is allowed to see. Even then, however, privacy breaches may still be a risk due to model inversion attacks and membership inference attacks.

Even after deployment, gen AI models can retain traces of the data that they were trained on. For instance, the team at Google’s DeepMind AI research laboratory successfully managed to trick ChatGPT into leaking training data using a simple prompt. Model inversion attacks can, therefore, allow malicious actors to reconstruct training data, potentially revealing confidential information in the process.

Membership inference attacks work in a similar way. In this case, an adversary tries to predict whether a particular data point was used to train the model through inference with the help of another model. This is a more sophisticated method in which an attacker first trains a separate model – known as a membership inference model — based on the output of the model they’re attacking.

For example, let’s say a model has been trained on customer purchase histories to provide personalized product recommendations. An attacker may then create a membership inference model and compare its outputs with those of the target model to infer potentially sensitive information that they might use in a targeted attack.

In either case, red teams can evaluate AI models for their ability to inadvertently leak sensitive information directly or indirectly through inference. This can help identify vulnerabilities in training data workflows themselves, such as data that hasn’t been sufficiently anonymized in accordance with the organization’s privacy policies.

Building trust in AI

Building trust in AI requires a proactive strategy, and AI red teaming plays a fundamental role. By using methods like adversarial training and simulated model inversion attacks, red teams can identify vulnerabilities that other security analysts are likely to miss.

These findings can then help AI developers prioritize and implement proactive safeguards to prevent real threat actors from exploiting the very same vulnerabilities. For businesses, the result is reduced security risk and increased trust in AI models, which are fast becoming deeply ingrained across many business-critical systems.

The post Testing the limits of generative AI: How red teaming exposes vulnerabilities in AI models appeared first on Security Intelligence.

❌
❌