4 ways to bring cybersecurity into your community

By Jennifer Gregory

February 14, 2025, 11:00

It’s easy to focus on technology when talking about cybersecurity. However, the best prevention measures rely on the education of those who use technology. Organizations training their employees is the first step. But the industry needs to expand the concept of a culture of cybersecurity and take it from where it currently stands as an organizational responsibility to a global perspective.

When every person who uses technology — for work, personal use and school — views cybersecurity as their responsibility, it becomes much harder for cyber criminals to successfully launch attacks. Achieving this goal starts with taking precautions to reduce personal risk through securing devices and data. However, each of us also needs to recognize and report all potential cyber threats we run across.

A global culture of cybersecurity is only possible when corporate organizations, nonprofits and universities all work to spread the message and include outreach in their mission. Here are four ways to take cybersecurity into the community to help create a global culture of cybersecurity:

1. Launch a mentorship initiative

A key element of a global culture of cybersecurity is making sure the industry has a pipeline of diverse and skilled professionals. Because cybersecurity offers non-traditional career pathways, including badging and certifications, job seekers often struggle to determine the best route. When cybersecurity professionals provide support to those who are interested in joining our ranks, we can remove barriers to new cybersecurity professionals entering the field.

For example, the nonprofit Women in Cybersecurity offers a formal nine-month mentorship program that helps members strengthen their skills in areas such as influence, negotiation, leadership, work/life harmony and communication. In 2021, the program matched 1,115 mentees from entry-level to senior level with experienced mentors to help them navigate their journey.

Organizations launching mentorship programs should start by determining their target audiences, such as underserved communities, university students or entry-level professionals. Next, they should determine the framework for the program, including creating a curriculum for mentors, determining how to recruit mentors and matching mentors with mentees. After launching the initiative, it’s important to monitor the program and make changes based on feedback provided by participants.

2. Focus on the next generation

Reaching out to students, especially those in high school and middle school, is a great way to help fill the professional pipeline by targeting young people who are making future career decisions. At the same time, members of this demographic are heavy users of technology and can help spread the education they receive to their families and peers. Iowa State University’s Center for Cybersecurity Innovation & Outreach (CyIO) offers several programs for high schoolers. Since 2007, CyIO has sponsored Innovate-IT clubs, which focus on either game design or cyber defense, at Iowa high schools. The Iowa Cyber Hub also hosts the Youth Cyber Summit every October, which provides activities such as a Capture the Flag challenge, interactive security demos, discussions about career pathways and panel discussions regarding cybersecurity careers.

Organizations looking to nurture the next generation should start by determining their key message and goals, such as educating or encouraging kids to become cybersecurity professionals. Next, decide how to get the message across to the right audience, such as clubs or events. Then, partner with schools or nonprofits that focus on kids to create the programming and get the word out.

3. Look for ways to add humor and fun

Instead of presenting lectures and offering dry information, look for fun ways to get your message out to the community. Balancing humor with information encourages people to pay attention and, most importantly, remember your message. Start with the core message you want to communicate, and then identify your specific target audience. Next, brainstorm ways that will appeal to your audience so you can get your message across while captivating their attention. Be sure to test out your idea with several people in your target audience before going live to make sure you are hitting the mark.

Videos are a great method of reaching people in a lighthearted way. In honor of Cybersecurity Month, Iowa State University created a catchy video called Cyber House Rock!, which encourages people to “encrypt your data, make passwords strong, to keep away all the malware, spam and email scams.” BuzzFeed’s Internet Privacy Prank uses the “show, not tell” approach to help people see how easy it is for cyber criminals to find their information.

Events are also a great way to add humor and fun. Princeton’s cybersecurity team got decked out for its “War Games” showing with an 80s dress-up night. After the show was over, attendees talked about what had changed in terms of information security since the movie was released in 1983. At other events, the team adds fun by bringing a Wheel of Fortune so people can spin it to win prizes while learning about cybersecurity.

4. Create an ambassador program to help friends and families

While mentorships help future and current professionals, Iowa State helps fill a big educational void. The Cybersecurity Ambassador Program, offered through the Iowa Cyber Hub, empowers Iowans by reaching out to businesses, communities, schools, friends and families. The Ambassadors provide the knowledge and tools to help others safely navigate the internet, such as avoiding scams, bullying and privacy breaches.

By focusing on residents and students as well as businesses, organizations can use these types of programs to provide education that is often overlooked. Launching an ambassador program is similar to creating a mentorship program, but organizations need to focus on reaching the people most in need, such as retired adults and teenagers. Ambassador programs can also offer community events on specific topics, like keeping your data private and what to do if your computer is hit by ransomware.

While it’s easy for organizations to focus on reducing their own vulnerabilities, the digital world is safer when everyone is educated and engaged about cybersecurity. By actively working to achieve this culture, organizations, nonprofits and universities can make big strides to make the internet and technology safer for all.

The post 4 ways to bring cybersecurity into your community appeared first on Security Intelligence.

How red teaming helps safeguard the infrastructure behind AI models

By Charles Owen-Jackson

February 13, 2025, 11:00

Artificial intelligence (AI) is now squarely on the frontlines of information security. However, as is often the case when the pace of technological innovation is very rapid, security often ends up being a secondary consideration. This is increasingly evident from the ad-hoc nature of many implementations, where organizations lack a clear strategy for responsible AI use.

Attack surfaces aren’t just expanding due to risks and vulnerabilities in AI models themselves but also in the underlying infrastructure that supports them. Many foundation models, as well as the data sets used to train them, are open-source and readily available to developers and adversaries alike.

Unique risks to AI models

According to Ruben Boonen, CNE Capability Development Lead at IBM: “One problem is that you have these models hosted on giant open-source data stores. You don’t know who created them or how they were modified, and there are a number of issues that can occur here. For example, let’s say you use PyTorch to load a model hosted on one of these data stores, but it has been changed in a way that’s undesirable. It can be very hard to tell because the model might behave normally in 99% of cases.”

Recently, researchers discovered thousands of malicious files hosted on Hugging Face, one of the largest repositories for open-source generative AI models and training data sets. These included around a hundred malicious models capable of injecting malicious code onto users’ machines. In one case, hackers set up a fake profile masquerading as genetic testing startup 23AndMe to deceive users into downloading a compromised model capable of stealing AWS passwords. It was downloaded thousands of times before finally being reported and removed.
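A common first line of defense against tampered artifacts like these is to pin and verify a cryptographic digest of any downloaded model file before deserializing it. The sketch below uses only the Python standard library; the file path and digest in the usage comment are hypothetical:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact's digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"digest mismatch for {path}: got {actual}")

# Usage (hypothetical path and digest):
# verify_model("models/resnet.pt", "9f86d081884c7d659a2feaa0c55ad015...")
# model = torch.load("models/resnet.pt", weights_only=True)  # then load
```

Verifying the digest catches silent modification of the artifact in transit or on the hosting platform; for pickle-based formats, newer PyTorch versions also support `torch.load(..., weights_only=True)` to limit what deserialization can execute.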

In another recent case, red team researchers discovered vulnerabilities in ChatGPT’s API, in which a single HTTP request elicited two responses, indicating an unusual code path that could theoretically be exploited if left unaddressed. This, in turn, could lead to data leakage, denial of service attacks and even escalation of privileges. The team also discovered vulnerabilities in plugins for ChatGPT, potentially resulting in account takeover.

While open-source licensing and cloud computing are key drivers of innovation in the AI space, they’re also a source of risk. On top of these AI-specific risk areas, general infrastructure security concerns also apply, such as vulnerabilities in cloud configurations or poor monitoring and logging processes.

AI models are the new frontier of intellectual property theft

Imagine pouring huge amounts of financial and human resources into building a proprietary AI model, only to have it stolen or reverse-engineered. Unfortunately, model theft is a growing problem, not least because AI models often contain sensitive information and can potentially reveal an organization’s secrets should they end up in the wrong hands.

One of the most common mechanisms for model theft is model extraction, whereby attackers access and exploit models through API vulnerabilities. This can potentially grant them access to black-box models — like ChatGPT — at which point they can strategically query the model to collect enough data to reverse engineer it.

In most cases, AI systems run on cloud architecture rather than local machines. After all, the cloud provides the scalable data storage and processing power required to run AI models easily and accessibly. However, that accessibility also increases the attack surface, allowing adversaries to exploit vulnerabilities like misconfigurations in access permissions.

“When companies provide these models, there are usually client-facing applications delivering services to end users, such as an AI chatbot. If there’s an API that tells it which model to use, attackers could attempt to exploit it to access an unreleased model,” says Boonen.

Red teams keep AI models secure

Protecting against model theft and reverse engineering requires a multifaceted approach that combines conventional security measures, like secure containerization practices and access controls, with offensive security measures.

The latter is where red teaming comes in. Red teams can proactively address several aspects of AI model theft, such as:

  • API attacks: By systematically querying black-box models in the same way adversaries would, red teams can identify vulnerabilities like suboptimal rate limiting or insufficient response filtering.
  • Side-channel attacks: Red teams can also carry out side-channel analyses, in which they monitor metrics like CPU and memory usage in an attempt to glean information about the model size, architecture or parameters.
  • Container and orchestration attacks: By assessing containerized AI dependencies like frameworks, libraries, models and applications, red teams can identify orchestration vulnerabilities, such as misconfigured permissions and unauthorized container access.
  • Supply chain attacks: Red teams can probe entire AI supply chains spanning multiple dependencies hosted in different environments to ensure that only trusted components like plugins and third-party integrations are being used.

A thorough red teaming strategy can simulate the full scope of real-world attacks against AI infrastructure to reveal gaps in security and incident response plans that could lead to model theft.
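The API-attack item above can be made concrete with a naive sliding-window monitor that flags clients whose query volume looks like systematic extraction. This is an illustrative sketch: real deployments would also inspect query diversity and response entropy, and the thresholds here are arbitrary:

```python
from collections import deque

class ExtractionMonitor:
    """Flag API clients whose query rate looks like systematic model extraction.

    A naive sliding-window heuristic; the threshold and window are illustrative.
    """
    def __init__(self, max_queries: int = 100, window_seconds: float = 60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = {}  # client_id -> deque of query timestamps

    def record(self, client_id: str, timestamp: float) -> bool:
        """Record one query; return True if the client should be flagged."""
        q = self.history.setdefault(client_id, deque())
        q.append(timestamp)
        # Age out queries that fell outside the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_queries
```

A red team probing an API would try to stay just under whatever limit is in place, which is exactly why suboptimal rate limiting shows up as a finding.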

Mitigating the problem of excessive agency in AI systems

Most AI systems have a degree of autonomy with regard to how they interface with different systems and respond to prompts. After all, that’s what makes them useful. However, if systems have too much autonomy, functionality or permissions — a concept OWASP calls “excessive agency” — they can end up triggering harmful or unpredictable outputs and processes or leaving gaps in security.

Boonen warns that the components multimodal systems rely on to process inputs, such as optical character recognition (OCR) for PDF files and images, “can introduce vulnerabilities if they’re not properly secured”.

Granting an AI system excessive agency also expands the attack surface unnecessarily, thus giving adversaries more potential entry points. Typically, AI systems designed for enterprise use are integrated into much broader environments spanning multiple infrastructures, plugins, data sources and APIs. Excessive agency is what happens when these integrations result in an unacceptable trade-off between security and functionality.

Let’s consider an example where an AI-powered personal assistant has direct access to an individual’s Microsoft Teams meeting recordings stored in OneDrive for Business, the purpose being to summarize content in those meetings in a readily accessible written format. However, let’s imagine that the plugin doesn’t only have the ability to read meeting recordings but also everything else stored in the user’s OneDrive account, in which many confidential information assets are also stored. Perhaps the plugin even has write capabilities, in which case a security flaw could potentially grant attackers an easy pathway for uploading malicious content.
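A minimal sketch of the least-privilege alternative is a deny-by-default permission check that confines a hypothetical meeting-summary plugin to read-only access under a single folder. The plugin name and paths below are invented for illustration:

```python
from pathlib import PurePosixPath

# Hypothetical grant table: the summarizer may only read one folder,
# and gets no write access anywhere.
GRANTS = {
    "meeting-summarizer": {
        "root": PurePosixPath("/onedrive/Recordings"),
        "actions": {"read"},
    },
}

def is_allowed(plugin: str, action: str, path: str) -> bool:
    """Deny by default; allow only granted actions inside the plugin's subtree."""
    grant = GRANTS.get(plugin)
    if grant is None or action not in grant["actions"]:
        return False
    p = PurePosixPath(path)
    return grant["root"] == p or grant["root"] in p.parents
```

Scoping the grant this way means a flaw in the plugin exposes only meeting recordings, not the confidential assets elsewhere in the account, and write-path attacks are off the table entirely.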

Once again, red teaming can help identify flaws in AI integrations, especially in environments where many different plugins and APIs are in use. Simulated attacks and comprehensive analyses can surface vulnerabilities and inconsistencies in access permissions, as well as cases where access rights are unnecessarily lax. Even when red teams don’t find any security vulnerabilities, they can still provide insight into how to reduce the attack surface.

The post How red teaming helps safeguard the infrastructure behind AI models appeared first on Security Intelligence.

When you shouldn’t patch: Managing your risk factors

By Sue Poremba

February 12, 2025, 11:00

Look at any article with advice about best practices for cybersecurity, and about third or fourth on that list, you’ll find something about applying patches and updates quickly and regularly. Patching for known vulnerabilities is about as standard as it gets for good cybersecurity hygiene, right up there with using multi-factor authentication and thinking before you click on links in emails from unknown senders.

So imagine my surprise when attending Qualys QSC24 in San Diego to hear a number of conference speakers say that patching shouldn’t be an automatic reaction. In fact, they say, there are times when it is better not to patch at all.

No, you don’t need to fix everything, says Dilip Bachwani, Chief Technology Officer with Qualys.

“It’s not practical,” Bachwani adds. “Even if there is a vulnerability, it may not apply in your environment.” It could be an application that isn’t an internet-facing asset or something secured through other controls.

Knowing your risk factor

The knee-jerk reaction when a new patch is released is to get it installed as quickly as possible to prevent a vulnerability from turning into a cyber incident. However, Bachwani and his Qualys colleagues stress that security teams need to take a step back and evaluate their organization’s risk threshold.

What that evaluation will first discover is a lot of vulnerabilities across their infrastructure. A study by Coalition expects the total number of common vulnerabilities and exposures (CVEs) to increase by 25% in 2024 to 34,888 vulnerabilities, or nearly 3,000 per month.

“New vulnerabilities are published at a rapid rate and growing,” Tiago Henriques, Coalition’s Head of Research, says. “Most organizations are experiencing alert fatigue and confusion about what to patch first to limit their overall exposure and risk.”

With the steady increase in the number of CVEs, it is easy to think that every vulnerability is critical — and if every vulnerability is given an equal risk value, patching becomes overwhelming. The researchers at Qualys recommend prioritizing the risk involved with each vulnerability so that you can determine what should be patched first and what might not need to be patched at all.
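One illustrative way to operationalize that recommendation is a composite score that weights CVSS by exposure and asset value. The weights below are hypothetical and should be tuned to your own risk threshold:

```python
def risk_score(cvss: float, internet_facing: bool, asset_criticality: int,
               compensating_controls: bool) -> float:
    """Illustrative composite risk score; the multipliers are assumptions.

    asset_criticality: 1 (low) through 3 (crown jewels).
    """
    score = cvss * asset_criticality
    score *= 2.0 if internet_facing else 1.0   # reachable vulns matter more
    score *= 0.5 if compensating_controls else 1.0
    return score

def prioritize(vulns: list) -> list:
    """Highest composite risk first; the tail are candidates for not patching at all."""
    return sorted(
        vulns,
        key=lambda v: risk_score(v["cvss"], v["internet_facing"],
                                 v["criticality"], v["controls"]),
        reverse=True,
    )
```

Note how the ordering can invert raw severity: a CVSS 9.8 flaw on an isolated, compensated internal asset can rank below a CVSS 6.5 flaw on an internet-facing crown jewel.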

How to prioritize your organization’s vulnerabilities

Prioritizing vulnerabilities requires knowing all of your assets across the organization and identifying and monitoring the attack surface. However, Qualys research found that only 9% of companies actively monitor 100% of their attack surface. Shadow IT, third-party vendors and their risks, digital transformations undertaken too quickly without assessing the technologies and assets added, and a failure to recognize emerging threat vectors are just some of the reasons organizations are unable to properly monitor their attack surface.

Deploying an attack surface management program will identify what technologies are attached to your network and where and what assets need protection. The critical requirements of an attack surface management program are:

  • Visibility across hybrid IT
  • Dynamic cybersecurity needs with rapid identification
  • Unauthorized software tracking in real-time
  • Finding and remediating blind spots

The more familiar you become with the systems accessing your network, the easier it will be to know your corporate assets and prioritize their importance. When levels of risk tolerance are assigned to these assets, it will then be easier to prioritize critical and non-critical vulnerabilities to be patched or, in some cases, not patched.

When to slow down the patching process

Patching protocols should be unique to your organization, based on your internal measures of mission criticality and risk tolerance. Whereas one organization may decide that the most critical vulnerabilities must be patched immediately, another may find that seven days is the right time frame to reduce risk for its most important assets. Patch management programs typically tier assets, beginning with the most critical (those that cannot afford downtime if something goes wrong) and moving down through secondary tiers with longer patch windows.

But there are times when it is smart to slow down or even eliminate the patching process. They include:

  • An important and time-sensitive project is in progress and requires uninterrupted computer time
  • The patch has reported bugs or creates compatibility problems with the application in a test sample
  • The vulnerable software is limited in scope within the organization and can be isolated
  • Other mitigating controls can be put in place
  • The application never uses the functions with the known vulnerability
  • The costs of patching outweigh the benefits. If the code is outdated and needs to be rewritten, for example, then it doesn’t make sense to take the time and expense to apply the patch.
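The deferral criteria above can be folded into a simple triage helper. The field names are invented, and mapping each criterion to "defer" versus "skip" is a judgment call for your own program:

```python
def patch_decision(vuln: dict) -> str:
    """Map the deferral criteria onto a decision: 'patch', 'defer' or 'skip'.

    Illustrative sketch; a real program would weigh these against risk tolerance
    rather than treat them as hard rules.
    """
    # Temporary conditions: revisit once they clear.
    if vuln.get("critical_project_in_progress"):
        return "defer"
    if vuln.get("patch_has_known_bugs") or vuln.get("compatibility_issues_in_testing"):
        return "defer"
    if vuln.get("software_isolated") or vuln.get("mitigating_controls"):
        return "defer"
    # Structural conditions: the patch may never be worth applying.
    if vuln.get("vulnerable_function_unused"):
        return "skip"
    if vuln.get("code_scheduled_for_rewrite"):
        return "skip"
    return "patch"
```

A helper like this also leaves an auditable reason for every non-patch decision, which matters when insurers or auditors ask why a CVE was left open.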

Cybersecurity insurance and patching

With the increase in CVEs and the ever-looming threat of a cyber incident, many organizations are looking at how to maximize their cybersecurity insurance. Given the strict rules and audits required to qualify for coverage, will an approach of patching only when truly necessary downgrade your organization in the eyes of insurers?

Bachwani says no. “I actually think a solution like this will enable cyber insurers to be more effective.”

The way the insurance marketplace works today is that it is less focused on the company’s internal data and more on the organization’s overall cybersecurity posture.

“If I’m able to clearly demonstrate that we internally have really good hygiene, my insurance should be lower,” says Bachwani.

To patch or not to patch?

In the end, the decision on whether or not to patch comes down to a single question: What is the value to the business of patching or not patching? And that is determined by the organization’s risk tolerance. Recognizing the consequences of downtime or a cyber incident will help you prioritize the critical vulnerabilities that warrant time and resources to patch. Being willing to accept that you can’t patch everything will also give your team the space to focus on the bigger risks.

The post When you shouldn’t patch: Managing your risk factors appeared first on Security Intelligence.

The straight and narrow — How to keep ML and AI training on track

By Doug Bonderud

February 11, 2025, 11:00

Artificial intelligence (AI) and machine learning (ML) have entered the enterprise environment.

According to the IBM AI in Action 2024 Report, two broad groups are onboarding AI: Leaders and learners. Leaders are seeing quantifiable results, with two-thirds reporting 25% (or greater) boosts to revenue growth. Learners, meanwhile, say they’re following an AI roadmap (72%), but just 40% say their C-suite fully understands the value of AI investment.

One thing they have in common? Challenges with data security. Despite their success with AI and ML, security remains the top concern. Here’s why.

Full steam ahead: How AI and ML get smarter

Historically, computers did what they were told. Thinking outside the box wasn’t an option — lines of code dictated what was possible and permissible.

AI and ML models take a different approach. Instead of rigid structures, AI and ML models are given general guidelines. Companies supply vast amounts of training data that help these models “learn,” in turn improving their output.

A simple example is an AI tool designed to identify images of dogs. The underlying ML structures provide basic guidance: dogs have four legs, two ears, a tail and fur. Thousands of images of both dogs and not-dogs are then provided to the model. The more pictures it “sees,” the better it becomes at differentiating dogs.

Off the rails: The risks of unauthorized model modification

If attackers can gain access to AI models, they can modify model outputs. Consider the example above. Malicious actors compromise business networks and flood training models with unlabeled images of cats and images incorrectly labeled as dogs. Over time, model accuracy suffers and outputs are no longer reliable.
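The dog-classifier scenario can be reduced to a toy one-dimensional model to show how mislabeled training data shifts a learned decision boundary; all numbers here are illustrative:

```python
from statistics import mean

def fit_threshold(points):
    """Tiny 1-D 'dog vs. cat' model: threshold halfway between the class means."""
    dogs = [x for x, y in points if y == 1]
    cats = [x for x, y in points if y == 0]
    return (mean(dogs) + mean(cats)) / 2

def accuracy(points, threshold):
    """Fraction classified correctly ('dog' means x > threshold)."""
    return mean(1.0 if (x > threshold) == (y == 1) else 0.0 for x, y in points)

# Clean training data: cats cluster low, dogs cluster high (label 1 = dog).
clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]

# Poisoning: attackers flood training with cat-like samples mislabeled as dogs.
poisoned = clean + [(0.5, 1), (1.5, 1), (2.5, 1)]

t_clean = fit_threshold(clean)        # cleanly separates the two clusters
t_poisoned = fit_threshold(poisoned)  # pulled down toward the cat cluster
```

With the clean threshold (5.0), a borderline image at 4.0 is correctly called a cat; the poisoned threshold (3.375) misclassifies it as a dog, which is exactly the silent accuracy loss described above.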

Forbes highlights a recent competition that saw hackers trying to “jailbreak” popular AI models and trick them into producing inaccurate or harmful content. The rise of generative tools makes this kind of protection a priority — in 2023, researchers discovered that by simply adding strings of random symbols to the end of queries, they could convince generative AI (gen AI) tools to provide answers that bypassed model safety filters.

And this concern isn’t just conceptual. As noted by The Hacker News, an attack technique known as “Sleepy Pickle” poses significant risks for ML models. By inserting a malicious payload into pickle files — used to serialize Python object structures — attackers can change how models weigh and compare data and alter model outputs. This could allow them to generate misinformation that causes harm to users, steal user data or generate content that contains malicious links.
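Because pickle can execute arbitrary callables during deserialization, one defensive sketch is an unpickler that refuses to resolve globals at all. This blocks Sleepy Pickle-style payloads at the cost of supporting only primitive and container data; for untrusted model weights, a non-executable format such as safetensors is the better answer:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Refuse to resolve any global during unpickling.

    Sleepy Pickle-style payloads smuggle callables (e.g. os.system) into the
    pickle stream; blocking find_class stops that class of attack, at the cost
    of only supporting primitive/container payloads.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize untrusted bytes without allowing code execution via globals."""
    return SafeUnpickler(io.BytesIO(data)).load()
```

An allowlist of known-safe classes is the usual middle ground when models genuinely need custom types, but deny-all is the right default for data you didn’t produce yourself.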

Staying the course: Three components for better security

To reduce the risk of compromised AI and ML, three components are critical:

1) Securing the data

Accurate, timely and reliable data underpins usable model outputs. The process of centralizing and correlating this data, however, creates a tempting target for attackers. If they can infiltrate large-scale AI data storage, they can manipulate model outputs.

As a result, enterprises need solutions that automatically and continuously monitor AI infrastructure for signs of compromise.

2) Securing the model

Changes to AI and ML models can lead to outputs that look legitimate but have been modified by attackers. At best, these outputs inconvenience customers and slow down business processes. At worst, they could negatively impact both reputation and revenue.

To reduce the risk of model manipulation, organizations need tools capable of identifying security vulnerabilities and detecting misconfigurations.

3) Securing the usage

Who’s using models? With what data? And for what purpose? Even if data and models are secured, use by malicious actors may put companies at risk. Continuous compliance monitoring is critical to ensure legitimate use.

Making the most of models

AI and ML tools can help enterprises discover data insights and drive increased revenue. If compromised, however, models can be used to deliver inaccurate outputs or deploy malicious code.

With Guardium AI security, businesses are better equipped to manage the security risks of sensitive models. See how.

The post The straight and narrow — How to keep ML and AI training on track appeared first on Security Intelligence.

Reducing ransomware recovery costs in education

By Jennifer Gregory

February 10, 2025, 11:00

2024 continued the trend of ransomware attacks in the education sector making headlines. The year opened with Freehold Township School District in New Jersey canceling classes due to a ransomware attack. Students at New Mexico Highlands University missed classes for several days while employees experienced disruption of their paychecks after a ransomware attack. The attack on the Alabama Department of Education served as a reminder that all school systems are vulnerable.

Ransomware attacks in education decreasing

The year closes with some positive news about ransomware in the education sector. Sophos State of Ransomware in Education 2024 found that ransomware attacks on educational institutions decreased in 2024. Attacks on higher-education institutions dropped from 79% reporting attacks in 2023 to 66% in 2024. Lower education saw a similar decrease, from 80% in 2023 to 63% in 2024. However, the attack rates for both are still higher than the global cross-sector average of 59%.

Ransomware affects education quality

Not surprisingly, a recent study also found that students are impacted by ransomware attacks on the education sector. A study from Action1 found that the majority (64%) of education IT workers report that ransomware impacts education quality. Researchers found that the reasons for the attacks are manifold: 44% of schools devote only 10% of their IT budget to cybersecurity, and the majority (78%) do not employ cybersecurity specialists.

In an NPR article, Noelle Ellerson Ng with the School Superintendents Association said that the reason for targeting the education sector is that schools are often low-hanging fruit. Additionally, she points to the fact that school systems, which collect a lot of valuable data from both students and employees, often are the largest employers in a community.

“That makes it very, very ripe,” says Ng. “And then you layer on the fact that [the data] is so sensitive and so longitudinal and so personal, and there’s a huge vulnerability.”

Reducing cyber risks in the education sector

Even with the decline, schools should continue to focus on reducing their vulnerabilities.

Here are some ways schools can reduce ransomware risk:

  • Install antivirus and anti-malware software on all devices. Be sure to also include tablets and phones. Make sure that updates and patches are installed on a timely basis.
  • Provide training to all employees and students. Teach good cybersecurity practices, including choosing strong passwords and how to avoid being a victim of phishing. Continually send reminders on not clicking on unknown links or downloading suspicious files.
  • Install filtering software. By filtering out potentially malicious links and files, you can reduce the chance of students or employees falling victim to a phishing scheme.
  • Use multi-factor authentication (MFA). Because ransomware attacks can start with unauthorized access, educational organizations should take extra steps to ensure that every user who logs in is who they claim to be. With MFA, users must confirm their identity via email, text message or a token in addition to a password, adding an extra layer of security.
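The "token" factor in MFA is typically a time-based one-time password (TOTP). The sketch below illustrates the RFC 6238 mechanism using only the standard library; it is a teaching sketch, not a production implementation (no replay protection or clock-drift window):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def verify(secret: bytes, code: str, at=None) -> bool:
    """Constant-time comparison of a submitted code against the current one."""
    return hmac.compare_digest(totp(secret, at), code)
```

Because the code is derived from a shared secret plus the current 30-second window, a stolen password alone is no longer enough to log in, which is the point of the extra factor.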

Recovery costs have increased

While the decrease in attacks is positive, Sophos’ report found a troubling trend: recovery costs for ransomware attacks in education have more than doubled. Lower-education organizations reported a mean cost of $3.76 million to recover from a ransomware attack in 2024, up from $1.59 million in 2023. The increase was even sharper in higher education, where recovery costs almost quadrupled from 2023 to 2024 ($1.06 million to $4.02 million).

Here are ways to reduce recovery costs:

  • Back up your data. In addition to backing up data in real time, educational institutions should take precautions to secure the backups, such as by using air-gapped backups as well as immutable backups that cannot be erased. Sophos found that recovery costs for lower-education institutions whose backups were compromised were more than five times higher than for those that had a backup to revert to ($3 million versus $562,500).
  • Segment the network. When a ransomware attack happens on a segmented network, cyber criminals can encrypt only the portion of the network that they accessed. By reducing the amount of data breached and the systems impacted, schools can significantly reduce recovery time and costs.
  • Create an incident response plan. Recovery is often prolonged because schools fail to contain the ransomware quickly, and the resulting business disruption extends recovery time further. With an incident response plan built on the four fundamentals of planning, detection, recovery and post-incident actions, employees know exactly what to do when a ransomware attack occurs.

Propensity for paying ransom has increased

Recovery costs are also rising due to changes in ransom payment patterns and amounts. When an educational organization pays the ransom to regain access to its data, that payment adds substantially to the total recovery cost.

The Sophos report found that the propensity to pay the ransom has increased in both lower and higher education. In lower education, 67% of organizations attacked by ransomware paid the ransom in 2024, up from 56% in 2023. The share of higher-education institutions paying the ransom also increased, from 47% to 62%.

Additionally, ransom amounts have increased, which also adds to rising recovery costs. The average ransom demand in lower education was $3.9 million, with 44% of demands exceeding $5 million. Higher-education demands also increased, to an average of $4.4 million. Ransoms in critical infrastructure sectors, such as education, tend to be higher due to the urgency of restoring operations as well as the sensitive nature of the data. Cyber criminals also increasingly use double extortion, demanding one ransom to decrypt the data and a second ransom to not make the data public, which drives recovery costs up further.

The future of ransomware attacks in education

While the decrease in attacks is positive, educational organizations must pay attention to rising recovery costs. Because every dollar spent on recovering from an attack is a dollar not available for learning, the costs of ransomware recovery hit education even harder than other sectors. By proactively taking steps to both reduce risks and reduce recovery costs, educational organizations can keep their focus on what matters most: educating students.

The post Reducing ransomware recovery costs in education appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Will AI threaten the role of human creativity in cyber threat detection? Sue Poremba

Will AI threaten the role of human creativity in cyber threat detection?

February 7, 2025, 11:00

Cybersecurity requires creativity and thinking outside the box. It’s why more organizations are looking to people with soft skills and backgrounds outside the tech industry to address the cyber skills gap. As the threat landscape becomes more complex and nation-state actors launch innovative cyberattacks against critical infrastructure, there is a need for cybersecurity professionals who can anticipate these attacks and develop creative preventive solutions.

Of course, a lot of cybersecurity work is mundane and repetitive — monitoring logs, sniffing out false positive alerts, etc. Artificial intelligence (AI) has been a boon in filling the talent gaps when it comes to these types of tasks. But AI has also proven useful for many of the same things that creative thought brings to the threat table, such as addressing more sophisticated threat actors, the rapid increase of data and the hybrid infrastructure.

However, many companies are seeing the value of AI, especially generative AI (gen AI), in handling a greater share of creative work — not just in cybersecurity but also in areas like marketing and public relations, writing and research. But are these organizations using AI in a way that could threaten the importance of human creativity in threat detection?

Why creativity is important to cybersecurity

The very simple reason cybersecurity requires innovative people is that threat actors are already coming up with novel approaches to getting into your systems. Are they using gen AI to launch their attacks? You bet they are; phishing emails have never been more grammatically polished or realistic. But even before AI was available, threat actors were designing social engineering attacks that attracted clicks. Now, they have advanced beyond “how can we lure in victims” to “how can we get more out of a single attack after we lure in the victims.”

Creativity isn’t just coming up with new ideas. It is also the ability to see things through a big-picture lens, interpret historical data and know where to find information you might not realize you need. For example, creative thought is required for the following security tasks:

  • Threat hunting: predicting a threat actor’s next move or finding their tracks in a system
  • Finding buried evidence in a forensic search
  • Interpreting historical data in anomaly detection
  • Telling a real email or document from a well-designed phishing attack
  • Identifying new zero-day attacks and other malware variants exploiting otherwise unknown vulnerabilities
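The anomaly-detection task above still needs human judgment, but the statistical baseline it starts from can be simple. Below is a toy sketch (not any particular product’s method) that flags data points lying several standard deviations from the mean, such as an unusually large burst of log events:

```python
import statistics

def zscore_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of values more than `threshold` population
    standard deviations away from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

An analyst’s creativity comes in afterward: deciding whether a flagged spike is an attack, a backup job or a holiday traffic pattern.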

AI can augment human creativity, but gen AI gets a lot of things wrong. Users have found themselves in situations where AI claimed plagiarism on original work or AI hallucinations offered false information that nullified the research of human analysts. AI algorithms are also susceptible to bias that could lead to false positives.

Explore AI cybersecurity solutions

AI’s role in creative cybersecurity and beyond

While many creative people, in cybersecurity and beyond, see gen AI as a mixed blessing, many embrace the technology because it is a huge timesaver.

“Gen AI can help prototype much faster because the large language models can take over the refactoring and documentation of code,” wrote Aili McConnon in an IBM blog post. Also, the article pointed out, AI tools can help users create prototypes or visualize their ideas in minutes versus hours or days.

Creativity married to AI can help identify future leaders. According to research from IBM, two-thirds of company leaders found that AI is driving their growth, with four specific use cases — IT operations, user experience, virtual assistants and cybersecurity — most commonly favored by leaders.

“A Learner will typically copy predefined scenarios using out-of-the-box technologies,” Dr. Stephan Bloehdorn, Executive Partner and Practice Leader, AI, Analytics and Automation-IBM Consulting DACH, was quoted in the study. “But a Leader develops custom innovations.”

Over-reliance on AI?

As gen AI becomes more ubiquitous in the workplace, and as more creative folks and leaders rely on it to put their ideas in motion, are we also relying on the technology to the point that other necessary skills, like the ability to analyze data and create viable solutions, could degrade?

It is unclear if organizations are over-relying on gen AI, according to Stephen Kowski, Field CTO at SlashNext Email Security+, but it is becoming more of a designed feature due to unintended consequences related to resource allocation in organizations.

“While AI excels at processing massive volumes of threat data, real-world attacks constantly evolve beyond historical patterns, requiring human expertise to identify and respond to zero-day threats,” said Kowski in an email interview. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.”

Yet, Kris Bondi, CEO and Co-Founder of Mimoto, isn’t worried about AI leading to a degradation of skills — at least not for the foreseeable future.

“One of the biggest challenges for cybersecurity professionals is having too many alerts and too many false positives. AI is only able to automate a small percentage of responses. It’s more likely that AI will eventually automate additional requirements for someone deemed to be suspicious or the elevation of alert so that a human can analyze the situation,” Bondi said via email.

However, organizations should watch out for AI’s role in defining threat-hunting parameters. “If AI is the sole driver defining threat hunting parameters without spot-checks or audits, the threat intelligence approach could eventually be focused in the wrong area. The answer is more reliance on critical thinking and analytical skills,” said Bondi.

Embracing creativity in an AI-driven world

AI overall, and gen AI in particular, is going to be part of the business world going forward. It will play a vital role in how organizations and analysts approach cybersecurity defenses and mitigations. But the soft skills that creative thought depends on will still play an important and necessary role in cybersecurity.

“Rather than diminishing soft skills, AI integration has the opportunity to elevate the importance of communication, collaboration and strategic thinking, as security teams must effectively convey complex findings to stakeholders,” said Kowski. “The human elements of cybersecurity — leadership, adaptability and cross-functional partnership — become even more critical as AI handles the technical heavy lifting.”

The post Will AI threaten the role of human creativity in cyber threat detection? appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Hacking the mind: Why psychology matters to cybersecurity Jonathan Reed

Hacking the mind: Why psychology matters to cybersecurity

February 6, 2025, 11:00

In cybersecurity, too often, the emphasis is placed on advanced technology meant to shield digital infrastructure from external threats. Yet, an equally crucial — and underestimated — factor lies at the heart of all digital interactions: the human mind. Behind every breach is a calculated manipulation, and behind every defense, a strategic response. The psychology of cyber crime, the resilience of security professionals and the behaviors of everyday users combine to form the human element of cybersecurity. Arguably, it’s the most unpredictable and influential variable in our digital defenses.

To truly understand cybersecurity is to understand the human mind — both as a weapon and as a shield.

Peering into the mind of a cyber criminal

At the core of every cyberattack is a human, driven not just by code but by complex motivations and psychological impulses. Cyber criminals aren’t merely technologists. They are people with intentions, convictions, emotions and specific psychological profiles that drive their actions. Financial gain remains a primary incentive to launch attacks like ransomware. But some are also driven by ideological motives, or they relish the chance to outsmart advanced defenses so they can later brag about it in dark web forums.

Many cyber criminals share distinct personality traits: an inclination for risk-taking, problem-solving prowess and an indifference to ethical boundaries. Furthermore, the physical and digital distance inherent in online crime can create a psychological disconnect, minimizing the moral weight of their actions. This environment enables cyber criminals to justify their behavior in ways they might not if they had to face their victims in person. Equipped with these psychological “advantages,” cyber criminals excel in social engineering tactics. They manipulate people instead of systems to gain unauthorized access.

Exploiting the human factor with social engineering

One of the most powerful weapons in a cyber criminal’s arsenal isn’t high-tech malware but the vulnerability of the human mind. Social engineering attacks, like phishing, vishing (voice phishing) and smishing (SMS phishing), exploit non-technological human factors like trust, fear, urgency and curiosity. And these tactics are alarmingly effective. A recent report from Verizon found that the human element factored into 68% of data breaches, underscoring the vulnerability of human interactions.

Phishing attacks, for instance, are designed to create a sense of urgency, fear or curiosity. Attackers manipulate users into clicking malicious links or revealing sensitive information. The success of these attacks depends on creating a false sense of trust and authority, preying on our innate tendencies. Understanding these methods is not only crucial for developing technical countermeasures but also for educating users to resist psychological manipulation.

The mental fortitude of cyber professionals

Defending against cyber threats requires more than solid technical skills; it demands resilience, ethical conviction and a keen understanding of human behavior. Cyber professionals operate in a high-stakes environment and face unrelenting pressure. Mental resilience enables them to rapidly respond to breaches, restore security and learn from the incident.

Creativity and adaptability are also indispensable in cybersecurity. As cyber criminals constantly refine their tactics, security professionals need to anticipate these moves. They, too, must innovate by developing new countermeasures before an attack even occurs. Like a chess match, staying ahead of intruders requires ingenuity that goes beyond technical skills. The best security teams have the ability to see beyond conventional approaches and the courage to pioneer novel defenses.

Finally, ethics play a defining role, particularly as security professionals are entrusted with sensitive data and powerful tools. Through misuse or negligence, these secrets and tools could cause substantial harm. Adherence to a strong ethical code serves as a psychological anchor, helping cyber pros to navigate the moral complexities of their work while prioritizing user privacy and security.

In a nutshell, working as a cybersecurity professional is one of the hardest jobs on earth.

Build your cybersecurity skills

Building a psychologically aware cybersecurity strategy

A truly effective cybersecurity strategy doesn’t just block attacks; it anticipates and adapts to human behavior. Therefore, aligning security measures with natural human tendencies can elevate an organization’s defenses significantly. This works better than relying on users to remember overly complex protocols.

For instance, training and awareness programs that incorporate psychological insights are far more impactful than traditional “box-ticking” sessions. The principles of Nudge Theory, which employs subtle prompts to influence behavior, offer a potent alternative. Well-designed programs make secure behaviors easy, attractive and timely. This guides employees toward safer practices without the punitive undertones that can breed resentment and resistance.

Creating a culture of psychological safety within an organization can also encourage employees to address security concerns proactively. When people feel safe discussing potential threats and even mistakes, early identification of risks and a collective commitment to security become second nature. This “human firewall” effect, where individuals collectively protect digital assets, strengthens organizational resilience.

Behavioral analytics: The fusion of psychology and technology

User behavior analytics is where technology meets psychology in a powerful way. By analyzing behavioral patterns and detecting deviations, organizations can preemptively identify potential threats. This approach operates on the principle that individuals, even in digital spaces, follow predictable patterns. Behavioral analytics can detect anomalous behaviors — such as a sudden attempt to access restricted files or logins at unusual times — signaling a potential breach.
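As a toy illustration of this principle, the sketch below (a deliberate simplification, not a real user behavior analytics product) builds a per-user baseline of habitual login hours and flags logins that fall outside it:

```python
from collections import defaultdict

def _hour_distance(a: int, b: int) -> int:
    """Circular distance between two hours on a 24-hour clock."""
    d = abs(a - b) % 24
    return min(d, 24 - d)

class LoginBaseline:
    """Per-user baseline of habitual login hours; flags deviations."""

    def __init__(self) -> None:
        self.usual_hours = defaultdict(set)

    def record(self, user: str, hour: int) -> None:
        self.usual_hours[user].add(hour)

    def is_anomalous(self, user: str, hour: int, tolerance: int = 1) -> bool:
        usual = self.usual_hours[user]
        if not usual:
            return False  # no baseline yet, nothing to compare against
        return all(_hour_distance(hour, h) > tolerance for h in usual)
```

In practice, such signals feed a risk-scoring pipeline rather than a binary alert, which is where the human insight described above comes back in.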

This combination of psychology and technology allows for dynamic, adaptive security measures that can catch threats early, often before they escalate into full-fledged incidents. By weaving human insight into the fabric of digital security, behavioral analytics represents a major step forward in cybersecurity defenses.

Rethinking the rhetoric of cybersecurity

The cybersecurity industry has long relied on fear-driven messaging to encourage secure behavior. However, experts argue that this approach, while effective in the short term, may actually discourage engagement in the long run. By using dramatic language to describe threats, the industry may be creating a sense of helplessness among the general public. Portraying cybersecurity as a field too complex and overwhelming for ordinary individuals to understand sets them up for failure.

Instead, fostering a sense of civic responsibility can empower anyone to participate in cybersecurity efforts. When people understand that their actions contribute to a safer online community, they’re more likely to engage in secure practices. Reframing cybersecurity as a shared responsibility rather than a source of fear can transform public engagement with online security.

Bridging technology and psychology for a secure future

Today, cybersecurity is no longer solely a technical issue — it is a fundamentally human one. Security strategies must weave technology and psychology together to create a comprehensive defense that accounts for both system vulnerabilities and human behavior. Cyber criminals leverage psychological tactics to manipulate individuals. A deeper understanding of this will make security stronger. Meanwhile, cybersecurity professionals rely on their mental resilience, creativity and ethical fortitude to counter these threats.

From training programs based on psychological principles to implementing behavioral analytics, incorporating human insights into cybersecurity strategies leads to a more adaptive and robust defense. By embracing psychology alongside technological advancements, we can transform cybersecurity from a reactive discipline into a proactive, resilient force.

The post Hacking the mind: Why psychology matters to cybersecurity appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Stress-testing multimodal AI applications is a new frontier for red teams Charles Owen-Jackson

Stress-testing multimodal AI applications is a new frontier for red teams

February 5, 2025, 14:00

Human communication is multimodal. We receive information in many different ways, allowing our brains to see the world from various angles and turn these different “modes” of information into a consolidated picture of reality.

We’ve now reached the point where artificial intelligence (AI) can do the same, at least to a degree. Much like our brains, multimodal AI applications process different types — or modalities — of data. For example, OpenAI’s GPT-4o can reason across text, vision and audio, granting it greater contextual awareness and more humanlike interaction.

However, while these applications are clearly valuable in a business environment that’s laser-focused on efficiency and adaptability, their inherent complexity also introduces some unique risks.

According to Ruben Boonen, CNE Capability Development Lead at IBM: “Attacks against multimodal AI systems are mostly about getting them to create malicious outcomes in end-user applications or bypass content moderation systems. Now imagine these systems in a high-risk environment, such as a computer vision model in a self-driving car. If you could fool a car into thinking it shouldn’t stop even though it should, that could be catastrophic.”

Multimodal AI risks: An example in finance

Here’s another possible real-world scenario:

An investment banking firm uses a multimodal AI application to inform its trading decisions, processing both textual and visual data. The system uses a sentiment analysis tool to analyze text data, such as earnings reports, analyst insights and news feeds, to determine how market participants feel about specific financial assets. Then, it conducts a technical analysis of visual data, such as stock charts and trend analysis graphs, to offer insights into stock performance.

An adversary, a fraudulent hedge fund manager, then targets vulnerabilities in the system to manipulate trading decisions. In this case, the attacker launches a data poisoning attack by flooding online news sources with fabricated stories about specific markets and financial assets. Next, they launch an adversarial attack by making pixel-level manipulations — known as perturbations — to stock performance charts that are imperceptible to the human eye but enough to exploit the AI’s visual analysis abilities.

The result? Due to the manipulated input data and false signals, the system recommends buying orders at artificially inflated stock prices. Unaware of the exploit, the company follows the AI’s recommendations, while the attacker, holding shares in the target assets, sells them for an ill-gotten profit.

Getting there before adversaries

Now, let’s imagine that the attack wasn’t really carried out by a fraudulent hedge fund manager but was instead a simulated attack by a red team specialist with the goal of discovering the vulnerability before a real-world adversary could.

By simulating these complex, multifaceted attacks in safe, sandboxed environments, red teams can reveal potential vulnerabilities that traditional security systems are almost certain to miss. This proactive approach is essential for fortifying multimodal AI applications before they end up in a production environment.

According to the IBM Institute for Business Value, 96% of executives agree that the adoption of generative AI will increase the chances of a security breach in their organizations within the next three years. The rapid proliferation of multimodal AI models will only be a force multiplier for that problem, hence the growing importance of AI-specialized red teaming. These specialists can proactively address a unique risk that comes with multimodal AI: cross-modal attacks.

Cross-modal attacks: Manipulating inputs to generate malicious outputs

A cross-modal attack involves inputting malicious data in one modality to produce malicious output in another. These can take the form of data poisoning attacks during the model training and development phase or adversarial attacks, which occur during the inference phase once the model has already been deployed.

“When you have multimodal systems, they’re obviously taking input, and there’s going to be some kind of parser that reads that input. For example, if you upload a PDF file or an image, there’s an image-parsing or OCR library that extracts data from it. However, those types of libraries have had issues,” says Boonen.

Cross-modal data poisoning attacks are arguably the most severe since a major vulnerability could necessitate the entire model being retrained on an updated data set. Generative AI uses encoders to transform input data into embeddings — numerical representations of the data that encode relationships and meanings. Multimodal systems use different encoders for each type of data, such as text, image, audio and video. On top of that, they use multimodal encoders to integrate and align data of different types.

In a cross-modal data poisoning attack, an adversary with access to training data and systems could manipulate input data to make encoders generate malicious embeddings. For example, they might deliberately add incorrect or misleading text captions to images so that the encoder misclassifies them, resulting in an undesirable output. In cases where the correct classification of data is crucial, as it is in AI systems used for medical diagnoses or autonomous vehicles, this can have dire consequences.
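To make the embedding-alignment idea concrete, here is a minimal sketch with hand-made toy vectors (real encoders produce high-dimensional embeddings; the numbers below are illustrative only). A correctly captioned image should sit close to its caption in the shared space; a poisoned caption drags the pairing apart:

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: a stop-sign image, its true caption and a poisoned caption
image_embedding = [0.9, 0.1, 0.0]
true_caption = [0.8, 0.2, 0.1]      # "red octagonal stop sign"
poisoned_caption = [0.0, 0.1, 0.9]  # misleading caption injected by an attacker

aligned = cosine_similarity(image_embedding, true_caption)
poisoned = cosine_similarity(image_embedding, poisoned_caption)
assert aligned > poisoned  # poisoning degrades cross-modal alignment
```

A model trained on enough such poisoned pairs learns the wrong association, which is exactly the misclassification risk described above.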

Red teaming is essential for simulating such scenarios before they can have real-world impact. “Let’s say you have an image classifier in a multimodal AI application,” says Boonen. “There are tools that you can use to generate images and have the classifier give you a score. Now, let’s imagine that a red team targets the scoring mechanism to gradually get it to classify an image incorrectly. For images, we don’t necessarily know how the classifier determines what each element of the image is, so you keep modifying it, such as by adding noise. Eventually, the classifier stops producing accurate results.”
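The score-guided probing Boonen describes can be sketched with a deliberately naive stand-in classifier (everything below is illustrative; real red teams target real model scores, often through adversarial-robustness tooling). The attacker sees only the score, nudges random pixels and keeps the changes that push the score the wrong way:

```python
import random

def toy_classifier(pixels: list[int]) -> float:
    # Stand-in for a real model: score is mean intensity scaled to [0, 1];
    # >= 0.5 is classified as "stop sign".
    return sum(pixels) / (255 * len(pixels))

def score_based_attack(pixels: list[int], budget: int = 100,
                       steps: int = 5000, seed: int = 0) -> list[int]:
    """Black-box attack: darken random pixels, keeping only changes that
    lower the score, while staying within a per-pixel perturbation budget."""
    rng = random.Random(seed)
    adv = list(pixels)
    for _ in range(steps):
        i = rng.randrange(len(adv))
        nudged = max(0, adv[i] - 5)
        if abs(nudged - pixels[i]) > budget:
            continue  # outside the allowed perturbation
        trial = adv.copy()
        trial[i] = nudged
        if toy_classifier(trial) < toy_classifier(adv):
            adv = trial
    return adv

image = [160] * 16                       # classified as "stop sign" (score ~0.63)
adversarial = score_based_attack(image)  # every pixel shifted by at most 100
```

With this toy score, the loop drives the image below the 0.5 decision boundary even though each pixel changes by well under half its range, mirroring the “keep modifying it, such as by adding noise” process described above.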

Vulnerabilities in real-time machine learning models

Many multimodal models have real-time machine learning capabilities, learning continuously from new data, as in the scenario we explored earlier. That scenario is an example of a cross-modal adversarial attack: an adversary bombards an AI application that’s already in production with manipulated data to trick the system into misclassifying inputs. This can, of course, happen unintentionally, too, which is why it’s sometimes said that generative AI is getting “dumber.”

In any case, the result is that models that are trained and/or retrained by bad data inevitably end up degrading over time — a concept known as AI model drift. Multimodal AI systems only exacerbate this problem due to the added risk of inconsistencies between different data types. That’s why red teaming is essential for detecting vulnerabilities in the way different modalities interact with one another, both during the training and inference phases.

Red teams can also detect vulnerabilities in security protocols and how they’re applied across modalities. Different types of data require different security protocols, but they must be aligned to prevent gaps from forming. Consider, for example, an authentication system that lets users verify themselves either with voice or facial recognition. Let’s imagine that the voice verification element lacks sufficient anti-spoofing measures. Chances are, the attacker will target the less secure modality.

Multimodal AI systems used in surveillance and access control systems are also subject to data synchronization risks. Such a system might use video and audio data to detect suspicious activity in real-time by matching lip movements captured on video to a spoken passphrase or name. If an attacker were to tamper with the feeds, resulting in a slight delay between the two, they could mislead the system using pre-recorded video or audio to gain unauthorized access.
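A defensive counterpart to this attack is a simple cross-feed synchronization check. The sketch below (with hypothetical thresholds; a real system would also correlate the signal content, not just timestamps) rejects a verification attempt when corresponding video and audio frame timestamps drift too far apart:

```python
def max_drift_ms(video_ts: list[int], audio_ts: list[int]) -> int:
    """Largest gap between corresponding video/audio frame timestamps (ms)."""
    return max(abs(v - a) for v, a in zip(video_ts, audio_ts))

def feeds_in_sync(video_ts: list[int], audio_ts: list[int],
                  threshold_ms: int = 150) -> bool:
    # Reject the attempt if any matched frame pair has drifted
    # beyond the tolerated lip-sync threshold.
    return max_drift_ms(video_ts, audio_ts) <= threshold_ms

live = list(range(0, 1000, 40))          # 25 fps video timestamps
aligned_audio = [t + 10 for t in live]   # normal capture jitter
replayed_audio = [t + 400 for t in live] # pre-recorded feed, delayed

assert feeds_in_sync(live, aligned_audio)
assert not feeds_in_sync(live, replayed_audio)
```

The 150 ms tolerance here is an assumption chosen for illustration; the right value depends on the capture pipeline’s normal jitter.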

Getting started with multimodal AI red teaming

While it’s admittedly still early days for attacks targeting multimodal AI applications, it always pays to take a proactive stance.

As next-generation AI applications become deeply ingrained in routine business workflows and even security systems themselves, red teaming doesn’t just bring peace of mind — it can uncover vulnerabilities that will almost certainly go unnoticed by conventional, reactive security systems.

Multimodal AI applications present a new frontier for red teaming, and organizations need their expertise to ensure they learn about the vulnerabilities before their adversaries do.

The post Stress-testing multimodal AI applications is a new frontier for red teams appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Cybersecurity awareness: Apple’s cloud-based AI security system Sue Poremba

Cybersecurity awareness: Apple’s cloud-based AI security system

February 5, 2025, 11:00

The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.

Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to: as a ComputerWorld article noted, Apple needed to create a cloud infrastructure on which to run generative AI (gen AI) models that require more processing power than its devices can supply, while also protecting user privacy.

Apple is opening the PCC system to security researchers to “learn more about PCC and perform their own independent verification of our claims,” the company announced. In addition, Apple is also expanding its Apple Security Bounty.

What does this mean for AI security going forward? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to learn what researchers think about PCC and Apple’s approach.

SI: ComputerWorld reported this story, saying that Apple hopes that “the energy of the entire infosec community will combine to help build a moat to protect the future of AI.” What do you think of this move?

Boonen: I read the ComputerWorld article and reviewed Apple’s own statements about their private cloud. I think what Apple has done here is good. I think it goes beyond what other cloud providers do because Apple is providing an insight into some of the internal components they use and are basically telling the security community, you can have a look at this and see if it is secure or not.

It’s also good from the perspective that AI is constantly getting bigger as an industry. Bringing generative AI components into regular consumer devices and getting people to trust AI services with their data is a really good step.

SI: What do you see as the pros of Apple’s approach to securing AI in the cloud?

Boonen: Other cloud providers do provide high-security guarantees for data that’s stored on their cloud. Many businesses, including IBM, trust their corporate data to these cloud providers. But a lot of times, the processes to secure data aren’t visible to their customers; they don’t explain exactly what they do. The biggest difference here is that Apple is providing this transparent environment for users to test that plane.

SI: What are some of the downsides?

Boonen: Currently, the most capable AI models are very big, and that makes them very useful. But when we want AI on consumer devices, there’s a tendency for vendors to ship small models that can’t answer all questions, so the device relies on the larger models in the cloud. That comes with additional risk. But I think it is inevitable that the whole industry will move to that cloud model for AI. Apple is implementing this now because they want to give consumers trust in the AI process.

SI: Apple’s system doesn’t play well with other systems and products. How will Apple’s efforts to secure AI in the cloud benefit other systems?

Boonen: They are providing a design template that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example for other providers to say maybe we should implement something similar and provide similar testing capabilities for our customers. So I don’t think this directly impacts other providers except to push them to be more transparent in their processes.

It’s also important to mention Apple’s Bug Bounty as they invite researchers in to look at their system. Apple has a history of not doing very well with security, and there have been cases in the past where they’ve refused to pay out bounties for issues found by the security community. So I’m not sure they’re doing this entirely in the interest of attracting researchers; it’s also partly about convincing their customers that they are doing things securely.

That being said, having read their design documentation, which is extensive, I think they’re doing a pretty good job in addressing security around AI in the cloud.

The post Cybersecurity awareness: Apple’s cloud-based AI security system appeared first on Security Intelligence.

How AI-driven SOC co-pilots will change security center operations

By Jennifer Gregory | February 4, 2025, 11:00

Have you ever wished you had an assistant at your security operations centers (SOCs) — especially one who never calls in sick, has a bad day or takes a long lunch? Your wish may come true soon. Not surprisingly, AI-driven SOC “co-pilots” are topping the lists for cybersecurity predictions in 2025, which often describe these tools as game-changers.

“AI-driven SOC co-pilots will make a significant impact in 2025, helping security teams prioritize threats and turn overwhelming amounts of data into actionable intelligence,” says Brian Linder, Cybersecurity Evangelist at Check Point. “It’s a game-changer for SOC efficiency.”

What is an AI-driven SOC co-pilot?

AI-driven SOC co-pilots are generative AI tools that use machine learning to help security analysts run and manage the SOC. Common co-pilot tasks include detecting threats, managing incidents, triaging alerts, predicting new attack and breach patterns, and automating responses to threats. Co-pilots may be proprietary tools built by the company for its specific needs or commercially available cybersecurity co-pilots such as Microsoft Security Copilot.

For example, a co-pilot can review alerts and use AI to predict which are most likely to be a high priority. This reduces a common issue in SOCs: false positives. The analysts can then focus on the alerts that are most likely to be a real threat. Because they are not chasing down noncritical alerts, analysts have more time to spend on actual threats and are more likely to be successful in containing the threat.

Co-pilots can take many different forms in a SOC. Analysts can use the co-pilot similarly to how many people use ChatGPT, assigning it a specific task such as incident response. The analyst enters information about a specific incident, and the co-pilot analyzes data to suggest possible causes as well as how the organization should respond to the incident. However, you can also use co-pilots to automate parts of the workflow without human intervention, such as monitoring current firewalls and detecting vulnerabilities.
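The triage flow described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the alert fields, weights and scoring heuristic are all invented for the example:

```python
# Hypothetical sketch of co-pilot-style alert triage: score alerts by
# simple risk heuristics and surface the highest-priority ones first.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int           # 1 (low) to 5 (critical)
    asset_criticality: int  # 1 to 5: value of the affected asset
    anomaly_score: float    # 0.0 to 1.0 from an ML detector

def priority(alert: Alert) -> float:
    # Weighted blend: severity and asset value dominate; the ML score
    # promotes unusual activity. Weights are invented for illustration.
    return (0.4 * alert.severity
            + 0.4 * alert.asset_criticality
            + 0.2 * alert.anomaly_score * 5)

alerts = [
    Alert("firewall", 2, 1, 0.10),
    Alert("edr",      5, 4, 0.90),
    Alert("email",    3, 2, 0.30),
]

# Analysts work the top of the queue first; low scores wait.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{a.source:8s} priority={priority(a):.2f}")
```

A production co-pilot would learn these weights from incident history rather than hard-coding them, but the shape of the task, scoring and ranking alerts so analysts see the riskiest first, is the same.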

Benefits of using AI-driven SOC co-pilots

Businesses that turn to AI-driven co-pilots to help manage their SOC see a wide range of benefits. Common benefits include:

  • Improved productivity: Because it can process a much higher volume of data than even the most efficient cybersecurity analyst, a co-pilot gets significantly more work done in less time. With humans and machines working together, co-pilots are able to more effectively monitor the SOC with fewer human resources.
  • Additional time for cybersecurity professionals to complete high-level tasks: When co-pilots handle manual and repetitive tasks, analysts have more time for higher-level tasks such as strategy and analytics. Analysts are more likely to be fully engaged when their day is filled with more interesting work, which reduces burnout.
  • Fewer errors: Humans make mistakes, especially with manual tasks such as reviewing logs. While AI tools are only as “smart” as the algorithm and the training data used for the algorithm, they are often able to spot patterns that may be undetectable to humans. This reduces errors and prevents issues that can lead to a breach or attack.
  • Quicker response to threats: Whereas humans may not recognize an area of vulnerability or may be slower to respond, a co-pilot uses automation to respond and send a notification immediately. Co-pilots also don’t take bathroom or lunch breaks; they are always “at their desk,” leading to faster response times.
  • Reduced impact of worker shortage and skills gaps: When cybersecurity positions are not filled or the analyst does not have the right skills for the job, the company’s risk increases. AI-driven co-pilots can help reduce open positions by taking on various manual tasks, which means greater coverage by the SOC.

Will AI-driven SOC co-pilots replace humans?

Like many AI tools, co-pilots can take over many manual and repetitive tasks currently done by humans. However, the fear of AI replacing the need for humans in the SOC is not likely to become reality. Setting up co-pilots to operate without human oversight or intervention would likely be a mistake. But businesses that have analysts and co-pilots work together can see a reduction in risk, better responses and higher employee satisfaction.

While co-pilots can be the first line of defense in the SOC, companies should set up gen AI tools so that humans remain the ultimate decision-makers. For example, an analyst may set up an automation with an AI-driven co-pilot to monitor and prioritize alerts based on set criteria. Yet, as threat actors begin using new tactics, the analyst may need to change the criteria to catch the latest threats. Once the co-pilot identifies a high-priority alert, the human can ask the tool to analyze the situation and provide recommended next steps. The analyst then uses human judgment to make the best decisions in the situation and instructs the tool to take the next action, such as shutting down systems or taking the network temporarily offline.
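The workflow in the paragraph above, where the co-pilot proposes and the analyst approves, reduces to a simple approval gate. The sketch below is hypothetical; the action names, the 0.8 threshold and the recommend() stub stand in for a real model:

```python
# Hypothetical human-in-the-loop gate: the co-pilot may auto-handle
# low-risk alerts, but disruptive actions need analyst approval.

DISRUPTIVE_ACTIONS = {"shut_down_system", "take_network_offline"}

def recommend(alert: dict) -> str:
    # Stand-in for the co-pilot's analysis; a real tool would consult
    # an ML model here instead of a precomputed priority score.
    return "take_network_offline" if alert["priority"] >= 0.8 else "log_and_monitor"

def handle(alert: dict, analyst_approves) -> str:
    action = recommend(alert)
    if action in DISRUPTIVE_ACTIONS:
        # The human stays the ultimate decision-maker for high-impact steps.
        return action if analyst_approves(alert, action) else "escalate_for_review"
    return action  # low-risk actions can run unattended

print(handle({"priority": 0.9}, lambda a, act: True))   # approved
print(handle({"priority": 0.9}, lambda a, act: False))  # declined
print(handle({"priority": 0.2}, lambda a, act: True))   # auto-handled
```

The important design choice is that declining an AI recommendation routes to review rather than silently dropping the alert, so human judgment is added, not bypassed.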

Putting AI-driven co-pilots into action in the SOC

When it comes to putting co-pilots in action, consider starting on a small scale with a limited use case. Many organizations use a commercial product to start, leaving open the option to create a proprietary tool in the future. Creating a list of time-consuming tasks in the SOC, especially those that are error-prone or frustrating for analysts, will help you determine which use case to start with. After launching the tool, a single analyst can gather feedback and make changes.

Upon seeing success, your team can begin expanding the use of co-pilots to additional analysts and use cases. By taking a measured approach to using co-pilots and continuously soliciting feedback from the analysts, businesses can create a partnership between analysts and co-pilots that improves human job satisfaction while also keeping the organization more secure.

The post How AI-driven SOC co-pilots will change security center operations appeared first on Security Intelligence.

CISOs drive the intersection between cyber maturity and business continuity

By Jonathan Reed | February 3, 2025, 11:00

The modern corporate landscape is marked by rapid digital change, heightened cybersecurity threats and an evolving regulatory environment. At the nexus of these pressures sits the chief information security officer (CISO), a role that has gained newfound influence and responsibility.

The recent Deloitte Global Future of Cyber Survey underscores this shift, revealing that “being more cyber mature does not make organizations immune to threats; it makes them more resilient when they occur, enabling critical business continuity.” High-cyber-maturity organizations increasingly integrate cybersecurity risk strategies, security practices and trust-building approaches into their business and technology transformations. And it’s all enabled by a cyber-savvy C-suite and influential CISOs.

Let’s explore how cyber maturity enhances resilience, why cyber is now being integrated into broader business budgets and what organizations can do to bolster their business continuity.

The expanding role of CISOs in corporate strategy

Historically, CISOs were typically siloed within the IT department, focusing on technical and operational aspects of cybersecurity. However, as threats have evolved, so has the role of the CISO. According to Deloitte’s report, about one-third of organizations have seen a significant increase in CISO involvement in strategic conversations about business-critical technology decisions. Furthermore, approximately one in five CISOs now report directly to the CEO, marking a shift toward greater business alignment and visibility. This expanded role places CISOs alongside other senior leaders to guide decisions on digital transformation, cloud security, and supply chain resilience.

Emily Mossburg, Deloitte’s global cyber leader, notes that “many boards and C-suites now require or need further knowledge into potential threats, security vulnerabilities, risk scenarios and actions needed for greater resilience.” CISOs are increasingly tasked with not only understanding these complex cyber landscapes but also translating them into language that senior leadership and boards can act upon.

Cybersecurity as an integral business strategy

In high-cyber-maturity organizations, cybersecurity is embedded across operations, facilitating a seamless alignment between risk management and business goals. According to Deloitte, these organizations are more resilient when incidents occur, enabling critical business continuity by preparing for and swiftly responding to cyber threats. This proactive integration is not limited to IT. It extends into every function that touches digital infrastructure — from operations and finance to customer experience and product innovation.

In modern digitally interconnected ecosystems, a cyber incident affecting one partner could impact the entire supply chain. High-cyber-maturity organizations anticipate these risks by establishing protocols and response measures that enable them to recover quickly, ensuring continuity across all critical operations. Companies with lower cyber maturity, on the other hand, face longer recovery times and can suffer more severe impacts on their revenue, brand reputation and operational capabilities.

This integration of cybersecurity into broader strategic goals reflects a more nuanced understanding of cyber resilience. Instead of viewing cybersecurity solely as a cost center, leaders increasingly recognize it as a foundational element of business value and continuity. This understanding translates into better allocation of resources and a more balanced approach to cyber risk management.

Evolving cybersecurity budgets

As cybersecurity gains prominence within business strategy, budget allocations are changing to reflect its importance across multiple areas. Deloitte’s findings indicate that many organizations are beginning to integrate cybersecurity spending with other budgets, such as digital transformation, IT programs and cloud investments. This shift acknowledges the cross-functional impact of cybersecurity, particularly in organizations with complex, interconnected digital ecosystems.

The trend is mirrored by a recent IANS and Artico Search survey, which reported an 8% increase in cybersecurity spending this year, up from 6% in 2023. While modest, this increase suggests that organizations recognize the need for sustained investment in cyber resilience to keep pace with emerging threats, especially as AI and automation reshape the cyber landscape.

Integrating cybersecurity with broader budgets also aligns with the CISO’s role in risk quantification and value communication. Techniques such as the FAIR (Factor Analysis of Information Risk) model allow CISOs to translate cybersecurity risks into financial metrics, making it easier to justify investments and demonstrate ROI to the C-suite.
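As a rough sketch of the quantification FAIR enables (every parameter value below is invented for the example), annualized loss exposure can be estimated by simulating loss event frequency and loss magnitude:

```python
# FAIR-style sketch: Monte Carlo over loss event frequency (LEF) and
# loss magnitude (LM) to estimate annualized loss exposure.
import random
import statistics

random.seed(7)  # reproducible illustration

def annual_loss() -> float:
    # LEF: 1 to 8 loss events per year, most likely 3.
    lef = random.triangular(1, 8, 3)
    # LM: $50k to $2M per event, most likely $250k.
    lm = random.triangular(50_000, 2_000_000, 250_000)
    return lef * lm

samples = [annual_loss() for _ in range(100_000)]
ale = statistics.mean(samples)                 # annualized loss expectancy
p95 = statistics.quantiles(samples, n=20)[-1]  # a "bad year" (95th pct)
print(f"ALE ~ ${ale:,.0f}; 95th percentile ~ ${p95:,.0f}")
```

The resulting dollar figures, an expected annual loss plus a bad-year percentile, are exactly the kind of financial metrics a CISO can put in front of the C-suite to justify investment.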

Navigating regulatory mandates and disclosure requirements

Regulatory mandates are also shaping the evolving role of the CISO and cybersecurity’s integration into corporate strategy. With the U.S. Securities and Exchange Commission (SEC) now requiring companies to disclose material cyber incidents and provide insights into their cyber strategy, CISOs are under pressure to ensure regulatory compliance. This disclosure requirement applies to both U.S.-based and foreign companies trading on U.S. markets, reinforcing cybersecurity’s critical role across global business operations.

The SEC’s regulatory emphasis on transparency has heightened the importance of cybersecurity within boardrooms, leading senior executives to turn to CISOs for guidance on managing risks and compliance. Beyond U.S. markets, regulatory authorities worldwide are implementing frameworks and standards that require companies to report cyber incidents, particularly as ransomware and other cyberattacks have grown more prevalent. In addition to regulatory compliance, the reputation and operational continuity tied to regulatory adherence have pushed CISOs to develop comprehensive cybersecurity strategies that align with overall business goals.

Steps to building a cyber-resilient organization

High-cyber-maturity organizations demonstrate that integrating cybersecurity into business strategy requires more than technical defenses; it demands a multi-dimensional approach encompassing governance, culture and operational resilience. Here are several key areas where organizations can focus to build a cyber-resilient structure:

  1. Leadership and governance: Effective cybersecurity governance starts at the top. Organizations should establish clear reporting structures where CISOs communicate directly with the CEO or board. This positioning emphasizes cybersecurity’s strategic importance and enables informed decision-making at the highest levels.

  2. Risk management practices: Proactive risk management means identifying, assessing and mitigating cyber risks in line with business objectives. High-cyber-maturity organizations use both quantitative and qualitative methods to understand and prioritize risks, creating a structured approach to vulnerability management that could impact operations.

  3. Incident response and recovery: Resilient organizations are not just prepared for incidents; they are equipped to recover swiftly and minimize impact. Robust incident response plans, regularly tested and updated, are essential for ensuring that organizations can maintain continuity even amid significant cyber events. These plans should involve cross-functional teams and clear communication channels to coordinate an efficient response.

  4. Continuous improvement and innovation: Cybersecurity is a dynamic field where continuous improvement is critical. Organizations should prioritize regular evaluations and updates to their cybersecurity measures, allowing them to stay ahead of evolving threats. As AI, automation and other technologies emerge, adopting them to enhance cybersecurity capabilities — such as anomaly detection and automated incident response — can further boost resilience.
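As a minimal sketch of the anomaly detection mentioned in step 4 (the login-failure series is invented), a z-score against a recent baseline is often the starting point:

```python
# Minimal anomaly-detection sketch: flag counts that sit far from a
# recent baseline of daily failed-login totals.
import statistics

failed_logins = [12, 9, 14, 11, 10, 13, 95]   # last value: an incident
baseline = failed_logins[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    # z-score: distance from the baseline mean in standard deviations
    return abs(count - mean) / stdev > threshold

print([c for c in failed_logins if is_anomalous(c)])
```

Real deployments use richer models, but the principle is the same: establish what normal looks like, then alert on statistically improbable deviations.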

CISOs take the lead

In the evolving landscape of cyber threats, the role of the CISO is becoming more integral to organizational resilience and business continuity. High-cyber-maturity organizations are leading the way, integrating cybersecurity into their strategic goals and recognizing that it is not merely an IT function but a business-critical priority. By aligning cybersecurity spending with broader business budgets, they can enhance resilience and drive long-term value.

The post CISOs drive the intersection between cyber maturity and business continuity appeared first on Security Intelligence.

AI decision-making: Where do businesses draw the line?

By Doug Bonderud | January 31, 2025, 11:00

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– IBM Training Manual, 1979

Artificial intelligence (AI) adoption is on the rise. According to the IBM Global AI Adoption Index 2023, 42% of enterprises have actively deployed AI, and 40% are experimenting with the technology. Of those using or exploring AI, 59% have accelerated their investments and rollouts over the past two years. The result is an uptick in AI decision-making that leverages intelligent tools to arrive at (supposedly) accurate answers.

Rapid adoption, however, raises a question: Who’s responsible if AI makes a poor choice? Does the fault lie with IT teams? Executives? AI model builders? Device manufacturers?

In this piece, we’ll explore the evolving world of AI and reexamine the quote above in the context of current use cases: Do companies still need a human in the loop, or can AI make the call?

Getting it right: Where AI is improving business outcomes

Guy Pearce, principal consultant at DEGI and member of the ISACA working trends group, has been involved with AI for more than three decades. “First, it was symbolic,” he says, “and now it’s statistical. It’s algorithms and models that allow data processing and improve business performance over time.”

Data from IBM’s recent AI in Action report shows the impact of this shift. Two-thirds of leaders say that AI has driven more than a 25% improvement in revenue growth rates, and 72% say that the C-suite is fully aligned with IT leadership about what comes next on the path to AI maturity.

With confidence in AI growing, enterprises are implementing intelligent tools to improve business outcomes. For example, wealth management firm Consult Venture Partners deployed AIda AI, a conversational digital AI concierge that uses IBM watsonx assistant technology to answer potential clients’ questions without the need for human agents.

The results speak for themselves: AIda AI answered 92% of queries correctly, 47% of queries led to webinar registrations and 39% of inquiries turned into leads.

Missing the mark: What happens if AI makes mistakes?

92% is an impressive achievement for AIda AI. The caveat? It was still wrong 8% of the time. So, what happens when AI makes mistakes?

For Pearce, it depends on the stakes.

He uses the example of a financial firm leveraging AI to evaluate credit scores and issue loans. The outcomes of these decisions are relatively low stakes. In the best-case scenario, AI approves loans that are paid back on time and in full. In the worst case, borrowers default, and companies need to pursue legal action. While inconvenient, the negative outcomes are far outweighed by the potential positives.

“When it comes to high stakes,” says Pearce, “look at the medical industry. Let’s say we use AI to address the problem of wait times. Do we have sufficient data to ensure patients are seen in the right order? What if we get it wrong? The outcome could be death.”

As a result, how AI is used in decision-making depends largely on what it is deciding and how those decisions affect both the company making them and the people they impact.

In some cases, even the worst-case scenario is a minor inconvenience. In others, the results could cause significant harm. 

Taking the blame: Who’s accountable if AI gets it wrong?

In April 2024, a Tesla operating in “full self-driving” mode struck and killed a motorcyclist. The driver of the vehicle admitted to looking at their phone prior to the crash despite active driver supervision being required.

So who takes the blame? The driver is the obvious choice and was arrested on charges of vehicular homicide.

But this isn’t the only path to accountability. There’s also a case to be made in which Tesla bears some responsibility since the company’s AI algorithm failed to spot the victim. Blame could also be placed on governing bodies such as the National Highway Traffic Safety Administration (NHTSA). Perhaps their testing wasn’t rigorous or complete enough.

One could even argue that the creator(s) of Tesla’s AI could be held liable for letting code that could kill someone go live.

This is the paradox of AI decision-making: Is someone at fault, or is everyone at fault? “If you bring all the stakeholders together who should be accountable, where does that accountability lie?” asks Pearce. “With the C-suite? With the whole team? If you have accountability that’s spread over the entire organization, everyone can’t end up in jail. Ultimately, shared accountability often leads to no accountability.”

Drawing the line: Where does AI end?

So, where do organizations draw the line? Where does AI insight give way to human decision-making?

Three considerations are key: Ethics, risk and trust.

“When it comes to ethical dilemmas,” says Pearce, “AI can’t do it.” This is because intelligent tools naturally seek the most efficient path, not the most ethical. As a result, any decision involving ethical questions or concerns should include human oversight.

Risk, meanwhile, is an AI specialty. “AI is good in risk,” Pearce says. “What statistical models do is give you something called a standard error, which lets you know if what AI is recommending has a high or low potential variability.” This makes AI great for risk-based decisions like those in finance or insurance.
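Pearce's point can be made concrete with a toy calculation: an estimate plus its standard error tells a decision-maker how much the model's recommendation might vary. The loan data below is invented:

```python
# Toy illustration of standard error in a risk estimate: from a batch
# of historical loan outcomes, estimate the default rate and how much
# that estimate could vary.
import math

outcomes = [0] * 940 + [1] * 60    # 1 = default: 6% observed rate
n = len(outcomes)
p = sum(outcomes) / n              # point estimate
se = math.sqrt(p * (1 - p) / n)    # standard error of a proportion

# A wide interval signals high variability and calls for human review.
low, high = p - 1.96 * se, p + 1.96 * se   # ~95% confidence interval
print(f"default rate = {p:.3f} +/- {1.96 * se:.3f} "
      f"(95% CI {low:.3f} to {high:.3f})")
```

A narrow interval supports automated approval; a wide one is the statistical signal to hand the decision back to a human.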

Finally, enterprises need to prioritize trust. “There are declining levels of trust in institutions,” says Pearce. “Many citizens don’t feel confident that the data they share is being used in a trustworthy manner.”

For example, under GDPR, companies need to be transparent about data collection and handling and give citizens a chance to opt out. To bolster trust in AI use, organizations should clearly communicate how and why they’re using AI and (where possible) allow customers and clients to opt out of AI-driven processes.

Decisions, decisions

Should AI be used for management decisions? Maybe. Will it be used to make some of these decisions? Almost certainly. The draw of AI — its ability to capture, correlate and analyze multiple data sets and deliver new insights — makes it a powerful tool for enterprises to streamline operations and reduce costs.

What’s less clear is how the shift to management-level decision-making will impact accountability. According to Pearce, current conditions create “blurry lines” in this area; legislation hasn’t kept pace with increasing AI usage.

To ensure alignment with ethical principles, reduce the risk of wrong choices and engender stakeholder and customer trust, businesses are best served by keeping humans in the loop. Maybe this means direct approval from staff is required before AI can act. Maybe it means the occasional review and evaluation of AI decision-making outcomes.

Whatever approach enterprises choose, however, the core message remains the same: When it comes to AI-driven decisions, there’s no hard-and-fast line. It’s a moving target, one defined by possible risk, potential reward and probable outcomes.

The post AI decision-making: Where do businesses draw the line? appeared first on Security Intelligence.

When ransomware kills: Attacks on healthcare facilities

By Jonathan Reed | January 30, 2025, 11:00

As ransomware attacks continue to escalate, their toll is often measured in data loss and financial strain. But what about the loss of human life? Nowhere is the ransomware threat more acute than in the healthcare sector, where patients’ lives are literally on the line.

Since 2015, there has been a staggering increase in ransomware attacks on healthcare facilities. And the impacts are severe: Diverted emergency services, delayed critical treatments and even fatalities. Meanwhile, the pledge some ransomware groups made during the COVID-19 pandemic to avoid attacking healthcare providers has been abandoned. It’s clear that hospitals are now fair game.

Ransomware attacks on the healthcare sector cause real harm to patients, impacting survival rates and threatening other critical services. And ransomware targeting other critical infrastructure carries serious implications for public health and safety.

Ransomware in life-and-death situations

Hospitals depend heavily on digital systems for managing patient care. When a ransomware attack strikes, these systems go offline, with often tragic results. Research highlights the risks: There’s been a 300% increase in ransomware attacks on healthcare since 2015. This led to a spike in emergency cases, including strokes and cardiac arrests, at hospitals overwhelmed by patients diverted from facilities hit by cyberattacks.

A study by the University of California San Diego showed that ransomware attacks on hospitals cause a spillover effect. This means neighboring hospitals see a surge in patients, leading to cardiac arrest cases jumping 81%. Survival rates also dropped for those cardiac arrest cases.

One recent example is the ransomware attack on Synnovis, a pathology services provider to the NHS in London. The attack caused problems with blood tests and transfusions, delaying crucial cancer treatments and elective procedures across several hospitals. This disruption illustrates a common trend in healthcare-related ransomware incidents: Delayed testing and procedures can become life-threatening as time-sensitive treatments are postponed or missed altogether.

In another study of two urban emergency departments adjacent to a healthcare organization under attack, researchers noted significant increases in patient volume, longer waiting times and increases in patient “left without being seen” rates. These delays, according to the study, underscore the need for a disaster response approach for such incidents.

In some cases, the tragic consequences of ransomware in healthcare have been documented in legal proceedings. In 2020, a woman sued an Alabama hospital, claiming that a ransomware attack had contributed to the death of her newborn daughter. The hospital’s computer systems were offline during delivery, preventing access to critical monitoring tools and allegedly leading to severe birth complications. While the case has been settled, it raises the question of whether similar events may have occurred without public awareness.

Ransomware impacts beyond healthcare

While the healthcare sector’s vulnerability to ransomware is uniquely tragic, critical infrastructure sectors are also facing increased risks. When Colonial Pipeline, a major fuel distributor, was hit by ransomware in 2021, it led to fuel shortages across the Eastern U.S. Though no direct fatalities were reported, the panic that ensued may have resulted in at least one fatal car accident as people rushed to stockpile fuel.

In critical infrastructure sectors, the potential for loss of life or injury is significant. Attacks on power grids, water supplies and transportation systems could have severe consequences. Researchers warn that a ransomware attack on an energy grid, for example, could disrupt power to hospitals, emergency services and vulnerable populations, putting lives at risk. If the healthcare industry can serve as a lesson, the fallout from critical infrastructure attacks is not a hypothetical but a looming possibility.

How ransomware threats exploit vulnerabilities in healthcare

Healthcare facilities are attractive targets for ransomware for several reasons. First, they hold a wealth of sensitive patient data, including medical histories, personal information and financial details. The cost of downtime in healthcare is especially high. When health centers are crippled by ransomware, people’s lives are at stake, making hospitals more likely to pay a ransom quickly. Healthcare ransomware incidents result in an average payment of $4.4 million, according to recent studies from the second quarter of 2024.

Additionally, healthcare facilities often use complex and outdated infrastructure, relying on an assortment of vendors and legacy systems that can be difficult to secure. A lack of centralized cybersecurity across networks further increases vulnerabilities, allowing ransomware groups to infiltrate systems and cause cascading disruptions.

Evidence of ransomware’s lethal potential

Although establishing a direct causal link between ransomware attacks and fatalities can be complicated, recent data provides compelling insights. One analysis estimates that from 2016 to 2021, between 42 and 67 Medicare patients died as a result of ransomware attacks. And this doesn’t include private insurer data. Research also highlights the broader health impacts, including reduced care quality and delayed treatments. During cyber incidents, hospitals often resort to manual processes that lack the safety checks and efficiency of electronic health records, increasing the risk of error and missed diagnoses.

The problem isn’t limited to fatalities. Ransomware-induced delays can exacerbate health issues, resulting in long-term complications and higher healthcare costs. A delayed diagnosis can mean the difference between life and death for conditions like heart disease, stroke and sepsis. Ransomware attacks may, therefore, lead to excess deaths, even if the connection is indirect.

The need for resilience against ransomware attacks

To mitigate the impact of ransomware on patient care, some hospitals have begun implementing ransomware response protocols, such as Children’s National Hospital’s “Code Dark” procedures. These response protocols are designed to maintain continuity of care when systems are down, including clear instructions for manual record-keeping, communication protocols and patient triage. Yet, these steps can only go so far. True resilience requires proactive measures like employee training, layered security controls and frequent system backups to minimize disruption.

As ransomware attacks grow more sophisticated, many in the cybersecurity industry argue for policy changes to address the threat. One critical need is better data sharing among healthcare facilities, cybersecurity experts and government agencies to track trends and respond quickly. Governments also need to classify healthcare cybersecurity as a matter of national security, allocating resources and support to help facilities improve resilience against ransomware and other cyber threats.

Addressing the growing ransomware threat

The threats to the healthcare sector provide a stark reminder of the broader risks ransomware poses to society. While healthcare providers are uniquely vulnerable, other critical infrastructure sectors are increasingly at risk. As demonstrated by the Colonial Pipeline incident, the ripple effects of ransomware can be felt across entire regions, affecting services as fundamental as fuel, water and transportation.

For cybersecurity professionals, the rise in ransomware attacks on critical services calls for a proactive approach to defense. This includes advocating for stronger industry standards, encouraging the use of robust cybersecurity tools and supporting cross-sector collaboration to prepare for and respond to attacks. The goal is clear: To minimize the risk that ransomware claims lives, either directly or through delayed access to essential services.

The post When ransomware kills: Attacks on healthcare facilities appeared first on Security Intelligence.

By Charles Owen-Jackson

AI and cloud vulnerabilities aren’t the only threats facing CISOs today

January 29, 2025, 11:00

With cloud infrastructure and, more recently, artificial intelligence (AI) systems becoming prime targets for attackers, security leaders are laser-focused on defending these high-profile areas. They’re right to do so, too, as cyber criminals turn to new and emerging technologies to launch and scale ever more sophisticated attacks.

However, this heightened attention to emerging threats makes it easy to overlook traditional attack vectors, such as human-driven social engineering and vulnerabilities in physical security.

As adversaries exploit an ever-wider range of potential entry points — both new and old — security leaders must strike a balance to ensure that they’re capable of addressing all risks effectively.

Cyber crime is still a human problem

Despite overwhelming hype, technology is not a panacea. It can’t replace human expertise in every domain, and AI alone can’t match the innately human qualities of intuition and creative thinking. Adversaries know this too, which is why the smarter — and much more dangerous — ones use a blend of human- and technology-powered tactics.

While major technical vulnerabilities tend to make the headlines, the reality is that the weakest link is almost always the human element. Almost all attacks involve a social engineering element, and despite the buzz around generative AI and deepfakes helping scale such attacks, it’s human-to-human interaction where the greatest risks lie.

Synthetic content is now all around us, and people are getting better at telling it apart from the real thing. Whether we reach a point where that's no longer true is a topic for another discussion. But for now, the most dangerous and effective social engineering attacks still depend primarily on human conversations, whether by phone, email or even in person. After all, a seasoned attacker can build trust and forge sham relationships in a way that no AI or deepfake can match.

Cyber espionage remains a serious threat

Take state-sponsored cyber espionage, for example. Highly trained social engineers are a far cry from the typical rabble of independent cyber crime rackets operating off the dark web, who tend to rely more on scale than targeting specific enterprises and individuals. These attackers may target data systems, but when it comes to their own arsenals, their talents in manipulation and deception are by far their greatest weapons.

Technology still has a long way to go before it can come close to matching the age-old tactics of spycraft.

When facing an attacker who can pose effectively as an internal employee or any other trusted individual, an organization relying solely on technology to mitigate the threat stands little chance of protecting itself. That isn't a technology failure. It's a process failure, which is why the human element must always be a key factor in any cybersecurity strategy.

Of course, that’s not to say technology doesn’t have a vital role to play in bolstering your cyber defenses. It most certainly does, not least because more and more routine threats are being automated or carried out en masse by attackers who are less skilled or experienced. The value of technology — especially AI-powered cybersecurity automation — lies primarily in its ability to free up time for security leaders to focus on the threats that technology alone can’t solve.

Explore cybersecurity services

It’s not all about the cloud, either

The majority of business data is now stored in the cloud, and the percentage continues to rise. Many businesses, especially smaller organizations and startups, exclusively use the cloud for data storage and other IT operations. The rise of AI, given how computationally demanding it is, is further accelerating cloud adoption.

Nonetheless, cloud computing isn’t the best option in all situations. On-premises remains the preferred choice for high-performance workloads that require extremely low latencies. In some cases, on-premises computing is also the cheaper option, and that’s unlikely to change in the near future.

Even though more companies are migrating to the cloud, that doesn’t mean they don’t keep sensitive data on-site. For instance, edge computing, which brings data processing closer to where it’s needed, has become a critical enabler in certain use cases, including smart energy grids, remote monitoring of industrial assets and autonomous vehicles. These are often cases where internet connectivity can’t be relied on.

The smarter and better-funded adversaries aren’t just targeting cloud-hosted infrastructure. They’re also setting their sights on local servers and cyber-physical systems, such as industrial control systems and hardware supply chains. The fact that there’s often minimal collaboration between logistics, production and cybersecurity departments makes these risks all the more serious.

Ransomware remains one of the biggest threats targeting on-premises systems despite the small reduction in attacks over the last year. While cloud systems aren’t inherently immune from ransomware attacks, the vast majority target bare-metal hypervisors and local servers. In one recent case, the Akira ransomware group reverted to its earlier double extortion tactics, experimenting with different code frameworks to target systems running ESXi and Linux.

Botnets are another growing concern as the number of IoT devices continues to soar. Used to launch distributed denial of service (DDoS) attacks spanning thousands of devices, these botnets primarily target unsecured IoT devices, like those that monitor and operate industrial machines and critical infrastructure. One recent report discovered that DDoS attacks against critical infrastructure have increased by 55% in the last four years. These attacks don’t directly involve the exfiltration of sensitive data, but given how they can cause widespread disruption, adversaries may rely on them to draw attention away from more serious threats.

Why physical security is still relevant

As security leaders focus on locking down their cloud-hosted assets, they cannot afford to lose sight of the risks facing their physical infrastructure. Sometimes, the easiest way into the cloud is from within.

Even thin clients and dumb terminals — both widely used in high-security environments like healthcare and finance — can potentially give attackers a foothold in wider systems, including cloud infrastructure and remote data centers. Edward Snowden proved that while working at the National Security Agency when he exfiltrated 20,000 government documents stored on the servers in NSA’s headquarters 5,000 miles away. He did so without using any advanced technology. While that happened way back in 2013, and the NSA has long since updated its physical security protocols, the risk is just as relevant today as it was then.

While most thin clients are now protected by multiple layers of security, including encryption and multifactor authentication, these solutions alone can’t fully protect against physical compromise. If an attacker gains access to a terminal — perhaps by way of social engineering — they may be able to compromise it using unauthorized peripherals or by directly manipulating the device’s firmware. This could give them access to the wider network, potentially allowing for the injection of customized malware that goes undetected by regular security scans.

IoT devices are another leading reason behind the expansion of attack surfaces. They often lack adequate security, also giving attackers a potential entry point into the broader computing infrastructures they’re connected to. The fact that these connected technologies are being rolled out en masse in areas like smart cities, critical infrastructure and transportation networks, greatly magnifies such vulnerabilities.

Ultimately, if an attacker is able to get past your physical safeguards, then these connected systems present far easier pathways to an organization’s so-called “crown jewels” than trying to break through multi-layered cloud defenses.

Cloud data is not always the true target

In other cases, data hosted in the cloud might not be the attacker’s end goal. Many companies, such as those subject to stringent data residency regulations or that require high performance for real-time applications, still store their data on on-premises servers.

Some of these systems are air-gapped, meaning they’re entirely disconnected from any other networks, including the Internet itself. While more secure than any cloud-hosted server, at least in theory, their security can’t be taken for granted. For instance, anyone with physical access to the servers may be able to compromise them, either maliciously or accidentally.

Physical security, such as CCTV and biometric security checkpoints, is as important as ever in such cases. But it’s not just about protecting against intentional physical tampering. Indirect attacks orchestrated by highly skilled social engineers can also dupe unsuspecting employees into taking a desired action — such as lending them a biometric security access card.

These are not the sort of adversaries that usually work by email or use AI to scale their attacks; they’re far likelier to deceive someone in person, a tactic as old as humanity itself. In fact, the attacker could be anyone, such as a disgruntled former employee, a hacker operating in the interests of a rival company or even a rogue state.

Bridging the gap between digital and human security

Technology alone can’t protect an organization from the myriad threats out there, and neither can humans keep up with ever-expanding system logs and security information feeds if they’re relying solely on manual processes.

The reality is that you need both, starting with people and using technology to broaden their capabilities. A layered security strategy should typically start with locking down physical access to any data-bearing system or system that is connected to another.

The next layer of defense is the human one. This revolves heavily around security awareness training. But the reality is that many programs are ineffective, either because they lack practical application, are overly reliant on generic content or focus too much on technical factors that are beyond the target audience’s understanding.

Phishing simulations are often similarly limited in their scope, focusing on common lures like trending news topics, a sense of urgency or even outright threats. However, more sophisticated attackers tend to use subtler ways to elicit a response. This could be something as simple as sending messages about a routine policy update regarding company dress code or remote work guidelines. These topics might seem trivial, but they can pique interest, especially when they concern changes to daily routines and work-life balance. Attackers could then use this to dupe unsuspecting victims into divulging sensitive information via a sham survey.

Like any other security measure, physical systems and awareness training will only ever be effective if they’re tested regularly. That’s where physical red teaming comes in. Whereas red teaming in the context of IT focuses on technical measures like penetration testing, physical red teaming is all about having teams try to gain entry to restricted areas and systems. To do so, they might use a blend of simulated social engineering attacks and technology to hack into physical security systems. By attempting to bypass physical security barriers or impersonate staff, red teams can reveal gaps that might otherwise go unnoticed. That’s what makes them a valuable part of any comprehensive information security program.

The post AI and cloud vulnerabilities aren’t the only threats facing CISOs today appeared first on Security Intelligence.

By Sue Poremba

4 trends in software supply chain security

January 28, 2025, 11:00

Some of the biggest and most infamous cyberattacks of the past decade were caused by a security breakdown in the software supply chain. SolarWinds was probably the most well-known, but it was not alone. Incidents against companies like Equifax and tools like MOVEit also wreaked havoc for organizations and customers whose sensitive information was compromised.

Expect to see more software supply chain attacks moving forward. According to ReversingLabs’ The State of Software Supply Chain Security 2024 study, attacks against the software supply chain are getting easier and more ubiquitous.

“For example, Operation Brainleeches, identified by ReversingLabs in July, showed elements of software supply chain attacks supporting commodity phishing attacks that use malicious email attachments to harvest Microsoft.com logins,” the report stated.

It is easier to conduct software supply chain attacks, so they are increasing at an alarming rate. The ReversingLabs report saw a 1,300% increase in threats coming from open-source package repositories last year. That’s the bad news.

The good news is that cybersecurity teams and government entities recognize the risks coming from the software supply chain, and there is a lot of action toward defending against these attacks and steps to solidify security before the software is released into the wild.

Who controls the software?

Who controls the software and who controls the device are the game-changers in software supply chain security, according to Xin Qiu, Senior Director of Security Product Marketing and Management at CommScope. That control comes down to the developers and system engineers creating the software and setting up the systems. The problem is that there is little integration within an organization to enable effective control.

Companies have a lot of tools, but they are scattered around, says Qiu. Everyone is siloed, doing things in different ways. That approach has to change.

It is the federal government that is taking the lead in tackling software supply chain security with technical regulations and laws.

“To improve your software supply chain security, you need to have a common standard,” says Qiu. “I think this is a good way to fill those gaps.”

The most recognizable action taken by government entities was the Executive Order (EO) from the Biden administration, which addresses the nation’s cybersecurity but especially emphasizes protecting the software supply chain. In conjunction with that EO, a cross-sector group representing different government agencies, the Enduring Security Framework (ESF) Software Supply Chain Working Panel, put together a comprehensive guide of recommended practices for securing the software supply chain for developers. NIST also has a framework to secure the software supply chain.

4 security solution trends for the software supply chain

But government guidelines and regulations only go so far, and it is up to organizations to better equip themselves with the tools, solutions and processes that allow developers, engineers and security and IT teams to address risks within the software supply chain. There are a number of ideas and tools out there, some initiated by the government, that are trending in the battle against vulnerabilities and threats.

1. Secure by design

At RSAC 2024, CISA Director Jen Easterly and a group of cybersecurity professionals held a panel on CISA’s Secure by Design initiative. The idea is to build security into products and make it a business feature and core technical requirement rather than treating security as an afterthought. “During the design phase of a product’s development lifecycle, companies should implement Secure by Design principles to significantly decrease the number of exploitable flaws before introducing them to the market for widespread use or consumption,” the initiative’s website states.

Part of the presentation was the introduction of the initial group of businesses that took the Secure by Design pledge. According to CISA, “By participating in the pledge, software manufacturers are pledging to make a good-faith effort to work towards the goals listed below over the following year.” The pledge includes a list of goals for developers and organizations to work toward. These goals include standards around MFA, reducing default passwords and better transparency around vulnerability disclosure and reporting. More than 200 organizations have taken the pledge so far.

Learn how cybersecurity shapes supply chain resilience

2. Software bill of materials (SBOMs)

SBOMs are a nested inventory of all the components that make up a software application. The components can include open source, third parties, patch status and licenses. SBOMs have become a key part of the software supply chain security structure and are endorsed by CISA as a way for developers to build a community that works together to share ideas and experiences around operationalization, scaling, technologies, new tools and use cases. To encourage SBOM use and understanding, CISA facilitates regular meetings from those across the software development and design community and also offers a resource library.

SBOMs can help an organization identify risks, especially in third-party and proprietary software packages; track vulnerabilities within the different components; ensure compliance; and help the team make better security decisions by being more aware of the component parts of their software.
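As a rough illustration of how an SBOM supports vulnerability tracking, here is a minimal sketch that cross-references components from a CycloneDX-style SBOM against an advisory feed. The component names, versions and advisory entries are hypothetical, and the SBOM is simplified to just the fields needed here:

```python
# Cross-reference SBOM components against known-vulnerable versions.
# The structure loosely follows the CycloneDX "components" layout;
# the advisory data below is illustrative only.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "requests", "version": "2.31.0"},
    ],
}

# Hypothetical advisory feed: component name -> vulnerable versions
advisories = {
    "log4j-core": {"2.14.0", "2.14.1"},
    "openssl": {"1.0.2"},
}

def flag_vulnerable(sbom: dict, advisories: dict) -> list[tuple[str, str]]:
    """Return (name, version) pairs that appear in the advisory feed."""
    flagged = []
    for comp in sbom.get("components", []):
        if comp["version"] in advisories.get(comp["name"], set()):
            flagged.append((comp["name"], comp["version"]))
    return flagged

print(flag_vulnerable(sbom, advisories))  # → [('log4j-core', '2.14.1')]
```

In practice, the advisory lookup would query a vulnerability database rather than a hard-coded dictionary, but the value of the SBOM is the same: without the component inventory, there is nothing to match against.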

3. Supply-chain levels for software artifacts (SLSA) frameworks

SLSA is a security framework to safeguard the integrity of software artifacts. It is a checklist of standards to better improve the integrity of the software, prevent tampering and exploitation and keep the infrastructure and application packages secure. The framework was based on Google’s production workloads and offers a structured approach to evaluating the security posture of software components throughout the supply chain.
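To make the checklist idea concrete, here is a hedged sketch of the kind of provenance check a SLSA-aware pipeline might run before accepting a build artifact. The field names loosely mirror SLSA provenance attestations, and the trusted builder ID is an assumption, not a real value:

```python
# Verify a simplified SLSA-style provenance statement before accepting
# a build artifact. The trusted builder ID is hypothetical.

TRUSTED_BUILDERS = {"https://builder.example.com/trusted-ci"}  # assumption

def check_provenance(provenance: dict) -> list[str]:
    """Return a list of policy violations (empty list means acceptable)."""
    problems = []
    builder = provenance.get("builder", {}).get("id")
    if builder not in TRUSTED_BUILDERS:
        problems.append(f"untrusted builder: {builder!r}")
    if not provenance.get("materials"):
        problems.append("no source materials recorded")
    if not provenance.get("subject"):
        problems.append("no artifact digest (subject) recorded")
    return problems

good = {
    "builder": {"id": "https://builder.example.com/trusted-ci"},
    "materials": [{"uri": "git+https://example.com/repo"}],
    "subject": [{"name": "app.tar.gz", "digest": {"sha256": "deadbeef"}}],
}
print(check_provenance(good))  # → []
```

A real implementation would also verify the attestation's cryptographic signature; the sketch only shows the policy-evaluation step.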

4. Governance, risk and compliance (GRC) management

GRC management is used to mitigate security risks within a software development supply chain while ensuring the software meets required regulatory compliances and security standards. Some of the areas that GRC monitors include:

  • Identifying risks across the entire software supply chain
  • Vendor risk management and assessment of third-party security posture before integrating the software into your organization’s system
  • Compliance management to meet industry and government standards
  • Policy enforcement across the development lifecycle
  • Incident response after a cyber incident caused by the software supply chain

GRC management tools can also be used with SBOM analysis.
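As a simple illustration of the vendor risk assessment point above, a GRC workflow might roll a few third-party signals into a single risk score before a package is integrated. The weights, fields and threshold here are hypothetical policy choices, not a standard scoring model:

```python
# Toy vendor risk score combining a few GRC signals before onboarding
# a third-party software package. Weights and fields are illustrative.

def vendor_risk(vendor: dict) -> int:
    """Higher score = higher risk; the threshold is a policy decision."""
    score = 0
    if not vendor["provides_sbom"]:
        score += 2                      # no component transparency
    if vendor["breach_in_last_24_months"]:
        score += 4                      # recent incident history
    score += 3 * vendor["unpatched_critical_cves"]
    return score

acme = {
    "provides_sbom": True,
    "breach_in_last_24_months": False,
    "unpatched_critical_cves": 1,
}
print(vendor_risk(acme))  # → 3
```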

The evolving puzzle of software supply chain security

This is just a sample of the tools and solutions used to protect the software supply chain from risk. As security is more consciously built into the software and developers and engineers share information in communities rather than working in silos, there is a fighting chance of slowing the threats against the software supply chain.

The post 4 trends in software supply chain security appeared first on Security Intelligence.

By Jennifer Gregory

How secure are green data centers? Consider these 5 trends

January 27, 2025, 11:00

As organizations increasingly measure environmental impact towards their sustainability goals, many are focusing on their data centers.

KPMG found that the majority of the top 100 companies measure and report on their sustainability efforts. Because data centers consume a large amount of energy, Gartner predicts that by 2027, three in four organizations will have implemented a data center sustainability program, which often includes implementing a green data center.

“Responsibilities for sustainability are increasingly being passed down from CIOs to infrastructure and operations (I&O) leaders to improve IT’s environmental performance, particularly around data centers,” said Autumn Stanish, Senior Principal Analyst at Gartner. “This has led many down the path of greater spending and investment in environmental solutions, but environmental impact shouldn’t be the only focus. Sustainability can also have a significant positive impact on non-environmental factors, such as brand, innovation, resilience and attracting talent.”

Organizations increasingly building green data centers

The International Energy Agency (IEA) found that data centers account for 1 to 1.5 percent of global electricity consumption. Reducing energy consumption is often a top priority when designing and building a green data center. Because AI uses more computing power than traditional methods, data centers are consuming ever more energy, a trend that is only predicted to accelerate as use cases for AI continue to expand.

The term green data center does not refer to a single technology, but instead a strategic approach designed to more efficiently use resources that starts at the very beginning of the process. Every decision regarding processes, environment and technology is made with sustainability as a top priority. For example, green data centers often use a smaller physical space and typically use low-emission materials in construction.

However, green data centers add new cybersecurity risks as well as increase known risks. Organizations must keep cybersecurity at the center of each green data center decision.

Here are five green data center trends to consider in terms of cybersecurity when designing and implementing a green data center.

1. Advanced cooling technologies

Many green data centers reduce their reliance on traditional air conditioning by using advanced cooling techniques, liquid cooling or precision cooling. These techniques often use IoT devices for monitoring temperatures and energy use. However, IoT devices can provide entry points for cyber criminals to access the network and all connected systems. Additionally, IoT devices expand the potential attack surface area.

By proactively taking steps to secure each IoT device, organizations can use advanced cooling techniques without significantly increasing their risk. As part of the installation process for each IoT device, administrators should replace the preinstalled passwords with complex ones. Many organizations also place IoT devices on a local virtual private network (VPN) to limit access to other systems in case of a cybersecurity incident.
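These hardening steps lend themselves to a periodic inventory audit. The sketch below flags cooling-system IoT devices that still use a factory default credential or that sit outside an isolated network segment; the device records, default-password list and segment name are all hypothetical:

```python
# Flag IoT devices still using a vendor default password or not isolated
# on a dedicated segment. All device data here is illustrative.

DEFAULT_PASSWORDS = {"admin", "password", "12345", ""}  # common factory defaults

devices = [
    {"id": "cool-unit-01", "password": "admin",       "segment": "iot-vlan"},
    {"id": "cool-unit-02", "password": "x9!TqL#44v",  "segment": "corp-lan"},
    {"id": "temp-probe-07", "password": "g7$wRz@21p", "segment": "iot-vlan"},
]

def audit(devices: list[dict]) -> dict[str, list[str]]:
    """Map device IDs to the list of hardening issues found."""
    findings: dict[str, list[str]] = {}
    for d in devices:
        issues = []
        if d["password"] in DEFAULT_PASSWORDS:
            issues.append("factory default password")
        if d["segment"] != "iot-vlan":   # assumption: isolated segment name
            issues.append("not on isolated IoT segment")
        if issues:
            findings[d["id"]] = issues
    return findings

print(audit(devices))
```

Running such a check on a schedule turns the one-time installation checklist into an ongoing control.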

2. Extending life of equipment

Purchasing new equipment regularly for a data center increases its environmental impact as well as costs. Many organizations are using upgrades, refurbishments and efficient maintenance to extend equipment lifespans. However, older equipment may have more cybersecurity vulnerabilities and be less likely to support the latest (and most secure) technologies and techniques. By regularly weighing the sustainability benefits of continuing to use a piece of equipment against its cybersecurity risk, organizations can make a balanced decision. Additionally, installing all updates in a timely manner also reduces risk.

Explore cybersecurity services

3. Virtualization

A common technique to reduce resources in data centers is virtualization. Because virtualization involves creating an abstract layer over computer hardware, organizations can use less physical equipment, resulting in lower energy consumption. A single physical server often runs multiple virtual servers. Because virtual servers consume less energy, this often significantly reduces energy consumption.

However, virtual servers contain more entry points for breaches and attacks than physical servers. Additionally, cyber criminals often target the hypervisor that manages the virtual machines. By compromising the hypervisor, threat actors take control of a large portion of the data center and can inflict significant damage, especially through a ransomware attack.

Organizations can reduce their virtualization risk by ensuring that user privileges for the virtual machines and hypervisor are appropriate for each person’s work-related tasks. Segmentation in virtualized environments ensures that cyber criminals can only access a small portion of the network and systems, which limits damage. Additionally, organizations should regularly audit which users have escalated privileges in a domain controller so attackers can’t exploit forgotten accounts waiting in the wings.
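The privilege-audit step can be sketched as a comparison of each account's granted rights against the baseline its role should need. The roles, users and right names below are hypothetical, not any hypervisor vendor's actual permission model:

```python
# Compare each account's granted hypervisor rights against its role
# baseline. Roles, users and right names are illustrative only.

ROLE_BASELINE = {
    "vm-operator": {"vm.start", "vm.stop"},
    "vm-admin":    {"vm.start", "vm.stop", "vm.create", "vm.delete"},
}

accounts = [
    {"user": "alice", "role": "vm-operator", "rights": {"vm.start", "vm.stop"}},
    {"user": "bob",   "role": "vm-operator", "rights": {"vm.start", "vm.stop", "vm.delete"}},
]

def excess_rights(accounts: list[dict]) -> dict[str, set[str]]:
    """Return the rights each user holds beyond their role baseline."""
    report = {}
    for acct in accounts:
        extra = acct["rights"] - ROLE_BASELINE[acct["role"]]
        if extra:
            report[acct["user"]] = extra
    return report

print(excess_rights(accounts))  # → {'bob': {'vm.delete'}}
```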

4. Renewable energy sources

By shifting from fossil fuels to renewable sources like solar, wind or hydropower, data centers can decrease their reliance on non-renewable energy and reduce greenhouse gas emissions. Because solar and wind farms are often in different locations than the data centers themselves, using these energy sources creates a larger attack surface that increases risk. Additionally, each system used for the new energy source adds to that surface area as well. Renewable energy sources also often rely on the power grid and the internet, which creates new sources of vulnerability. Because these energy systems often contain a high volume of sensitive data, organizations must proactively mitigate the risk of a data breach and compliance issues.

5. Data center infrastructure management (DCIM)

Green data centers typically use a DCIM to monitor and manage all aspects of the data center infrastructure, including power distribution and cooling systems, from a single location. Because of the real-time monitoring of power consumption, organizations can identify issues and make changes quickly to reduce the environmental impact instead of waiting until after the impact has occurred.

Due to its integration with other systems, the DCIM creates a target for attackers seeking access to other data. The high level of integration makes it possible for threat actors to reach the DCIM from other interconnected systems. Organizations must focus on creating strong access controls to ensure that only authorized users gain access, reducing data leaks and breaches.
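The access-control point can be illustrated with a deny-by-default authorization check for DCIM operations. The roles and permission strings below are made up for the sketch; a real DCIM product would have its own model:

```python
# Deny-by-default authorization for DCIM operations. The permission
# matrix is hypothetical, not a real DCIM product's model.

PERMISSIONS = {
    "dcim-viewer":   {"read:power", "read:cooling"},
    "dcim-operator": {"read:power", "read:cooling", "write:cooling"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow only actions explicitly granted to the role."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("dcim-operator", "write:cooling")
assert not is_allowed("dcim-viewer", "write:cooling")
assert not is_allowed("unknown-role", "read:power")  # unknown roles get nothing
```

The key design choice is the deny-by-default stance: any role or action not explicitly listed is refused, which matters in a system as highly interconnected as a DCIM.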

Balancing security and sustainability

Because sustainability is the top concern with a green data center, organizations can inadvertently make decisions that increase cybersecurity vulnerabilities. With a balanced approach that considers both sustainability and cybersecurity, organizations can reduce the environmental impact of their data center while also reducing the risk of a breach or attack.

The post How secure are green data centers? Consider these 5 trends appeared first on Security Intelligence.

By Jennifer Gregory

Are successful deepfake scams more common than we realize?

January 24, 2025, 14:00

Many times a day worldwide, a boss asks one of their team members to perform a task during a video call. But is the person assigning tasks actually who they say they are? Or is it a deepfake? Instead of blindly following orders, employees must now ask themselves if they are becoming victims of fraud.

Earlier this year, a finance worker found themselves talking on a video meeting with someone who looked and sounded just like their CFO. After the meeting was over, they dutifully followed their boss’s instructions to send 200 million Hong Kong dollars (about $25 million US).

But it wasn’t actually their boss — just an AI video representation called a deepfake. Later that day, the employee realized their terrible mistake after checking with the corporate offices of their multinational firm. They had been a victim of a deepfake scheme that defrauded the organization out of $25 million.

Businesses are often deepfake targets

The term deepfake refers to AI-created content — video, image, audio or text — that contains false or altered information, such as Taylor Swift promoting cookware and the infamous fake Tom Cruise. Even the recent hurricanes hitting the U.S. led to multiple deepfake images, including fake flooded Disney World photos and heartbreaking AI-generated pictures of people with their pets in floodwaters.

While deepfakes, also referred to as synthetic media, targeted at individuals typically serve to manipulate people, cyber criminals targeting businesses are looking for monetary gain. According to the CISA Contextualizing Deepfake Threats to Organizations information sheet, threats targeting businesses tend to fall into one of three categories: executive impersonation for brand manipulation, impersonation for financial gain or impersonation to gain access.

But the recent incident in Hong Kong wasn’t just one employee making a mistake. Deepfake schemes are becoming increasingly common for businesses. A recent Medius survey found that the majority (53%) of finance professionals have been targeted by attempted deepfake schemes. Even more concerning, over 43% admitted to ultimately falling victim to an attack.

Watch Unmask the Deepfake

Are deepfake attacks underreported?

The key word from the Medius research is “admitted.” And it raises a big question: Do people fail to report being a victim of a deepfake attack because they are embarrassed? The answer is probably. After the fact, the fake seems obvious to other people, and it’s tough to admit that you fell for an AI-generated image. But underreporting only adds to the shame and makes it easier for cyber criminals to get away with it.

Most people assume that they could spot a deepfake. But that’s not the case. The Center for Humans and Machines and CREED found a wide gap between people’s confidence in identifying a deepfake and their actual performance. Because many people overestimate their ability to identify a deepfake, it adds to the shame when someone falls victim, which likely leads to underreporting.

Why people fall for deepfake schemes

The employee who was tricked by the deepfake of the CFO to the tune of $25 million later admitted that when they first got the email supposedly from their CFO, the mention of a secret transaction made them wonder if it was actually a phishing attempt. But once they joined the video call, they recognized other members of their department and decided it was authentic. However, the employee later learned that the video images of their colleagues were also deepfakes.

Many people who are victims overlook their concerns, questions and doubts. But what makes people, even those educated on deepfakes, push their concerns to the side and choose to believe an image is real? That’s the $1 million — or $25 million — question that we need to answer to prevent costly and damaging deepfake schemes in the future.

Researchers publishing in Sage Journals asked who was more likely to fall for deepfakes and found no clear pattern around age or gender, though older individuals may be more vulnerable to the scheme and have a harder time detecting it. Additionally, the researchers found that while awareness is a good starting point, it appears to have limited effectiveness in preventing people from falling for deepfakes.

However, computational neuroscientist Tijl Grootswagers of Western Sydney University likely hit the nail on the head as to the challenge of spotting a deepfake: it’s a brand new skill for each of us. We’ve learned to be skeptical of news stories and bias, but questioning the authenticity of an image we can see goes against our thought processes. Grootswagers told Science Magazine “In our lives, we never have to think about who is a real or a fake person. It’s not a task we’ve been trained on.”

Interestingly, Grootswagers discovered that our brains are better at detection without our intervention. He discovered that when people looked at a picture of a deepfake, the image resulted in a different electrical signal to the brain’s visual cortex than a legitimate image or video. When asked why, he wasn’t quite sure — maybe the signal never reached our consciousness due to interference from other brain regions, or maybe humans don’t recognize the signals that an image is fake because it’s a new task.

This means that each of us must begin to train our brain to consider that any image or video we view could be a deepfake. By asking this question every time we begin to act on content, we may eventually learn to notice the signals our brains are already generating when they spot a fake. And most importantly, if we do fall victim to a deepfake, especially at work, it’s key that we report every instance. Only then can experts and authorities begin to curb the creation and proliferation of deepfakes.

The post Are successful deepfake scams more common than we realize? appeared first on Security Intelligence.

  • ✇Security Intelligence
  • How cyberattacks on grocery stores could threaten food security Jennifer Gregory

How cyberattacks on grocery stores could threaten food security

January 24, 2025, 11:00

Grocery store shoppers at many chains recently ran into an unwelcome surprise: empty shelves and delayed prescriptions. In early November, Ahold Delhaize USA was the victim of a cyberattack that significantly disrupted operations at more than 2,000 stores, including Hannaford, Food Lion and Stop and Shop. Specific details of the nature of the attack have not yet been publicly released.

Because the attack affected many digital systems, some stores were not able to accept credit/debit cards, while others had to shut down online ordering. Additionally, Hannaford’s website was offline for several days. Food supply issues have lasted several weeks in some cases, especially in the New England area, illustrating the impact cyberattacks have on people’s everyday lives.

Cybersecurity in the agrifood industry

The importance of cybersecurity in the food supply chain continues to grow as the agrifood industry becomes increasingly digitized. The rise of smart farming means a cyberattack can disrupt growing and harvesting. Beyond production and distribution, an attack can even compromise food safety. For example, a cyberattack could interfere with the technology that monitors food temperature during production, which can lead to contamination.

Cybersecurity is especially key in this industry because one issue in one segment can quickly compound across the globe. Because of the complex process of bringing food from farm to table, a single vulnerability in one small company can have a major impact on the food supply chain. Additionally, many agrifood companies rely heavily on third-party vendors.

“One challenge with ransomware attacks is that they can cause consequences for suppliers or partners of the victim company, in addition to the direct impact on the victim company itself. Considering the integrated and interconnected nature of the food and agriculture industry, a disruption in one company likely will have a cascading [effect],” according to the Farm to Table Ransomware Report by Food Ag ISAC.

For example, many grocery store chains hire vendors to transport products from warehouses to stores. A cyberattack on the transportation company can shut down critical systems, meaning that food does not arrive as scheduled, which leads to empty shelves.

“Attacks targeting suppliers, distributors or logistics providers can lead to delays in product delivery, shortages or the introduction of counterfeit products. Disruptions in the supply chain can have far-reaching consequences, affecting not only the profitability of companies but also impacting food availability and increasing prices for consumers,” reports Food Safety magazine.

According to Forbes, FBI Special Agent Gene Kowel, speaking at the August FBI Agriculture Threats Symposium in Nebraska, said: “The cyber risk and national security threat to farms, ranches and food processing facilities is growing exponentially. The threats are evolving, becoming more complex and severe.” He also stated that the four key threats facing the agriculture sector are ransomware attacks, foreign malware, data and intellectual property theft and bioterrorism impacting food production and the water supply. Additionally, he warned that foreign entities are actively attempting to destabilize the U.S. agriculture industry.

Explore cybersecurity services

Recent agrifood cyberattacks

While grocery stores have dominated the headlines lately regarding agrifood cyberattacks, other companies faced cybersecurity attacks in recent years.

In October 2021, Schreiber Foods, a dairy processing company, was the victim of a ransomware attack. According to ZDNET, the attack disrupted the milk supply because the digital systems that manage milk processing were knocked offline, and milk transporters were unable to access the building. Wisconsin State Farmer reported that milk deliveries resumed five days after the attack and that the company faced a $2.5 million ransom demand.

The highly publicized attack on JBS, the world’s largest meat-packing company, also happened in 2021. Business was disrupted at 47 locations in Australia and nine in the U.S. for five days after the Russian hacker group REvil encrypted the organization’s systems. JBS reportedly paid $11 million in ransom following the attack, which also led to some meat shortages as well as temporarily higher meat prices.

Farm and Food Cybersecurity Act

To strengthen cybersecurity in the agrifood industry, the Farm and Food Cybersecurity Act is currently in committee in both the U.S. House of Representatives and the U.S. Senate. A key component of the act is that the secretary of agriculture will conduct a study every two years on cybersecurity threats and vulnerabilities within the agriculture and food sectors.

Additionally, the secretary of agriculture will work with other agencies to conduct an annual cross-sector crisis simulation exercise for food-related cyber emergencies or disruptions.

“Food security is national security, so it’s critical that American agriculture is protected from cyber threats,” says Rep. Elissa Slotkin, D-Mich. “No longer just some tech issue, cyberattacks have the potential to upend folks’ daily lives and threaten our food supply — as we saw a couple of years ago when the meat-packing company JBS was taken offline by a ransomware attack. This legislation will require the Department of Agriculture to work closely with our national security agencies to ensure that adversaries like China can’t threaten our ability to feed ourselves by ourselves.”

Reducing the risk of agrifood cyberattacks

Because of the critical nature of their services in relation to the food supply, all companies involved in the agrifood industry should make cybersecurity a high priority. To help improve cybersecurity in the industry, the Cybersecurity and Infrastructure Security Agency (CISA) recently released a Food and Agriculture Cybersecurity Checklist.

The checklist offers practical tips for securing systems across the food supply chain.

While the recent empty shelves in grocery stores are a stark reminder of the importance of cybersecurity, the agrifood industry must stay proactive about addressing cybersecurity risks every day of the year.

The post How cyberattacks on grocery stores could threaten food security appeared first on Security Intelligence.

  • ✇Security Intelligence
  • Taking the fight to the enemy: Cyber persistence strategy gains momentum Jonathan Reed

Taking the fight to the enemy: Cyber persistence strategy gains momentum

January 23, 2025, 11:00

The nature of cyber warfare has evolved rapidly over the last decade, forcing the world’s governments and industries to reimagine their cybersecurity strategies. While deterrence and reactive defenses once dominated the conversation, the emergence of cyber persistence — actively hunting down threats before they materialize — has become the new frontier. This shift, spearheaded by the United States and rapidly adopted by its allies, highlights the realization that defense alone is no longer enough to secure cyberspace.

The momentum behind this proactive cyber strategy can be found in America’s Defend Forward initiative, the rise of cyber persistence among U.S. allies and the successful takedowns of infamous groups like LockBit ransomware. Meanwhile, the broader implications of this shift are revealed in the U.S. Department of State’s focus on digital solidarity in contrast to digital sovereignty.

Cyber persistence: A strategic pivot

The idea of cyber persistence, as opposed to cyber deterrence, is reshaping global cybersecurity efforts. Traditional deterrence theory, which aims to dissuade adversaries through the promise of retaliation, has failed to address the complexities of cyber criminal behavior. Malicious cyber actors, including state-sponsored entities and organized crime groups, continue to exploit vulnerabilities, which leads to critical infrastructure compromise, sensitive data theft and government or corporate network disruption.

In response, the U.S. Department of Defense 2023 Cyber Strategy reinforced the country’s commitment to “Defend Forward,” a proactive approach designed to directly disrupt adversaries’ operations. This strategy empowers cybersecurity forces to identify malicious activities before they escalate, track adversaries and take action to prevent or mitigate attacks. U.S. allies like the United Kingdom, Japan, Canada and the Netherlands have subsequently adopted similar strategies. They’ve all come to realize that cyberspace requires constant vigilance and operational persistence to stay ahead of evolving threats.

As the U.S. DoD outlines, engaging adversaries early in planning is essential to creating a more secure cyberspace. This involves tracking the capabilities and intentions of malicious actors and degrading their ability to act. Such a proactive stance requires cooperation, coordination and trust among allies. This is especially true since cyber campaigns often involve joint operations where one nation may invite another into its networks to assist in defense.

The shift from deterrence to persistent engagement

Increasingly, nations like the UK and the Netherlands are taking proactive measures to combat cyber threats by operationalizing cyber persistence. For example, the UK’s National Cyber Strategy highlights the importance of actively tackling adversaries’ cyber dependencies and emphasizing the need for persistent engagement in cyberspace. Further examples of this shift include Japan’s efforts to introduce active cyber defense and Canada’s participation in “Hunt Forward” operations. Both aim to actively search for and disarm malicious actors.

NATO has also acknowledged the necessity of a more proactive cyber stance. The 2022 NATO Strategic Concept recognizes that cyberspace is “contested at all times.” The document explicitly states that the cumulative effect of cyber activities could reach the level of an armed attack, potentially triggering NATO’s mutual defense obligations under Article 5. This signals the acceptance of cyber persistence as a critical aspect of national and collective security.

While deterrence remains a core strategy for nuclear and conventional warfare, it is becoming clear that in cyberspace, persistence — constantly identifying, mitigating and neutralizing threats — is critical to preventing large-scale cyber incidents.

Explore IBM X-Force Red offensive security services

The LockBit ransomware takedown: A case study in persistence

The February 2024 takedown of the LockBit ransomware group under Operation Cronos serves as a prime example of how persistent cyber strategies can effectively neutralize significant threats. LockBit, one of the most prolific Ransomware-as-a-Service (RaaS) groups, was responsible for approximately a quarter of all ransomware attacks in 2023. This included attacks on hospitals and other critical services during the COVID-19 pandemic.

Operation Cronos, a coordinated international effort, resulted in significant arrests, sanctions and the seizure of LockBit’s operational infrastructure. This was not just a technical takedown but a broader effort to undermine the group’s viability. Law enforcement agencies managed to access LockBit’s internal communications, expose its affiliates and disrupt its financial networks. This cumulative disruption severely damaged the group’s reputation, making it difficult for them to regain support within the cyber crime community.

While LockBit’s ringleader, known as “LockBitSupp,” has tried to claim the group’s resurgence, analysis shows that the law enforcement operation has had lasting effects. The exposure of the group’s inner workings has sowed distrust among affiliates, with many distancing themselves from the group. The takedown’s success demonstrates the power of cyber persistence, as it involved not only technical measures but also strategic psychological operations aimed at eroding the group’s support base.

Digital solidarity vs. digital sovereignty

At the heart of the United States’ international cyber strategy lies the concept of digital solidarity, which stands in stark contrast to the protectionist policies of digital sovereignty. Digital solidarity promotes collaboration and mutual support among nations, emphasizing the need for a secure, inclusive and resilient digital ecosystem. This strategy, unveiled in the U.S. Department of State’s 2024 International Cyberspace and Digital Policy Strategy, advocates for building international coalitions, aligning regulatory frameworks and fostering a free flow of data across borders.

The key pillars of digital solidarity include promoting an inclusive digital ecosystem, aligning governance approaches to data and advancing responsible state behavior in cyberspace. These efforts aim to ensure that all nations, especially emerging economies, have access to secure digital infrastructure and that global cooperation can thwart cyber threats through shared intelligence and mutual defense efforts.

In contrast, digital sovereignty emphasizes national control over digital infrastructure and data. Countries that adopt this stance seek to protect their digital assets by restricting foreign access to their markets and mandating data localization. While proponents argue that this approach can reduce dependence on foreign technology and enhance security, critics warn that it fragments the global digital ecosystem and makes it harder to respond collectively to cyber threats.

The tension between digital solidarity and digital sovereignty has significant implications for global cybersecurity. As the world’s digital infrastructure becomes more interconnected, the U.S. and its allies argue that collaboration, not isolation, is the key to addressing the complex cyber challenges of the future.

The future of proactive cyber defense

The shift from deterrence to persistence in cyberspace represents a new era of proactive cyber defense. By identifying vulnerabilities, disrupting adversaries’ operations and engaging in continuous cyber campaigns, the U.S. and its allies are reshaping the way nations approach cybersecurity.

Operations like the LockBit takedown underscore the effectiveness of this strategy. Plus, the emphasis on digital solidarity highlights the importance of international cooperation in creating a safer and more resilient digital ecosystem. As cyber threats continue to evolve, the persistence approach will likely become a cornerstone of modern cybersecurity. The goal is to ensure that nations can stay ahead of their adversaries and secure the future of cyberspace.

The post Taking the fight to the enemy: Cyber persistence strategy gains momentum appeared first on Security Intelligence.

  • ✇Security Intelligence
  • 2024 Cloud Threat Landscape Report: How does cloud security fail? Jennifer Gregory

2024 Cloud Threat Landscape Report: How does cloud security fail?

January 22, 2025, 11:00

Organizations often set up security rules to help reduce cybersecurity vulnerabilities and risks. The 2024 Cost of a Data Breach Report discovered that 40% of all data breaches involved data distributed across multiple environments, meaning that these best-laid plans often fail in the cloud environment.

Not surprisingly, many organizations find keeping a robust security posture in the cloud to be exceptionally challenging, especially with the need to enforce security policies consistently across dynamic and expansive cloud infrastructures. The recently released X-Force Cloud Threat Landscape 2024 Report delved into which specific rules are most commonly failing. By understanding key vulnerabilities, organizations can then figure out the best approach for reducing their risks.

“Regulations are increasing, requiring organizations to implement more compliance policies with security top of mind, which puts a lot of overhead on these organizations,” says Mohit Goyal, Product Management at Red Hat Insights. “The Compliance service within Red Hat Insights provides a more elegant way to manage and deploy these policies on systems to get ahead of any gaps.”

Environment influences failure of security rules

During the research, X-Force analyzed two sets of data across the cloud — one set operating in 100% cloud-only environments and the other with a hybrid of 50% to 99% of their Red Hat Enterprise Linux (RHEL) systems in the cloud. Interestingly, researchers found a different set of most failed rules for each of the two different groups.

Goyal says that the team intentionally looked at both environments because Red Hat caters to customers across the hybrid cloud. During the research, the team discovered that in the 100% cloud group, security rules often failed due to misconfiguring assets, meaning that organizations should focus on configuration guidelines. Meanwhile, in the hybrid environment, most failed rules revolved around authentication and cryptography policies.

When asked who is often responsible for the configurations, Goyal says it varies at different organizations. At smaller companies, a single employee often wears multiple hats. However, at larger organizations, the roles are typically well defined with multiple people involved — for example, a system administrator, a security/risk administrator and a compliance administrator.

Top failed rules in organizations with 100% cloud systems

Researchers found that in environments where all data was stored in the public cloud, the most commonly failed rule concerned configuration and security guidelines for Linux systems. This rule focuses on essential security and management settings, such as setting the default zone for the firewall and isolating the /tmp directory on a separate partition to enhance security and manage disk space effectively. The recommended mitigation is to configure the firewall service’s default zone so that network security is properly enforced on Red Hat-based systems.
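The two settings this rule checks can be sketched as a short hardening snippet. This is a minimal illustration assuming a RHEL-family system with firewalld installed; the partition device path is a placeholder, not a real recommendation from the report.

```shell
# Set a restrictive default firewall zone so interfaces not explicitly
# assigned to a zone are locked down (applies to runtime and permanent config)
firewall-cmd --set-default-zone=drop

# Isolate /tmp on its own partition with hardened mount options.
# /dev/mapper/vg0-tmp is a hypothetical volume name -- substitute your own.
echo '/dev/mapper/vg0-tmp  /tmp  xfs  defaults,nodev,nosuid,noexec  0 0' >> /etc/fstab
mount -o remount /tmp
```

The `nodev`, `nosuid` and `noexec` options prevent device files, setuid binaries and executables from being abused in the world-writable /tmp directory.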

Other top failed rules include:

  • Secure mount options for critical directories
  • User home directory management
  • Service management
  • NFS service management
Read the Cloud Threat Landscape Report

Top failed rules in organizations with hybrid environments

After analyzing data from hybrid environments, researchers found that authentication and cryptography policies failed most often. These rules standardize and secure authentication mechanisms and cryptographic requirements within a given policy; organizations set them to ensure consistent, strong security practices across systems. The recommended mitigation is to use authselect to standardize and simplify the management of authentication settings.
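As a sketch of what that mitigation can look like on a RHEL system, the commands below apply a standard authselect profile. The chosen profile and feature are illustrative assumptions, not the report’s exact recommendation.

```shell
# Show which authselect profile (if any) currently manages PAM/nsswitch
authselect current

# Apply the standard sssd profile with account lockout (faillock) enabled,
# backing up the existing configuration first
authselect select sssd with-faillock --backup=pre-authselect --force
```

Centralizing authentication settings through a profile like this is what keeps them consistent across a fleet, rather than hand-editing PAM files per host.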

Other commonly failed rules in hybrid environments include:

  • Account and SSH configuration
  • SSH security measures
  • Umask configuration
  • Process debugging restrictions

Why mitigation commonly fails

Because each rule contains mitigation, a common question from the report was why mitigations so often fail. But the answer is not a simple one. The reasons can include a wide range of factors, including misconfiguration, lack of training and different environments.

“Security, in general, is a complex area, and with the threat landscape constantly changing and evolving, it’s hard to maintain the status quo,” Goyal says. “As new technologies and new requirements come into play and the footprint increases, it ultimately leads to a lot of complexity.”

Goyal predicts that policies will grow in number and only become more complex. Organizations need solutions that help them manage this complexity while reducing the burden of operational overhead. By highlighting the gaps, leaders can understand where risk lies and create a plan to close those gaps.

Reducing rule failures

Confirming that all rules are followed and the mitigation is used correctly when a rule fails is time-consuming, explains Goyal. At large enterprises, cybersecurity professionals bear a lot of burden with complex processes. Team members must constantly optimize and check for security while also completing other tasks. Organizations are increasingly turning to Ansible automation, such as with Red Hat Insights, for more effective and efficient remediation.

With Red Hat Insights, an organization can deploy its compliance policies (for example, a PCI or HIPAA data governance policy) on RHEL systems. After analyzing these systems, Insights displays how compliant or non-compliant they are with the organization’s policies and recommends actions to address the gaps. Organizations can then deploy the Ansible playbook on the affected systems with just a few clicks to become compliant again. Because the process is automated, it’s more effective and efficient than manually identifying and remediating each system separately.
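The click-to-remediate flow described above can also be driven from the command line. A hedged sketch, assuming an Insights-generated remediation playbook has been downloaded; the file names and inventory are hypothetical.

```shell
# Preview what the remediation playbook would change, without touching hosts
ansible-playbook -i inventory.ini remediation.yml --check --diff

# Apply the fixes across the inventory
ansible-playbook -i inventory.ini remediation.yml

# Re-run the compliance scan so Insights reflects the remediated state
insights-client --compliance
```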

“Large enterprises need this ability to help keep their costs in control and prevent security gaps from being exploited by bad actors,” says Goyal.

Cloud security: A shared responsibility

Because multiple organizations are involved in a cloud environment, a key question is often about who bears the responsibility for security — the organization or the vendor. Goyal says that security is a dual responsibility.

“As a vendor to our customer, there is a responsibility to make sure they have a product that is built with its security posture front-and-center and has feature-rich functionality that allows organizations to effectively manage their organizational IT security strategy. However, they have to also configure and deploy the product correctly,” says Goyal. “Additionally, organizations need to make sure that their cloud provider emphasizes operational security. At the same time, organizations also need to take ownership for the security of the configurable components of their environment.”

The post 2024 Cloud Threat Landscape Report: How does cloud security fail? appeared first on Security Intelligence.
