When you shouldn’t patch: Managing your risk factors

Look at any article with advice about best practices for cybersecurity, and about third or fourth on that list, you’ll find something about applying patches and updates quickly and regularly. Patching for known vulnerabilities is about as standard as it gets for good cybersecurity hygiene, right up there with using multi-factor authentication and thinking before you click on links in emails from unknown senders.

So imagine my surprise when attending Qualys QSC24 in San Diego to hear a number of conference speakers say that patching shouldn’t be an automatic reaction. In fact, they say, there are times when it is better not to patch at all.

No, you don’t need to fix everything, says Dilip Bachwani, Chief Technology Officer with Qualys.

“It’s not practical,” Bachwani adds. “Even if there is a vulnerability, it may not apply in your environment.” It could be an application that isn’t an internet-facing asset or something secured through other controls.

Knowing your risk factor

The knee-jerk reaction when a new patch is released is to get it installed as quickly as possible to prevent a vulnerability from turning into a cyber incident. However, Bachwani and his Qualys colleagues stress that security teams need to take a step back and evaluate their organization’s risk threshold.

What that evaluation will first discover is a lot of vulnerabilities across their infrastructure. A study by Coalition expects the total number of common vulnerabilities and exposures (CVEs) to increase by 25% in 2024 to 34,888 vulnerabilities, or nearly 3,000 per month.
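As a quick sanity check, the cited figures are mutually consistent: a 25% rise to 34,888 CVEs implies a prior-year baseline of roughly 27,900 and just under 3,000 new CVEs per month. This is simply arithmetic on the article's numbers:

```python
# Back-of-the-envelope check of Coalition's projection: a 25% increase
# in 2024 yields 34,888 CVEs, implying the prior-year baseline and the
# "nearly 3,000 per month" rate cited above.
projected_2024 = 34_888
growth = 0.25

implied_2023 = projected_2024 / (1 + growth)  # prior-year baseline
per_month = projected_2024 / 12               # average monthly rate

print(round(implied_2023))  # 27910
print(round(per_month))     # 2907
```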

“New vulnerabilities are published at a rapid rate and growing,” Tiago Henriques, Coalition’s Head of Research, says. “Most organizations are experiencing alert fatigue and confusion about what to patch first to limit their overall exposure and risk.”

With the steady increase in the number of CVEs, it is easy to think that every vulnerability is critical — and if every vulnerability is given an equal risk value, patching becomes overwhelming. The researchers at Qualys recommend prioritizing the risk involved with each vulnerability so that you can determine what should be patched first and what might not need to be patched at all.
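A minimal sketch of what such prioritization could look like in code, assuming hypothetical weighting factors (internet exposure, compensating controls); this illustrates the idea only and does not reflect Qualys's actual scoring model:

```python
# Risk-prioritization sketch: score each vulnerability by base severity
# weighted by environmental factors, then sort. Factors and weights are
# illustrative assumptions, not any vendor's real formula.
def risk_score(vuln: dict) -> float:
    score = vuln["cvss"]                      # base severity, 0.0-10.0
    if vuln["internet_facing"]:
        score *= 2.0                          # exposed assets rank higher
    if vuln["mitigated_by_other_controls"]:
        score *= 0.3                          # compensating controls lower urgency
    return score

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False, "mitigated_by_other_controls": True},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True,  "mitigated_by_other_controls": False},
]

for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 2))
```

Note how the lower-CVSS vulnerability on an internet-facing asset outranks the "critical" one that is mitigated elsewhere, which is exactly the point Bachwani makes: severity alone doesn't determine priority in your environment.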

How to prioritize your organization’s vulnerabilities

Prioritizing vulnerabilities requires knowing all of your assets across the organization and identifying and monitoring the attack surface. However, Qualys research found that only 9% of companies actively monitor 100% of their attack surface. Shadow IT, third-party vendor risks, digital transformations rushed through without assessing the technologies and assets being added, and failure to recognize emerging threat vectors are just some of the reasons organizations are unable to properly monitor their attack surface.

Deploying an attack surface management program will identify which technologies are attached to your network, where they sit and which assets need protection. The critical requirements of an attack surface management program are:

  • Visibility across hybrid IT
  • Dynamic cybersecurity needs with rapid identification
  • Unauthorized software tracking in real-time
  • Finding and remediating blind spots

The more familiar you become with the systems accessing your network, the easier it will be to know your corporate assets and prioritize their importance. When levels of risk tolerance are assigned to these assets, it will then be easier to prioritize critical and non-critical vulnerabilities to be patched or, in some cases, not patched.


When to slow down the patching process

Patching protocols should be unique to your organization, based on your internal measures of mission criticality and risk tolerance. Whereas one organization may decide that the most critical vulnerabilities must be patched immediately, another may find that seven days is an acceptable time frame to reduce risk for its most important assets. Patch management programs tier assets, beginning with those that are most critical and can least afford downtime if something goes wrong, then moving down through secondary tiers with longer patching windows.
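Such a tiered policy could be sketched as a simple mapping from asset tier to a maximum patch window. The tiers and windows below are illustrative assumptions; every organization sets its own thresholds:

```python
# Sketch of tiered patch SLAs: each asset tier maps to a maximum patch
# window, echoing the article's contrast between "immediately" and
# "seven days". Tier names and durations are made up for illustration.
from datetime import timedelta

PATCH_WINDOWS = {
    "tier-1-critical":  timedelta(hours=24),  # mission-critical, minimal tolerated downtime
    "tier-2-important": timedelta(days=7),
    "tier-3-standard":  timedelta(days=30),
}

def patch_deadline(tier: str) -> timedelta:
    """Return the maximum time allowed before a patch must be applied."""
    return PATCH_WINDOWS[tier]

print(patch_deadline("tier-2-important").days)  # 7
```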

But there are times when it is smart to slow down or even skip patching altogether. They include:

  • An important and time-sensitive project is in progress and requires uninterrupted computer time
  • The patch has reported bugs or creates compatibility problems with the application in a testing sample
  • The vulnerable software is limited in scope within the organization and can be isolated
  • Other mitigating controls can be put in place
  • The application never uses the functions with the known vulnerability
  • The costs of patching outweigh the benefits. If the code is outdated and needs to be rewritten, for example, then it doesn’t make sense to take the time and expense to apply the patch.
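Taken together, the criteria above could be expressed as a simple screening check. The field names are hypothetical; a real program would pull these signals from asset-management and testing pipelines:

```python
# "Patch or defer" sketch based on the deferral criteria listed above.
# Any one criterion being true is grounds to at least pause and evaluate.
def should_defer_patch(ctx: dict) -> bool:
    reasons = (
        ctx.get("critical_project_in_progress"),   # uninterrupted compute needed
        ctx.get("patch_has_known_bugs"),           # bugs or compatibility issues in testing
        ctx.get("software_can_be_isolated"),       # limited scope, can be quarantined
        ctx.get("mitigating_controls_in_place"),   # other controls cover the risk
        ctx.get("vulnerable_function_unused"),     # app never calls the flawed code
        ctx.get("patch_cost_exceeds_benefit"),     # e.g., code is due for a rewrite
    )
    return any(reasons)

print(should_defer_patch({"patch_has_known_bugs": True}))  # True
print(should_defer_patch({}))                              # False
```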

Cybersecurity insurance and patching

With the increase in CVEs and the ever-looming threat of a cyber incident, many organizations are looking at how to maximize their cybersecurity insurance. Given the strict rules and audits required to be eligible for coverage, will an approach of patching only when truly necessary downgrade your organization in insurers' eyes?

Bachwani says no. “I actually think a solution like this will enable cyber insurers to be more effective.”

The way the insurance marketplace works today is that it is less focused on the company’s internal data and more on the organization’s overall cybersecurity posture.

“If I’m able to clearly demonstrate that we internally have really good hygiene, my insurance should be lower,” says Bachwani.

To patch or not to patch?

In the end, the decision on whether or not to patch comes down to a single question: What is the value to the business of patching or not patching? And that is determined by the organization's risk tolerance. Recognizing the consequences of downtime or a cyber incident will help prioritize the critical vulnerabilities that deserve the time and resources to patch. But accepting that you can't patch everything will give your team the space to focus on the biggest threats.

The post When you shouldn’t patch: Managing your risk factors appeared first on Security Intelligence.

Will AI threaten the role of human creativity in cyber threat detection?

Cybersecurity requires creativity and thinking outside the box. It’s why more organizations are looking at people with soft skills and coming from outside the tech industry to address the cyber skills gap. As the threat landscape becomes more complex and nation-state actors launch innovative cyberattacks against critical infrastructure, there is a need for cybersecurity professionals who can anticipate these attacks and develop creative preventive solutions.

Of course, a lot of cybersecurity work is mundane and repetitive — monitoring logs, sniffing out false positive alerts, etc. Artificial intelligence (AI) has been a boon in filling the talent gaps when it comes to these types of tasks. But AI has also proven useful for many of the same things that creative thought brings to the threat table, such as addressing more sophisticated threat actors, the rapid increase of data and the hybrid infrastructure.

However, many companies are seeing the value of AI, especially generative AI (gen AI), in handling a greater share of creative work — not just in cybersecurity but also in areas like marketing and public relations, writing and research. But are these organizations using AI in a way that could threaten the importance of human creativity in threat detection?

Why creativity is important to cybersecurity

The very simple reason why cybersecurity requires innovative people is that threat actors are already coming up with novel approaches to get into your system. Are they using gen AI to launch their attacks? You bet they are; phishing emails have never been more grammatically polished or realistic. But before AI was available, threat actors were designing social engineering attacks that attracted clicks. Now, they have advanced beyond “how can we lure in victims” to “how can we get more out of a single attack after we lure in the victims.”

Creativity isn’t just coming up with new ideas. It is also the ability to see things through a big-picture lens, interpret historical data and know where to find information you might not realize you need. For example, creative thought is required for the following security tasks:

  • Threat hunting or predicting a threat actor’s move or finding their tracks in a system
  • Finding buried evidence in a forensic search
  • Understanding historical data in anomaly detection
  • Telling a real email or document from a well-designed phishing attack
  • Verifying new zero-day attacks and other malware variants found in previously unknown vulnerabilities
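The anomaly-detection item is a good illustration of why historical context matters. A statistical baseline can flag an outlier mechanically, but deciding whether a spike is an attack or, say, a quarter-end batch job still takes human judgment. A minimal sketch with purely illustrative numbers:

```python
# Simple z-score anomaly check against a historical baseline.
# The model flags the outlier; an analyst decides what it means.
from statistics import mean, stdev

baseline_logins = [102, 97, 110, 105, 99, 101, 108]  # daily login counts (illustrative)
today = 240

mu, sigma = mean(baseline_logins), stdev(baseline_logins)
z = (today - mu) / sigma

print(z > 3)  # True: today's count is far outside the historical pattern
```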

AI can augment human creativity, but gen AI gets a lot of things wrong. Users have found themselves in situations where AI claimed plagiarism on original work or AI hallucinations offered false information that nullified the research of human analysts. AI algorithms are also susceptible to bias that could lead to false positives.


AI’s role in creative cybersecurity and beyond

While many creative people, cybersecurity professionals and beyond, see gen AI as a mixed blessing, many embrace the technology because it is a huge timesaver.

“Gen AI can help prototype much faster because the large language models can take over the refactoring and documentation of code,” wrote Aili McConnon in an IBM blog post. Also, the article pointed out, AI tools can help users create prototypes or visualize their ideas in minutes versus hours or days.

Creativity married to AI can help identify future leaders. According to research from IBM, two-thirds of company leaders found that AI is driving their growth, with four specific use cases — IT operations, user experience, virtual assistants and cybersecurity — most commonly favored by leaders.

“A Learner will typically copy predefined scenarios using out-of-the-box technologies,” Dr. Stephan Bloehdorn, Executive Partner and Practice Leader for AI, Analytics and Automation at IBM Consulting DACH, was quoted in the study. “But a Leader develops custom innovations.”

Over-reliance on AI?

As gen AI becomes more ubiquitous in the workplace and as more creative folks and leaders rely on it as a way to put their ideas in motion, are we also relying on the technology to the point that it could lead to a degradation of other important necessary skills, like the ability to analyze data and create viable solutions?

It is unclear whether organizations are over-relying on gen AI, according to Stephen Kowski, Field CTO at SlashNext Email Security+, but reliance is increasingly becoming a designed-in feature, an unintended consequence of how organizations allocate resources.

“While AI excels at processing massive volumes of threat data, real-world attacks constantly evolve beyond historical patterns, requiring human expertise to identify and respond to zero-day threats,” said Kowski in an email interview. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.”

Yet, Kris Bondi, CEO and Co-Founder of Mimoto, isn’t worried about AI leading to a degradation of skills — at least not for the foreseeable future.

“One of the biggest challenges for cybersecurity professionals is having too many alerts and too many false positives. AI is only able to automate a small percentage of responses. It’s more likely that AI will eventually automate additional requirements for someone deemed to be suspicious or the elevation of alert so that a human can analyze the situation,” Bondi said via email.
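The balance Kowski and Bondi describe, AI auto-handling high-volume routine detection while escalating novel or low-confidence alerts to a human, could be sketched as a triage routing function. Thresholds and field names here are assumptions for illustration:

```python
# Alert-triage sketch: auto-close high-confidence routine alerts,
# escalate novel or uncertain ones to an analyst. Thresholds are
# illustrative, not recommendations.
def triage(alert: dict) -> str:
    if alert["matches_known_benign_pattern"] and alert["model_confidence"] >= 0.95:
        return "auto-close"
    if alert["novel_pattern"] or alert["model_confidence"] < 0.60:
        return "escalate-to-analyst"
    return "queue-for-review"

print(triage({"matches_known_benign_pattern": True,
              "model_confidence": 0.98,
              "novel_pattern": False}))  # auto-close
```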

However, organizations should watch out for AI’s role in defining threat-hunting parameters. “If AI is the sole driver defining threat hunting parameters without spot-checks or audits, the threat intelligence approach could eventually be focused in the wrong area. The answer is more reliance on critical thinking and analytical skills,” said Bondi.

Embracing creativity in an AI-driven world

AI overall, and gen AI in particular, are going to be part of the business world going forward. It is going to play a vital role in how organizations and analysts approach cybersecurity defenses and mitigations. But the soft skills that creative thought depends on will still play an important and necessary role in cybersecurity.

“Rather than diminishing soft skills, AI integration has the opportunity to elevate the importance of communication, collaboration and strategic thinking, as security teams must effectively convey complex findings to stakeholders,” said Kowski. “The human elements of cybersecurity — leadership, adaptability and cross-functional partnership — become even more critical as AI handles the technical heavy lifting.”

The post Will AI threaten the role of human creativity in cyber threat detection? appeared first on Security Intelligence.

Cybersecurity awareness: Apple’s cloud-based AI security system

The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.

Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to: Apple needed a cloud infrastructure on which to run generative AI (genAI) models that require more processing power than its devices can supply, while also protecting user privacy, according to a ComputerWorld article.

Apple is opening the PCC system to security researchers to “learn more about PCC and perform their own independent verification of our claims,” the company announced. In addition, Apple is also expanding its Apple Security Bounty.

What does this mean for AI security going forward? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to learn what researchers think about PCC and Apple’s approach.

SI: ComputerWorld reported this story, saying that Apple hopes that “the energy of the entire infosec community will combine to help build a moat to protect the future of AI.” What do you think of this move?

Boonen: I read the ComputerWorld article and reviewed Apple’s own statements about their private cloud. I think what Apple has done here is good. I think it goes beyond what other cloud providers do because Apple is providing an insight into some of the internal components they use and are basically telling the security community, you can have a look at this and see if it is secure or not.

Also good from the perspective that AI is constantly getting bigger as an industry. Bringing generative AI components into regular consumer devices and getting people to trust their data with AI services is a really good step.

SI: What do you see as the pros of Apple’s approach to securing AI in the cloud?

Boonen: Other cloud providers do provide high-security guarantees for data that’s stored on their cloud. Many businesses, including IBM, trust their corporate data to these cloud providers. But a lot of times, the processes to secure data aren’t visible to their customers; they don’t explain exactly what they do. The biggest difference here is that Apple is providing this transparent environment for users to test that plane.


SI: What are some of the downsides?

Boonen: Currently, the most capable AI models are very big, and that makes them very useful. But when we want AI on consumer devices, there’s a tendency for vendors to ship small models that can’t answer all questions, so it relies on the larger models in the cloud. That comes with additional risk. But I think it is inevitable that the whole industry will be moving to that cloud model for AI. Apple is implementing this now because they want to give consumers trust to the AI process.

SI: Apple’s system doesn’t play well with other systems and products. How will Apple’s efforts to secure AI in the cloud benefit other systems?

Boonen: They are providing a design template that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example for other providers to say maybe we should implement something similar and provide similar testing capabilities for our customers. So I don’t think this directly impacts other providers except to push them to be more transparent in their processes.

It’s also important to mention Apple’s Bug Bounty as they invite researchers in to look at their system. Apple has a history of not doing very well with security, and there have been cases in the past where they’ve refused to pay out bounties for issues found by the security community. So I’m not sure they’re doing this entirely out of the interest of attracting researchers, but also in part of convincing their customers that they are doing things securely.

That being said, having read their design documentation, which is extensive, I think they’re doing a pretty good job in addressing security around AI in the cloud.

The post Cybersecurity awareness: Apple’s cloud-based AI security system appeared first on Security Intelligence.

4 trends in software supply chain security

Some of the biggest and most infamous cyberattacks of the past decade were caused by a security breakdown in the software supply chain. SolarWinds was probably the most well-known, but it was not alone. Incidents against companies like Equifax and tools like MOVEit also wreaked havoc for organizations and customers whose sensitive information was compromised.

Expect to see more software supply chain attacks moving forward. According to ReversingLabs’ The State of Software Supply Chain Security 2024 study, attacks against the software supply chain are getting easier and more ubiquitous.

“For example, Operation Brainleeches, identified by ReversingLabs in July, showed elements of software supply chain attacks supporting commodity phishing attacks that use malicious email attachments to harvest Microsoft.com logins,” the report stated.

It is easier to conduct software supply chain attacks, so they are increasing at an alarming rate. The ReversingLabs report saw a 1,300% increase in threats coming from open-source package repositories last year. That’s the bad news.

The good news is that cybersecurity teams and government entities recognize the risks coming from the software supply chain, and there is a lot of action toward defending against these attacks and steps to solidify security before the software is released into the wild.

Who controls the software?

Who controls the software and who controls the device are the game-changers in software supply chain security, according to Xin Qiu, Senior Director of Security Product Marketing and Management at CommScope. But that focus narrows down to the developers and system engineers creating the software and setting up the systems. The problem is that there is little integration within an organization to enable effective control.

Companies have a lot of tools, but they are scattered around, says Qiu. Everyone is siloed, doing things in different ways. That approach has to change.

It is the federal government that is taking the lead in tackling software supply chain security with technical regulations and laws.

“To improve your software supply chain security, you need to have a common standard,” says Qiu. “I think this is a good way to fill those gaps.”

The most recognizable action taken by government entities was the Executive Order (EO) from the Biden administration, which addresses the nation’s cybersecurity but especially emphasizes protecting the software supply chain. In conjunction with that EO, a cross-sector group representing different government agencies, the Enduring Security Framework (ESF) Software Supply Chain Working Panel, put together a comprehensive guide of recommended practices for securing the software supply chain for developers. NIST also has a framework to secure the software supply chain.

4 security solution trends for the software supply chain

But government guidelines and regulations only go so far, and it is up to organizations to better equip themselves with the tools, solutions and processes that allow developers, engineers and security and IT teams to address risks within the software supply chain. There are a number of ideas and tools out there, some initiated by the government, that are trending in the battle against vulnerabilities and threats.

1. Secure by design

At RSAC 2024, CISA Director Jen Easterly and a panel of cybersecurity professionals presented CISA’s Secure by Design initiative. The idea is to build security into products and make it a business feature and core technical requirement rather than the more standard approach of bolting security on as an afterthought. “During the design phase of a product’s development lifecycle, companies should implement Secure by Design principles to significantly decrease the number of exploitable flaws before introducing them to the market for widespread use or consumption,” the initiative’s website states.

Part of the presentation was the introduction of the initial group of businesses that took the Secure by Design pledge. According to CISA, “By participating in the pledge, software manufacturers are pledging to make a good-faith effort to work towards the goals listed below over the following year.” The pledge includes a list of goals for developers and organizations to work toward. These goals include standards around MFA, reducing default passwords and better transparency around vulnerability disclosure and reporting. More than 200 organizations have taken the pledge so far.


2. Software bill of materials (SBOMs)

SBOMs are a nested inventory of all the components that make up a software application. The components can include open source, third parties, patch status and licenses. SBOMs have become a key part of the software supply chain security structure and are endorsed by CISA as a way for developers to build a community that works together to share ideas and experiences around operationalization, scaling, technologies, new tools and use cases. To encourage SBOM use and understanding, CISA facilitates regular meetings from those across the software development and design community and also offers a resource library.

SBOMs can help an organization identify risks, especially in third-party and proprietary software packages; track vulnerabilities within the different components; ensure compliance; and help the team make better security decisions by being more aware of the component parts of their software.
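To make the vulnerability-tracking use case concrete, here is a minimal sketch of reading a CycloneDX-style SBOM to inventory components and flag those at known-vulnerable versions. The SBOM snippet and the vulnerable-version list are made up for illustration:

```python
# Parse a minimal CycloneDX-style SBOM and flag components whose
# (name, version) pair appears on a known-vulnerable list.
import json

sbom_json = """{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl",    "version": "1.1.1"},
    {"name": "log4j-core", "version": "2.14.1"}
  ]
}"""

# Illustrative watchlist; in practice this comes from a vulnerability feed.
KNOWN_VULNERABLE = {("log4j-core", "2.14.1")}  # a Log4Shell-era version

sbom = json.loads(sbom_json)
flagged = [
    c for c in sbom["components"]
    if (c["name"], c["version"]) in KNOWN_VULNERABLE
]
print([c["name"] for c in flagged])  # ['log4j-core']
```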

3. Supply-chain levels for software artifacts (SLSA) frameworks

SLSA is a security framework to safeguard the integrity of software artifacts. It is a checklist of standards to better improve the integrity of the software, prevent tampering and exploitation and keep the infrastructure and application packages secure. The framework was based on Google’s production workloads and offers a structured approach to evaluating the security posture of software components throughout the supply chain.

4. Governance, risk and compliance (GRC) management

GRC management is used to mitigate security risks within a software development supply chain while ensuring the software meets required regulatory compliances and security standards. Some of the areas that GRC monitors include:

  • Identifying risks across the entire software supply chain
  • Vendor risk management and assessment of third-party security posture before integrating the software into your organization’s system
  • Compliance management to meet industry and government standards
  • Policy enforcement across the development lifecycle
  • Incident response after a cyber incident caused by the software supply chain

GRC management tools can also be used with SBOM analysis.

The evolving puzzle of software supply chain security

This is just a sample of the tools and solutions used to protect the software supply chain from risk. As security is more consciously built into the software and developers and engineers share information in communities rather than working in silos, there is a fighting chance of slowing the threats against the software supply chain.

The post 4 trends in software supply chain security appeared first on Security Intelligence.

ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers

AI has made an impact everywhere else across the tech world, so it should surprise no one that the 2024 ISC2 Cybersecurity Workforce Study saw artificial intelligence (AI) jump into the top five list of security skills.

It’s not just the need for workers with security-related AI skills. The Workforce Study also takes a deep dive into how the 16,000 respondents think AI will impact cybersecurity and job roles overall, from changing skills approaches to creating generative AI (gen AI) strategies.

Budgets and the skills gap

According to the study, two-thirds of respondents think that their expertise in cybersecurity will augment AI technology; on the flip side, a third are concerned their jobs could be eliminated in an AI-focused world.

That, of course, is not going to happen immediately. Not even half the respondents have implemented gen AI into their tools. The more immediate concern for cybersecurity professionals is budgets.

“In 2024, 25% of respondents reported layoffs in their cybersecurity departments, a 3% rise from 2023, while 37% faced budget cuts, a 7% rise from 2023,” the report stated.

These budget cuts have impacted the skills gap as well: two-thirds of respondents said the cuts have not only led to current staffing shortages but are also expected to make closing the skills gap even more difficult over the next few years.

Many of the respondents pointed out that the skills gap has had a more negative effect on organizational security than the decrease in on-site staff. In part because the funding isn’t available for training and because those with skills in high demand are moving on to better-paying positions, many security teams struggle to address the threats and risks in today’s cybersecurity landscape.


The role of AI in the skills gap

Two years ago, AI wasn’t even considered a required skill set for cybersecurity jobs, but now it is a top five skill, said Jon France, CISO with ISC2.

“And we suspect that probably next year, it will be the number one in-demand skill set around security,” France said in a conversation at ISC2’s Security Congress in Las Vegas.

(If you’re wondering, the other skills in the top five are cloud, zero trust architecture, forensics, incident response and application security — all areas that have been at the top of the skills need list for a long time.)

AI’s role in cybersecurity is changing because of the exponential increase in data and the need to gather good intelligence on the data being generated.

“AI is one of the tools that can obviously consider large data sets very quickly,” said France. Still, human eyes are necessary to validate the results generated from AI models. This is where AI security skills will be most needed to advance the changes in how analysts and incident responders analyze data.

France also believes that AI will change the scope of entry-level security positions. “I think if you’re coming into the profession, and if you’ve got to pick up one thing to learn, you’ll get the most favorable opportunities if you have experience of using generative AI coding.”

Right now, however, there is a bit of a disconnect between the technical skills that hiring managers think are needed and what non-hiring managers want. Both types of managers list cloud computing security skills at the top of the list, but when asked about AI/ML skills, only 24% of hiring managers said it was a skill they want right now, ranking last on the skills-need list. When non-hiring managers are asked about the skills most in demand to advance careers, 37% said AI/ML, higher than every other listed skill but cloud security.

AI is reinventing cybersecurity skills

In its study AI in Cyber 2024, ISC2 found that 82% of respondents are optimistic that AI will improve work efficiency, and 88% thought it would impact their job role in some way. Relying more on AI in the cyber world has a lot of positive points, but there are also issues around the technology causing stress. Four in ten respondents said they aren’t prepared for the explosion of AI, according to the AI study, and 65% said their organization needs more regulations around the safe use of gen AI, according to the Workforce study.

But there are also a lot of question marks surrounding what skills will be needed. “While study participants speculated on what skills may be automated or streamlined, they cannot yet predict what activities, if any, AI will replace,” the study reported. Perhaps this is why hiring managers are showing some reluctance to hire cybersecurity professionals who have AI technical expertise.

With AI, many anticipate an uptick in the need for non-technical skills. Cybersecurity has been increasingly open to finding potential professionals outside the traditional technical areas and training them for their new roles. So it isn’t too surprising that, with hiring managers uncertain of the skills that will be required for using gen AI as a security tool (or for securing gen AI, for that matter), there is a greater willingness to default to non-technical skills seen as more transferable as the technology evolves. Overall, strong communication skills were listed as the most in-demand skill set across all of cybersecurity, followed closely by strong problem-solving skills and teamwork/collaboration skills.

The cyber workforce in the world of AI

Looking at the overall picture of how AI skills will fit into the cybersecurity workforce going forward, it is likely that the issues that hamper hiring today will have a similar impact on AI expertise. Budget cuts will decrease the workforce, as already mentioned. France pointed to the human resources gap as well, where entry-level positions are posted with requirements such as certifications that require five years of work experience.

“We also need to blow up this myth: New entrants into the cybersecurity workforce don’t have to be young. It can be a career change. In fact, career changers bring a lot of different viewpoints and experiences,” said France.

Hire for the skills the employee is bringing to the table, even if they aren’t what you need right now. “The rest,” said France, “can be taught.”

The post ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers appeared first on Security Intelligence.

Why do software vendors have such deep access into customer systems?

To the naked eye, organizations are independent entities trying to make their individual mark on the world. But that was never the reality. Companies rely on other businesses to stay up and running. A grocery store needs its food suppliers; a tech company relies on the businesses making semiconductors and hardware. No one can go it alone.

Today, the software supply chain interconnects companies across a wide range of industries. Software applications and operating systems depend on segments of the software supply chain to offer improved functionality. But while the software supply chain has improved efficiency and productivity for most organizations, it also means that if there is a vulnerability or a glitch in the software, it can halt business operations at hundreds or thousands of companies. Even the security programs that are used to protect users from cyberattacks can release exploitable software or an update with a coding mistake that can result in anything from massive data breaches to canceled flights to shutting down medical facilities because they can’t access patient records.

These software supply chain failures don’t just hurt the company. Millions of people are impacted. So why do software vendors have such deep access to an individual organization’s system so that one problem could create a nightmare scenario?

The evolution of computing

To understand why systems are so interconnected, you have to look at the evolution of both computing and software applications, according to Shiv Ramji, President of Customer Identity with Okta.

“We started from a world where programmers write on mainframes, and then we went from mainframes to the cloud and a distributed computing model,” Ramji explained during a conversation at the Oktane conference.

The benefit is that companies can now deploy applications faster and scale them elastically. There are many advantages to architecting applications embedded in cloud and network systems.

However, says Ramji, this also means that the application stack becomes more complicated and more sophisticated.

“The classic example would be if I had an app that was a social media app or photo sharing,” explained Ramji. If the user relied on a single data center and single storage mechanism, scaling would become more difficult and expensive.

“But today, you can scale this really fast because you can use S3 from Amazon for storage, and you can scale your compute,” Ramji adds. “And so, it doesn’t matter if I have two users or end up having 200 million users; I’m able to address the needs.”
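The decoupling Ramji describes, in which an application talks to a storage interface rather than one fixed data store, can be sketched roughly as follows. The class and method names are hypothetical, and an in-memory backend stands in for an elastic cloud service such as S3:

```python
# Sketch of storage decoupling: the app depends only on an interface,
# so a single-server backend can later be swapped for an elastic cloud
# backend without touching application code. All names are illustrative.

class StorageBackend:
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError

    def get(self, key: str) -> bytes:
        raise NotImplementedError


class InMemoryStorage(StorageBackend):
    """Stand-in for a single local data store (or a cloud object store)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class PhotoApp:
    """The application never names a concrete storage system."""

    def __init__(self, storage: StorageBackend):
        self.storage = storage

    def upload_photo(self, photo_id: str, data: bytes) -> None:
        self.storage.put(f"photos/{photo_id}", data)

    def download_photo(self, photo_id: str) -> bytes:
        return self.storage.get(f"photos/{photo_id}")


app = PhotoApp(InMemoryStorage())  # swap in a cloud-backed class to scale
app.upload_photo("42", b"\x89PNG...")
print(app.download_photo("42") == b"\x89PNG...")  # prints True
```

Because only the constructor argument changes, the app can serve two users or 200 million with the same code path, which is the elasticity benefit Ramji points to.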

This evolution in computing has produced application stacks that are far more complex, with many interdependencies across the system. Cloud computing services, security services and networking capabilities work seamlessly because they can be embedded into an organization’s infrastructure.


Locking in with a vendor

These interdependencies are increasingly making organizations overly reliant on specific vendors and applications to keep their business operations running smoothly. The upside to this is having third-party partnerships that integrate with your infrastructure and can be built out seamlessly. The downside is added costs from not shopping around for better deals and the greater risk of a security flaw taking down your system without warning. One bad piece of code due to an embedded vendor application can cause irreparable damage.

According to research from Dashdevs, “vendor lock-in is proven to lead to unanticipated costs and technical debt.” Reliance on these embedded applications is “proven to increase risks and vendor-specific vulnerabilities.”

When these embedded applications have a flaw — an exploited vulnerability or misconfigured code, for example — the fix can be complex. It might look as easy as deleting the bad file or applying a patch, but what happens if the problem locks you out of the system entirely? Before applying any fix, you have to identify which program is causing the problem and where within your system it is located. Is it a problem that can be fixed once via the cloud and automatically propagate to all devices, or will it require updating individual machines? Finally, what does communication between the vendor and your organization look like? Is the problem something you discovered or something disclosed to you, and how quickly and willingly will the third party take responsibility?

Unfortunately, there are no easy answers. It will come down to the individual situation — the type of vendor, how the application is embedded into your network and the problem that it causes.

“Some of those systems, some of those controls that you have in place have the potential from a resiliency standpoint to mean the difference between your customers having your service being on and available or having a complete destruction caused by an outage similar to what we’ve seen with other vendors recently,” says Charlotte Wylie, Deputy CSO with Okta.

How vendors can keep customers secure

Vendors can take steps to protect their customers from a software breakdown, beginning with recognizing their role inside their customers’ infrastructure. Wylie provided the following tips on how vendors and customers can work together to add security to embedded applications:

  • Implement access with least privilege permissions on both sides
  • Have controls and protocols in place if there is a degradation of service
  • Have well-managed accounts that are maintained and secured with your organization’s IAM team

“I think least privilege and having the right identity is super important,” says Wylie. “And then testing that on a regular basis so you have the right enterprise resiliency in place and know that your disaster recovery plan is ready to go — these are your backup plans when you have a collaboration of vendors.”
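As a rough illustration of the least-privilege principle in Wylie’s first tip, a deny-by-default permission check might look like the sketch below. The roles and permission strings are hypothetical and not tied to any particular IAM product:

```python
# Minimal least-privilege sketch: every role gets only an explicit
# allow-list of permissions, and anything not granted is denied.
# Role names and permission strings are hypothetical.

ROLE_PERMISSIONS = {
    "vendor-support": {"logs:read"},                    # read-only telemetry
    "vendor-updater": {"logs:read", "packages:write"},  # push updates, nothing else
    "internal-admin": {"logs:read", "packages:write", "config:write"},
}


def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: a role may do only what is explicitly granted."""
    return permission in ROLE_PERMISSIONS.get(role, set())


print(is_allowed("vendor-support", "logs:read"))     # prints True
print(is_allowed("vendor-support", "config:write"))  # prints False
print(is_allowed("unknown-role", "logs:read"))       # prints False
```

The design choice worth noting is the default: an unknown role or an ungranted permission falls through to denial, so forgetting to configure something fails closed rather than open.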

Every organization has become more reliant on the software supply chains and applications used across its complex network architecture. It’s almost impossible to run a business efficiently today without depending on third parties that have deep access not just to your systems directly but also through the other applications and software you use. Failure will happen. Being prepared with a recovery plan for any worst-case scenario, and thinking about how to best architect networks with third-party vendors to work through failure, will keep the downtime from turning into a news event.

The post Why do software vendors have such deep access into customer systems? appeared first on Security Intelligence.

Cybersecurity trends: IBM’s predictions for 2025

Cybersecurity concerns in 2024 can be summed up in two letters: AI (or five letters if you narrow it down to gen AI). Organizations are still in the early stages of understanding the risks and rewards of this technology. For all the good it can do to improve data protection, keep up with compliance regulations and enable faster threat detection, threat actors are also using AI to accelerate their social engineering attacks and sabotage AI models with malware.

AI might have gotten the lion’s share of attention in 2024, but it wasn’t the only cyber threat organizations had to deal with. Credential theft continues to be problematic, with a 71% year-over-year increase in attacks using compromised credentials. The skills shortage continues, costing companies an additional $1.76 million in a data breach aftermath. And as more companies rely on the cloud, it shouldn’t be surprising that there has been a spike in cloud intrusions.

But there have been positive steps in cybersecurity over the past year. CISA’s Secure by Design program signed on more than 250 software manufacturers to improve their cybersecurity hygiene. CISA also introduced its Cyber Incident Reporting Portal to improve the way organizations share cyber information.

Last year’s cybersecurity predictions focused heavily on AI and its impact on how security teams will operate in the future. This year’s predictions also emphasize AI, showing that cybersecurity may have reached a point where security and AI are interdependent, for both good and bad.

Here are this year’s predictions.

Shadow AI is everywhere (Akiba Saeedi, Vice President, IBM Security Product Management)

Shadow AI will prove to be more common — and risky — than we thought. Businesses have more and more generative AI models deployed across their systems each day, sometimes without their knowledge. In 2025, enterprises will truly see the scope of “shadow AI”: unsanctioned AI models used by staff that aren’t properly governed. Shadow AI presents a major risk to data security, and businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training and diligent detection and response.

Identity’s transformation (Wes Gyure, Executive Director, IBM Security Product Management)

How enterprises think about identity will continue to transform in the wake of hybrid cloud and app modernization initiatives. Recognizing that identity has become the new security perimeter, enterprises will continue their shift to an Identity-First strategy, managing and securing access to applications and critical data, including gen AI models. In 2025, a fundamental component of this strategy is to build an effective identity fabric, a product-agnostic integrated set of identity tools and services. When done right, this will be a welcome relief to security professionals, taming the chaos and risk caused by a proliferation of multicloud environments and scattered identity solutions.


Everyone must work together to manage threats (Sam Hector, Global Strategy Leader, IBM Security)

Cybersecurity teams will no longer be able to effectively manage threats in isolation. Threats from generative AI and hybrid cloud adoption are rapidly evolving. Meanwhile, the risk quantum computing poses to modern standards of public-key encryption will become unavoidable. Given the maturation of new quantum-safe cryptography standards, there will be a drive to discover encrypted assets and accelerate the modernization of cryptography management. Next year, successful organizations will be those where executives and diverse teams jointly develop and enforce cybersecurity strategies, embedding security into the organizational culture.

Prepare for post-quantum cryptography standards (Ray Harishankar, IBM Fellow, IBM Quantum Safe)

As organizations begin the transition to post-quantum cryptography over the next year, agility will be crucial to ensure systems are prepared for continued transformation, particularly as the U.S. National Institute of Standards and Technology (NIST) continues to expand its toolbox of post-quantum cryptography standards. NIST’s initial post-quantum cryptography standards were a signal to the world that the time is now to start the journey to becoming quantum-safe. But equally important is the need for crypto agility, ensuring that systems can rapidly adapt to new cryptographic mechanisms and algorithms in response to changing threats, technological advances and vulnerabilities. Ideally, automation will streamline and accelerate the process.

Data will become a vital part of AI security (Suja Viswesan, vice president of Security Software Development, IBM)

Data and AI security will become an essential ingredient of trustworthy AI. “Trustworthy AI” is often interpreted as AI that is transparent, fair and privacy-protecting. These are critical characteristics. But if AI and the data powering it aren’t also secure, then all other characteristics are compromised. In 2025, as businesses, governments and individuals interact with AI more often and with higher stakes, data and AI security will be viewed as an even more important part of the trustworthy AI recipe.

Organizations will continue learning the juxtaposition of AI’s benefits and threats (Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM)

As AI matures from proof-of-concept to wide-scale deployment, enterprises reap the benefits of productivity and efficiency gains, including automating security and compliance tasks to protect their data and assets. But organizations need to be aware of AI being used as a new tool or conduit for threat actors to breach long-standing security processes and protocols. Businesses need to adopt security frameworks, best practice recommendations and guardrails for AI and adapt quickly — to address both the benefits and risks associated with rapid AI advancements.

Greater understanding of AI-assisted versus AI-powered threats (Troy Bettencourt, Global Partner and Head of IBM X-Force)

Protect against AI-assisted threats; plan for AI-powered threats. There is a distinction between AI-powered and AI-assisted threats, including how organizations should think about their proactive security posture. AI-powered attacks, like deepfake video scams, have been limited to date; today’s threats remain primarily AI-assisted — meaning AI can help threat actors create variants of existing malware or a better phishing email lure. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations, while remaining mindful of strategies to defend against future AI-powered attacks.

There’s a very clear message from these predictions that understanding how AI can help and hurt an organization is vital to ensuring your company and its assets are protected in 2025 and beyond.

The post Cybersecurity trends: IBM’s predictions for 2025 appeared first on Security Intelligence.

CISO vs. CEO: Making a case for cybersecurity investments

Ask CISOs why they think there is a cyber skills shortage in their organization, what keeps them up at night or what the most important issue facing the industry is — at some point, even if not the first response, they will bring up budgets.

For example, at RSA Conference 2024, during a roundtable discussion about issues facing the cybersecurity industry, one CISO stated bluntly that budgets — or lack thereof — are the biggest problem. At a time when everything is getting more expensive, the CISO said, security budgets are being slashed.

As for the cybersecurity talent shortage, the 2024 ISC2 Cybersecurity Workforce Study noted that “39% said a lack of budget was the top reason for cyber shortages, replacing a shortage of talent as the previous top reason for staff shortages.” According to Forrester’s 2024 Cybersecurity Benchmarks Global Report, the cybersecurity budget is just 5.7% of the entire IT budget, making it very difficult for CISOs to bring in the right personnel or upgrade tools and solutions.

However, it might not be the dollar amount that is the problem as much as where the budget is coming from. CEOs think about cybersecurity differently when it is tied to IT and when the CISO reports directly to the CIO versus when the CISO can present cybersecurity as a vital cog in overall business operations and tie it directly to business risk, the Forrester report found.

“CISOs who can articulate the business value of cybersecurity, demonstrating how it can drive revenue and support strategic goals, are more likely to secure the necessary funding. This shift also reflects a growing recognition of cybersecurity’s strategic importance beyond mere IT operations,” Louis Columbus wrote.

Key issues in cybersecurity funding

Once cybersecurity is approached as a key factor in business operations rather than as a function of IT, CEOs and CISOs are more likely to be on the same page when it comes to budget.

“Security funding and oversight is a top priority for both the management team and the Board of Directors,” said Dave Gerry, CEO of Bugcrowd.

“Cybersecurity investment uplift is prioritized against the cyber threats we face as a business; the IT risks that we have identified and need to remediate or the customer and compliance obligations that we need to ensure,” Gerry added. “Thematically, however, it all points back to ensuring that the confidentiality, integrity and availability of our data we reside over is protected — whether it’s that of customers, employees or critical business partners, whilst enabling our business in-turn.”

Risk prioritization and business continuity are two key areas that George Jones, CISO at Critical Start, focuses on. Along with emerging threats and vulnerability management, Jones says these four items are the pillars of security for the enterprise as they are aligned with overall business goals and objectives.

One of the drivers behind realigning cybersecurity investments is the Securities and Exchange Commission’s (SEC) new rules around the disclosure of cybersecurity incidents. Organizations are now also required to share details about their cybersecurity risk management programs, particularly around any financial information.

“After recent SEC guidelines were announced, Boards are more focused than ever on cyber risk reduction and ensuring adequate funding is critical, especially as organization’s attack surfaces continue to rapidly expand,” said Gerry.


Collaboration between CISOs and CEOs

While CISOs and CEOs (often in conjunction with the CFO) have to build an ongoing dialogue about cybersecurity investments, they come to the table with two different interests.

“The CEO lens will be focused on obtaining satisfaction that the security initiatives deliver value with tolerable impacts on productivity, but more importantly looking for the potential of competitive advantage,” said Gareth Lindahl-Wise, CISO at Ontinue. The CISO’s approach, on the other hand, focuses on risk prevention, mitigation and solutions to meet all of the organization’s legal, regulatory and contractual obligations.

The overall goal should be to create a security posture advantageous in gaining or retaining customers or attracting investment. Ultimately, said Lindahl-Wise, these decisions lie with the CEO and board.

“When it comes to funding and risk acceptance, CISO is, largely, an expert advisor — if an informed and conscious decision has been made by a CEO, then one should argue the CISO has discharged their responsibilities,” Lindahl-Wise added.

CEO Gerry, however, said the final decision on funding allocation is made by the Board of Directors, and it is up to both the CEO and the CISO to get their buy-in on where and what security investments should be made.

“This is a key reason that the CISO should report to the CEO and have direct access to the Board of Directors,” said Gerry. “While oftentimes security can be viewed as a cost center, the new reality is that a robust security program should be a competitive differentiator and a revenue enabler, in addition to simply being the cost of doing business in an ever-expanding threat environment.”

The future is AI

CISOs have long understood the role AI plays in cybersecurity, particularly in handling some of the most mundane tasks, freeing up time for overworked security teams to deal with issues that require hands-on management. As generative AI becomes ubiquitous in the workplace, CEOs have become increasingly aware of AI’s impact on business and security risks. Some companies are adding Chief AI Officers to their IT and security teams, but even when they aren’t, CEOs still recognize the need to include AI in future security budgets.

“As threats become more sophisticated, leveraging AI tools enables us to enhance our threat detection, automate responses and improve incident management,” said Darren Guccione, CEO at Keeper Security. “Skilled professionals are needed to navigate the rapidly evolving threat landscape and ensure that our AI-driven strategies remain effective and secure and must be a budget consideration.”

How AI is defined within the cybersecurity budget will depend on how it is used. Will it be a fringe use of AI in commercial tools for productivity gains or an embedded use of AI in the organization’s core offerings?

“If it is the latter, the CEO must satisfy themselves that the organization has the right experience to manage the opportunities and risks,” Lindahl-Wise said. As for the security side of things, “My hunch is we will see AI responsibilities feature heavily in CIO/CTO roles before standalone CAIOs become the norm.”

AI might be the most current technology and security disrupter, but it won’t be the last. Like the disrupters before it, AI creates risk, both to the business and to cybersecurity, and risk is where CEOs and CISOs will focus their investments as a team.

The post CISO vs. CEO: Making a case for cybersecurity investments appeared first on Security Intelligence.

CISA’s cyber incident reporting portal: Progress and future plans

On August 29, 2024, CISA announced the launch of its new Incident Reporting Portal, part of the new CISA Services Portal.

“The Incident Reporting Portal enables entities and individuals reporting cyber incidents to create unique accounts, save reports and return to submit later, and eliminate the repetitive nature of inputting routine information such as contact information,” says Lauren Boas Hayes, Senior Advisor for Technology & Innovation, at CISA.

Shortly after the announcement, Security Intelligence reported on how the portal was designed and how it differs from other cyber incident reporting structures. We noted that CISA’s biggest advantage was its ability to assist the reporting organization with response and remediation.

“Any organization experiencing a cyberattack or incident should report it — for its own benefit and to help the broader community. CISA and our government partners have unique resources and tools to aid with response and recovery, but we can’t help if we don’t know about an incident,” said CISA Executive Assistant Director for Cybersecurity Jeff Greene in a formal statement covering the portal’s announcement.

Four months later

Since the announcement in August, a lot has happened. There was a presidential election, and a new administration will take charge on January 20. The current CISA director and other political appointees will step down. The agency’s future is uncertain as of this writing, particularly regarding who will oversee it and whether its functions will be divided across different federal departments. Still, it is expected that its work will continue.

Before these changes occur, we wanted to check in with CISA to follow up on the portal’s progress and what the future might look like.


Long history of collecting cyber incident reports

CISA was created in 2018, but federal agencies have been collecting cyber incident reports for decades.

“The launch of the Incident Reporting Portal is a significant step forward for CISA’s ability to collect operationally relevant data from reporters in a system which is more usable for reporters,” says Hayes. “The vision for the Incident Reporting Portal is for CISA’s Incident Reporting Portal to continue to enhance the functionality of the system to enable entities to share submitted reports with colleagues or clients to facilitate more effective third-party reporting, communicate directly with CISA, and access information and services relevant to the reporter.”

The portal is expected to make compliance with the Cyber Incident Reporting for Critical Infrastructure Act of 2022 easier. This act will “require CISA to coordinate with Federal partners and others on various cyber incident reporting and ransomware-related activities” across the 16 sectors, agencies and industries deemed “vital to the health, economy and security of the community or region.”

Hayes adds that while reporting under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 will not be required until the Final Rule goes into effect, the agency encourages critical infrastructure owners and operators to voluntarily share information on cyber incidents prior to that date to help prevent other organizations from becoming victims of similar incidents.

“Sharing information allows us to work with our full breadth of partners to help prevent attackers from compromising other victims using the same techniques,” says Hayes. “Sharing information can provide insight into the scale of an adversary’s campaign.”

Why reporting is vital to overall cybersecurity

While reporting cyber incidents to the portal is voluntary at the moment, all organizations are encouraged to share the information. If they feel the need, they can do so anonymously. As cyberattacks and nation-state threats become more sophisticated and increasingly target critical infrastructure industries, sharing this information with CISA allows the agency to help other organizations prepare for emerging threats and implement preventive measures before the damage is done.

“Isolating cyberattacks and preventing them in the future requires the coordination of many groups and organizations,” CISA explained. “By rapidly sharing critical information about attacks and vulnerabilities, the scope and magnitude of cyber events can be greatly decreased.”

And it isn’t just CISA that uses this information. According to the U.S. Government Accountability Office (GAO), 14 federal agencies are responsible for protecting critical infrastructure from cyberattacks, many in unexpected ways. For example, TSA, which handles airport security screening, is also responsible for safeguarding the country’s gasoline pipelines.

“Entities representing critical infrastructure owners and operators told us there are great benefits in getting information about threats from federal agencies,” the GAO reported.

What comes next

Despite the changing presidential administration, CISA is moving forward. It is planning a future designed to keep critical infrastructure safe from cyber threats, which, in turn, will provide a layer of protection for the nation’s citizens and businesses.

“Sharing information allows us to work with our full breadth of partners so that the attackers can’t use the same techniques on other victims and can provide insight into the scale of an adversary’s campaign,” Jeff Greene told Federal News Network. “CISA is excited to make available our new portal with improved functionality and features for cyber reporting.”

As for the Incident Reporting Portal’s future, Hayes says, “In the future, we are planning to implement additional features that will take time to develop and incorporate user feedback. Our user experience team is actively working to get feedback on how we can improve the system over time.”

The post CISA’s cyber incident reporting portal: Progress and future plans appeared first on Security Intelligence.
