Human analysts can no longer effectively defend against the increasing speed and complexity of cybersecurity attacks – the amount of data is simply too large to screen manually.
Generative AI (GenAI), the most transformative tool of our time, enables a kind of digital jiu-jitsu: it lets companies turn the flood of data that threatens to overwhelm them into a force that makes their defences stronger.
Business leaders seem ready for the opportunity at hand. In a recent survey, CEOs said cybersecurity is one of their top three concerns – and they see generative AI as a lead technology that will deliver competitive advantages.
Generative AI brings both risks and benefits, but here are three ways it can bolster cybersecurity:
Begin with developers
First, give developers a security copilot. Everyone plays a role in security, but not everyone is a security expert, so this is one of the most strategic places to begin.
The front-end, where developers write software, is the best place to start bolstering security. An AI-powered assistant trained as a security expert can help ensure their code follows security best practices.
Fed previously reviewed code, the assistant gets smarter every day, learning from prior work to guide developers.
To give users a leg up, Nvidia is creating a workflow for building such copilots or chatbots. This particular workflow uses components from Nvidia NeMo, a framework for building and customising large language models (LLMs).
Whether users customise their own models or use a commercial service, a security assistant is just the first step in applying generative AI to cybersecurity.
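To make the idea concrete, here is a minimal sketch of such an assistant. It is not Nvidia's NeMo workflow, just the general pattern: a proposed code change is sent to a chat-style LLM with a security-reviewer prompt. The OpenAI client and model name below are stand-ins for whatever endpoint a company actually deploys.

```python
# Minimal sketch of a security copilot: send a proposed change to an LLM
# with a security-reviewer system prompt. The OpenAI client and model name
# are stand-ins for whatever service (e.g. a NeMo-served model) you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security reviewer. For the code below, flag injection "
    "risks, unsafe deserialisation, hard-coded secrets and missing "
    "input validation, citing the offending lines."
)

def review_code(diff: str) -> str:
    """Return the model's security review of a proposed change."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any instruction-tuned model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": diff},
        ],
        temperature=0.0,  # deterministic, review-style output
    )
    return response.choices[0].message.content

print(review_code('query = "SELECT * FROM users WHERE id=" + user_id'))
```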
An agent to analyse vulnerabilities
Second, let generative AI help navigate the sea of known software vulnerabilities. At any moment, companies must choose among thousands of patches to mitigate known exploits. That’s because every piece of code can have roots in dozens, if not thousands, of different software branches and open-source projects.
An LLM focused on vulnerability analysis can help prioritise which patches a company should implement first. It’s a particularly powerful security assistant because it reads all the software libraries a company uses, as well as its policies on the features and APIs it supports.
To test this concept, Nvidia built a pipeline to analyse software containers for vulnerabilities. The agent identified areas that needed patching with high accuracy, speeding the work of human analysts by up to 4x.
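Nvidia has not published that pipeline here, but the shape of the idea is straightforward to sketch: ground the model in the software a company actually runs, then let it rank what is left. In the hypothetical sketch below, the software bill of materials (SBOM), CVE list, model name and client are all illustrative stand-ins.

```python
# Hypothetical sketch: ground an LLM's patch prioritisation in the
# company's own software bill of materials (SBOM), so it only reasons
# about CVEs that actually apply. The SBOM and CVE list are toy data;
# in practice they would come from a scanner and a package registry.
from openai import OpenAI

client = OpenAI()

sbom = {"openssl": "3.0.1", "log4j-core": "2.14.1", "requests": "2.31.0"}
cves = [
    {"id": "CVE-2021-44228", "package": "log4j-core", "cvss": 10.0},
    {"id": "CVE-2022-3602",  "package": "openssl",    "cvss": 7.5},
    {"id": "CVE-2023-0001",  "package": "numpy",      "cvss": 5.0},
]

# Keep only vulnerabilities in packages the company actually ships.
relevant = [c for c in cves if c["package"] in sbom]

prompt = (
    "Rank these vulnerabilities for patching, considering severity and "
    f"our deployed versions {sbom}. Justify the order briefly:\n{relevant}"
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```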
The takeaway is clear: It’s time to enlist generative AI as a first responder in vulnerability analysis.
Fill the data gap
Finally, use LLMs to help fill the growing data gap in cybersecurity. Users rarely share information about data breaches because that information is so sensitive. That makes it difficult to anticipate exploits.
Enter LLMs. Generative AI models can create synthetic data to simulate never-before-seen attack patterns. Such synthetic data can also fill gaps in training data so machine-learning systems learn how to defend against exploits before they happen.
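As a rough illustration of what that generation step could look like, the sketch below asks a general-purpose LLM to synthesise labelled spear phishing examples, anticipating the workflow described in the next section. The prompt, model name and client are assumptions, not a published Nvidia pipeline.

```python
# Illustrative sketch: use an LLM to synthesise labelled spear phishing
# examples to augment a sparse training set. Prompt, model name and
# output handling are assumptions, not a published Nvidia workflow.
from openai import OpenAI

client = OpenAI()

def synth_phish(n: int, persona: str) -> list[str]:
    """Generate n synthetic spear phishing emails aimed at a persona."""
    emails = []
    for _ in range(n):
        r = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Write a realistic spear-phishing email targeting "
                    f"{persona}. Output only the email body."
                ),
            }],
            temperature=1.0,  # high temperature -> varied attack styles
        )
        emails.append(r.choices[0].message.content)
    return emails

# Label the synthetic messages and mix them into the real training data.
dataset = [(e, "phish") for e in synth_phish(5, "a finance controller")]
```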
Staging safe simulations
Don’t wait for attackers to demonstrate what’s possible – create safe simulations to learn how they might try to penetrate corporate defences.
This kind of proactive defence is the hallmark of a strong security program. Adversaries are already using generative AI in their attacks. It’s time users harness this powerful technology for cybersecurity defence.
To show what’s possible, another AI workflow uses generative AI to defend against spear phishing – the carefully targeted bogus emails that cost companies an estimated $2,4-billion in 2021 alone.
This workflow generated synthetic emails to make sure it had plenty of good examples of spear phishing messages. The AI model trained on that data learned to understand the intent of incoming emails through natural language processing capabilities in Nvidia Morpheus, a framework for AI-powered cybersecurity.
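Morpheus's GPU-accelerated pipelines are not reproduced here; as a rough stand-in for that training step, the sketch below fits a small text classifier on a mixed real-and-synthetic corpus. The toy emails and the scikit-learn pipeline are illustrative only.

```python
# A stand-in for the Morpheus NLP step: train a simple text classifier
# on the mixed real + synthetic corpus. Morpheus uses GPU-accelerated
# pipelines; scikit-learn is used here only to keep the sketch small.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: in practice, thousands of real and synthetic emails.
emails = [
    "Urgent: wire $40,000 to this account before noon, signed CFO",
    "Hi team, the quarterly report is attached for review",
    "Your password expires today, verify it at this external link",
    "Lunch menu for Friday's offsite is now available",
]
labels = ["phish", "benign", "phish", "benign"]

# TF-IDF features + logistic regression: a minimal intent classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Please confirm the invoice and send payment now"]))
```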
The resulting model caught 21% more spear phishing emails than existing tools.
Wherever users choose to start this work, automation is crucial given the shortage of cybersecurity experts and the thousands upon thousands of users and use cases that companies need to protect.
These three tools – software assistants, virtual vulnerability analysts, and synthetic data simulations – are great starting points for applying generative AI to a security journey that continues every day.
But this is just the beginning. Companies need to integrate generative AI into all layers of their defences.