Kaspersky’s experts have revealed the profound influence of AI on the 2023 cybersecurity landscape in their latest report. Adopting a multifaceted approach, the analysis explores the implications of AI, focusing on its use by defenders and regulators – and separately assessing its potential exploitation by cybercriminals.

Amid the rapid pace of technological progress and societal shifts, the term “AI” has firmly positioned itself at the forefront of global conversations. With the increasing spread of large language models (LLMs), a surge in security and privacy concerns directly links AI with the cybersecurity world.

Kaspersky researchers illustrate how AI tools helped cybercriminals in their malicious activity in 2023, while also showcasing the potential defensive applications of the technology. The company’s experts also reveal how the landscape of AI-related threats might evolve in the future, including:

More complex vulnerabilities

As instruction-following LLMs are integrated into more consumer-facing products, new, complex vulnerabilities will emerge at the intersection of probabilistic generative AI and traditional deterministic technologies, expanding the attack surface that cybersecurity professionals have to secure. This will require developers to study new security measures, such as requiring user approval for actions initiated by LLM agents, as sketched below.
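
A minimal sketch of that approval gate, assuming a hypothetical agent whose proposals are modelled by an illustrative Action type; none of the names below refer to any specific product or API:

```python
from dataclasses import dataclass


@dataclass
class Action:
    """An action proposed by an LLM agent (illustrative type)."""
    name: str        # e.g. "send_email" or "delete_file"
    arguments: dict  # parameters the model proposed for the action


def require_approval(action: Action) -> bool:
    """Show the proposed action to the user and ask for explicit consent."""
    print(f"The assistant wants to run: {action.name}({action.arguments})")
    return input("Allow? [y/N] ").strip().lower() == "y"


def execute(action: Action) -> None:
    # Placeholder for the real side effect (API call, file operation, ...).
    print(f"Executing {action.name} with {action.arguments}")


def handle_llm_action(action: Action) -> None:
    # The gate: nothing the model proposes runs without user consent.
    if require_approval(action):
        execute(action)
    else:
        print("Action rejected by the user.")


if __name__ == "__main__":
    handle_llm_action(Action("send_email", {"to": "alice@example.com"}))
```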

A comprehensive AI assistant for cybersecurity specialists

Red teamers and researchers are leveraging generative AI to build innovative cybersecurity tools, which could eventually lead to an assistant powered by an LLM or machine learning (ML). Such a tool could automate red-teaming tasks and offer guidance based on the commands executed in a pentesting environment, along the lines of the sketch below.
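
One way such an assistant could be wired up, assuming an authorised lab environment and a placeholder ask_llm() function standing in for whatever model backend is used (no specific vendor API is implied):

```python
import subprocess


def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to the chosen LLM and return its reply."""
    return "A suggested next step would appear here."


def run_command(cmd: str) -> str:
    """Run a shell command inside the authorised pentesting lab."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr


def assist(history: list[str], cmd: str) -> str:
    """Execute a command, append it to the session log, ask for guidance."""
    output = run_command(cmd)
    history.append(f"$ {cmd}\n{output}")
    prompt = (
        "You are assisting an authorised penetration test.\n"
        "Session so far:\n" + "\n".join(history) +
        "\nSuggest the next step and explain the reasoning."
    )
    return ask_llm(prompt)


if __name__ == "__main__":
    session: list[str] = []
    print(assist(session, "whoami"))
```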

Neural networks will be increasingly used to generate visuals for scams

In the coming year, scammers may amplify their tactics using neural networks, leveraging AI tools to create more convincing fraudulent content. With the ability to effortlessly generate convincing images and videos, malicious actors raise the risk of escalating cyberthreats related to fraud and scams.

AI will not become a driver for groundbreaking change in the threat landscape in 2024

Despite the above trends, Kaspersky experts remain skeptical about AI changing the threat landscape significantly any time soon. While cybercriminals do adopt generative AI, so do cyberdefenders, who will use the same or even more advanced tools to test and enhance the security of software and networks, making it unlikely that AI will drastically alter the attack landscape.

More AI-related regulatory initiatives, with the private sector’s contribution

As this fast-growing technology develops, it has become a matter of policymaking and regulation, and the number of AI-related regulatory initiatives is set to rise. Non-state actors, such as tech companies, can provide invaluable insights for discussions on AI regulation on both global and national platforms, given their expertise in developing and utilising AI.

Watermarks for AI-generated content

More regulations, as well as service provider policies, will be required to flag or identify synthetic content, with the latter continuing to invest in detection technologies. Developers and researchers, for their part, will contribute methods of watermarking synthetic media for easier identification and provenance; the toy example below shows the basic embed-and-detect principle.
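
As a toy illustration only (production provenance schemes, such as watermarks applied statistically at generation time, are far more robust), a least-significant-bit watermark can hide a short provenance tag in an image's pixels. The TAG string and function names are illustrative; the example assumes the Pillow library and a lossless format such as PNG:

```python
from PIL import Image

TAG = "AI-GENERATED"  # illustrative provenance tag


def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide the tag in the lowest bit of the first few pixel channels."""
    bits = "".join(f"{b:08b}" for b in tag.encode())
    flat = [c for px in img.convert("RGB").getdata() for c in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)  # overwrite the lowest bit
    out = Image.new("RGB", img.size)
    out.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return out


def detect(img: Image.Image, tag: str = TAG) -> bool:
    """Read the lowest bits back and compare them with the expected tag."""
    n = len(tag.encode()) * 8
    flat = [c for px in img.convert("RGB").getdata() for c in px]
    bits = "".join(str(c & 1) for c in flat[:n])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, n, 8)) == tag.encode()


if __name__ == "__main__":
    marked = embed(Image.new("RGB", (64, 64), "white"))
    print(detect(marked))  # True, provided the image is stored losslessly
```

A note on the design: this naive scheme breaks under JPEG compression or resizing, which is one reason more resilient detection technologies remain an area of active investment.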

“AI in cybersecurity is a double-edged sword,” says Vladislav Tushkanov, security expert at Kaspersky. “Its adaptive capabilities fortify our defences, offering a proactive shield against evolving threats. However, the same dynamism poses risks as attackers leverage AI to craft more sophisticated assaults. Striking the right balance and ensuring responsible use without oversharing sensitive data is paramount in securing our digital frontiers.”