Cyber threats have become increasingly sophisticated thanks to the use of artificial intelligence (AI), and attacks can now be executed at a speed and scale beyond anything a human attacker could achieve.

By Ivaan Captieux, information security consultant at Galix

Add in machine learning (ML), and attacks can now adapt and evolve in real time, becoming more sophisticated and stealthier than ever. Traditional security measures alone are simply no longer effective; offensive AI needs to be countered with defensive AI.

More than that, however, we need to understand that humans remain the weakest link in any security chain, and awareness of threats and security measures is a critical component in any robust and resilient cyber defence strategy.

The human element

When it comes to social engineering, AI has changed the game for bad actors. Attackers leverage AI and ML tools to analyse social media profiles, online activity, and other publicly available information to create increasingly tailored and convincing phishing messages. This vastly increases the likelihood of success.

AI can be used in several ways to counter this, from fully automated firewalls and policy management to segmentation, firmware updates, and more, but it is not a foolproof solution. Humans remain essential links in the security chain.

Education and awareness are critical. We need to be mindful of what personal information we share and place online in the public domain in order to safeguard our own privacy. Regular training and awareness programmes can educate people on cyberattack techniques and on the best practices needed to build a security-driven culture.

In addition, the human element remains essential in verifying what AI tools are doing; while AI can speed up processes and automate manual tasks, people provide the contextual understanding and grasp of nuance that AI struggles with. People are also critical in ensuring that ethical considerations are taken into account when building AI models and when using and processing data.

To counter today’s threats, it has become vital to create ‘human-in-the-loop’ defence models, in which AI works in tandem with human analysts to detect and respond to threats.
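To make this concrete, below is a minimal sketch of how such a model could be wired up, assuming a hypothetical ML classifier that assigns each event a risk score between 0 and 1: high-confidence detections are blocked automatically, clearly benign events are allowed through, and the ambiguous middle band is queued for a human analyst to decide. All names, thresholds, and scores here are purely illustrative, not a reference to any specific product.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not product defaults): scores above
# AUTO_BLOCK_THRESHOLD are handled by the AI alone; scores below
# AUTO_ALLOW_THRESHOLD pass through; the band in between goes to a human.
AUTO_BLOCK_THRESHOLD = 0.90
AUTO_ALLOW_THRESHOLD = 0.20

@dataclass
class Event:
    source_ip: str
    description: str
    risk_score: float  # assumed output of an ML model, in [0.0, 1.0]

def triage(event: Event, analyst_queue: list[Event]) -> str:
    """Route an event: automate the clear-cut cases, escalate the rest."""
    if event.risk_score >= AUTO_BLOCK_THRESHOLD:
        return "blocked"          # AI acts autonomously on high-confidence threats
    if event.risk_score <= AUTO_ALLOW_THRESHOLD:
        return "allowed"          # clearly benign traffic passes through
    analyst_queue.append(event)   # human in the loop: analyst makes the final call
    return "escalated"

if __name__ == "__main__":
    queue: list[Event] = []
    events = [
        Event("203.0.113.7", "known C2 beacon pattern", 0.97),
        Event("198.51.100.4", "unusual login time, known device", 0.55),
        Event("192.0.2.10", "routine software update check", 0.05),
    ]
    for e in events:
        print(e.source_ip, "->", triage(e, queue))
    print(f"{len(queue)} event(s) awaiting human review")
```

The two thresholds encode how much autonomy the AI is granted: narrowing the escalation band reduces analyst workload but increases the risk of an unreviewed mistake, which is precisely the trade-off that human oversight exists to manage.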

Collaboration is critical

Managed security service providers (MSSPs) can be an invaluable asset for businesses, providing guidance on best practices and industry standards related to AI. This can help organisations understand the ethical implications of AI in security and develop appropriate strategies to address ethical risk, including assessing fairness and transparency in the design of both algorithms and processes.

MSSPs can also assist by providing education and training, documenting and communicating processes, and implementing and managing solutions. This includes how algorithms are selected, trained, and deployed, as well as how data is collected, processed, and used for analysis.

Working with MSSPs, in collaboration with regulatory bodies, can help organisations align their security objectives, ensure compliance, and implement ethical practices effectively. Working together is the key to successfully implementing AI, especially as part of a cyber defence strategy.

It is essential to build trust between humans and AI, invest in robust defence systems, and monitor for emerging threats, all of which requires human oversight, with people as the ultimate decision-makers.

By working together, organisations, MSSPs, and regulatory bodies can create collaborative ecosystems that foster trusted solutions, enhance security posture, and mitigate cyber risk effectively.