The AI landscape is changing faster than ever, and this is especially true in cybersecurity. Many organisations in South Africa are now focusing their investments on application security, placing significant emphasis on AI-driven solutions to strengthen their defences.
This trend highlights a growing awareness of AI’s potential benefits, even as the technology itself keeps advancing.
David Roth, chief revenue officer for enterprise America at Trend Micro, and Jeff Pollard, vice-president and principal analyst at Forrester, hosted a webinar to cut through the AI hype. They emphasised a crucial point: while AI and machine learning have long been part of security strategies, the arrival of generative AI (Gen AI) adds a new layer of complexity.
As we dive into the practicalities and challenges of bringing Gen AI into cybersecurity operations, it’s clear that South African security teams face both exciting opportunities and tough hurdles. Here are some key points for consideration highlighted during the webinar.
Be aware of the impact on skills
It’s not just the allure of a ‘more sophisticated new toy’ that’s driving the buzz around generative AI for cybersecurity. Security teams are in desperate need of help – they’re stretched thin, lacking resources, and constantly dealing with evolving threats.
So, it’s no wonder that when Gen AI entered the scene, people started dreaming about fully autonomous security operations centres (SOCs) with Terminator-like malware hunters.
However, the reality is that today’s Gen AI systems aren’t quite ready to run without human oversight. Instead of solving the skills shortage, Gen AI might even create new training challenges in the short term.
Plus, integrating these AI tools into existing workflows takes time, even for seasoned professionals.
Despite these hurdles, there are some really promising uses for Gen AI in security right now. By enhancing what teams can already do, AI can help them achieve better results with less repetitive work. This is especially true in areas like application development and detection and response.
Understand how to achieve quick wins
One quick win for security teams using Gen AI is automating documentation. Creating action summaries, event write-ups, and reports can be tedious and time-consuming, but Gen AI can produce first drafts in seconds. This gives security professionals more time to focus on the incidents themselves.
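As a rough illustration of this kind of automation, here is a minimal sketch in Python, assuming the OpenAI client library and an entirely hypothetical event-record format; a real deployment would draw events from the team’s own SIEM:

```python
# Minimal sketch: drafting an incident write-up with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the event fields are
# hypothetical examples, not a real schema.
from openai import OpenAI

client = OpenAI()

def draft_incident_summary(events: list[dict]) -> str:
    """Turn raw SOC event records into a first-draft write-up.

    The output is a draft only -- an analyst still reviews and
    edits it before it goes into the official record.
    """
    event_lines = "\n".join(
        f"- {e['timestamp']} {e['source']}: {e['detail']}" for e in events
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst assistant. Summarise the "
                        "events below into a concise incident write-up."},
            {"role": "user", "content": event_lines},
        ],
    )
    return response.choices[0].message.content

# Example usage with made-up events:
# print(draft_incident_summary([
#     {"timestamp": "2024-05-01T09:12Z", "source": "EDR",
#      "detail": "Suspicious PowerShell spawned by winword.exe"},
# ]))
```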
However, strong communication skills are still essential for these roles, and AI-generated reports shouldn’t replace professional growth.
Gen AI can also suggest next steps and pull information from knowledge bases faster than a human. It’s crucial, though, that AI outputs align with organisational needs. If a process has seven steps and the AI suggests only four, a human needs to ensure all steps are followed to meet goals and stay compliant. Skipping steps can lead to serious issues.
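To make that concrete, here is a small sketch of one possible safeguard: checking the AI’s suggested steps against the full playbook before acting on them. The playbook steps and the matching logic are illustrative assumptions, not a real procedure:

```python
# Minimal sketch: checking AI-suggested steps against the full playbook.
# The playbook content and step names are hypothetical examples.

REQUIRED_PLAYBOOK = [
    "isolate affected host",
    "capture volatile memory",
    "collect relevant logs",
    "notify incident commander",
    "eradicate malicious artefacts",
    "restore from known-good backup",
    "file compliance report",
]

def missing_steps(ai_suggested: list[str]) -> list[str]:
    """Return required playbook steps absent from the AI's suggestion."""
    suggested = {step.strip().lower() for step in ai_suggested}
    return [step for step in REQUIRED_PLAYBOOK if step not in suggested]

# The "seven steps required, four suggested" scenario from the text:
ai_output = [
    "Isolate affected host",
    "Collect relevant logs",
    "Eradicate malicious artefacts",
    "Restore from known-good backup",
]

gaps = missing_steps(ai_output)
if gaps:
    print("AI skipped required steps -- human review needed:")
    for step in gaps:
        print(f"  - {step}")
```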
Look out for data gaps that impact AI performance
It’s no secret that Gen AI can help security teams harness the big data opportunity by enabling them to work proactively: spotting changes in the attack surface and running attack path scenarios. While it may not predict exact threats, it can help teams stay ahead of potential issues.
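As a toy example of what spotting such changes can look like, the sketch below diffs two attack surface snapshots to flag newly exposed hosts and ports; the snapshot format is an assumption for illustration:

```python
# Minimal sketch: spotting changes in an external attack surface by
# diffing two scan snapshots. The snapshot format -- a mapping of
# host to set of exposed ports -- is a hypothetical example.

def surface_changes(previous: dict[str, set[int]],
                    current: dict[str, set[int]]) -> list[str]:
    """Report newly exposed hosts and ports since the last snapshot."""
    changes = []
    for host, ports in current.items():
        new_ports = ports - previous.get(host, set())
        if host not in previous:
            changes.append(f"new host exposed: {host} (ports {sorted(ports)})")
        elif new_ports:
            changes.append(f"{host}: newly open ports {sorted(new_ports)}")
    return changes

yesterday = {"web-01": {80, 443}}
today = {"web-01": {80, 443, 8080}, "db-02": {5432}}

for change in surface_changes(yesterday, today):
    print(change)
# web-01: newly open ports [8080]
# new host exposed: db-02 (ports [5432])
```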
However, how well this works depends on how aware an organisation is of its systems and configurations. Gaps in knowledge mean gaps in AI performance, and unfortunately, many organisations still struggle with scattered data and documentation.
Security teams need to focus on good data hygiene and standardised data management. The better your data, the more effective your AI will be.
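What that hygiene can look like in practice, in miniature: the sketch below normalises scattered asset records into one consistent schema and flags incomplete entries before they ever reach an AI tool. The field names and validation rules are hypothetical:

```python
# Minimal sketch: normalising scattered asset records into one schema
# so downstream AI tooling sees consistent, complete data.
# Field names and validation rules are hypothetical examples.

REQUIRED_FIELDS = ("hostname", "owner", "os", "last_patched")

def normalise_asset(record: dict) -> dict:
    """Map inconsistent source fields onto a standard schema."""
    return {
        "hostname": (record.get("hostname") or record.get("host") or "").lower(),
        "owner": record.get("owner") or record.get("team") or "",
        "os": record.get("os") or record.get("operating_system") or "",
        "last_patched": record.get("last_patched") or record.get("patch_date") or "",
    }

def validate(asset: dict) -> list[str]:
    """Return the names of required fields that are missing."""
    return [f for f in REQUIRED_FIELDS if not asset[f]]

raw_records = [
    {"host": "WEB-01", "team": "platform",
     "operating_system": "Ubuntu 22.04", "patch_date": "2024-04-30"},
    {"hostname": "db-02", "owner": "data"},  # incomplete: a knowledge gap
]

for raw in raw_records:
    asset = normalise_asset(raw)
    gaps = validate(asset)
    status = "OK" if not gaps else f"missing {', '.join(gaps)}"
    print(f"{asset['hostname'] or '<unknown>'}: {status}")
```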
Introduce safety measures for shadow AI
Businesses globally are rightly worried about AI leaking sensitive information, whether through unauthorised tools or even approved software that’s been enhanced with AI. In the past, hackers needed to know how to break into systems to get this data, but now, a simple prompt could make it accessible.
Companies need to protect themselves from employees using unauthorised AI tools and ensure that even approved AI tools are used properly. When building their own applications with large language models (LLMs), they must secure the data, the app, the LLM, and the prompts used.
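One small safety measure of this kind, sketched below with illustrative patterns rather than a production-grade data loss prevention policy, is screening prompts for obviously sensitive data before they leave the organisation:

```python
# Minimal sketch: redacting obviously sensitive data from prompts
# before they are sent to an external LLM. The patterns here are
# illustrative; a real deployment would use a proper DLP engine.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SA_ID_NUMBER": re.compile(r"\b\d{13}\b"),  # South African ID format
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

unsafe = "Summarise ticket from thandi@example.co.za, key sk-AbC123xYz7890qrstu"
print(redact_prompt(unsafe))
# Summarise ticket from [REDACTED-EMAIL], key [REDACTED-API_KEY]
```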
These concerns boil down to a few main issues: bring-your-own-AI, enterprise apps, and product security. All require their own safety measures and affect the Chief Information Security Officer’s (CISO) responsibilities, even if the CISO isn’t directly managing these projects.
Don’t get caught unprepared
Think about the early days of cloud and the frenzy over shadow IT apps – there’s a lot to learn from those times. When security teams saw unsanctioned apps as ‘shadow IT’, business leaders called it ‘product-led growth’. Banning them only pushed their use underground, making things worse.
We can’t make that mistake with AI. Now’s the time to craft security-focused AI strategies, get familiar with the tech, and be ready for its big moment. Remember how security teams were caught off guard with the cloud, even with ample warning? AI’s complexity and power mean we just can’t afford to be unprepared this time.
Gen AI hasn’t hit its full potential yet, but it’s already making waves in cybersecurity. While it won’t immediately solve the skills gap, it can significantly ease the burden on security teams. By learning from past experiences with shadow IT and cloud adoption, teams can better equip themselves for AI’s transformative future. The key is preparation and proactive management to harness AI’s true power and keep enterprises secure.