A massive 72,5% of professionals surveyed in South Africa say they use artificial intelligence (AI) tools for work tasks – but only 30% have received training on the cybersecurity aspects of using neural networks, a critical element of protection against AI-related risks ranging from data leaks to prompt injection.

This is among the findings from new Kaspersky research conducted in the Middle East, Türkiye and Africa (META) region, entitled “Cybersecurity in the workplace: Employee knowledge and behaviour”.

The vast majority of survey respondents in South Africa (97%) said they understand what the term “generative AI” means, and for many employees this knowledge is no longer just theoretical: AI tools have become part of their everyday work.

Overall, 72,5% of respondents use AI tools for work, most often to write or edit texts (54,5%) and work e-mails (52,4%), for data analytics (47,9%), and to create images or videos with the help of neural networks (33,8%).

The survey uncovered a serious gap in employee preparedness for AI risks. Almost half (46,5%) of professionals reported receiving no AI-related training. Among those who had training, 33,5% said the focus was on how to use AI tools effectively and create prompts, while only 30% received guidance on the cybersecurity aspects of AI use.

While AI tools that help automate everyday tasks are becoming ubiquitous in many organisations, they often remain part of ‘shadow IT’, used by employees without corporate guidance. Some 67% of respondents said generative AI tools are permitted at their workplace, 24% said these tools are not allowed, while 9% were unsure.

To make employee use of AI clearer and more secure, organisations should implement a company-wide policy governing it. Such a policy can prohibit AI use in specific functions and for certain types of data, regulate which AI tools are provided to employees, and allow only tools from an approved list. The policy should be formally documented, and employees should receive proper training. After establishing these hygiene measures and restrictions, companies should monitor AI usage, identify popular services, and use this information to plan future actions and refine their security measures.
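As a rough illustration of how such an approved list could be combined with the tiered access model described below, consider the following minimal Python sketch, in which access to an AI tool is calibrated to the sensitivity of the data involved. All tool names, tier labels and the is_use_allowed helper are hypothetical examples for illustration only; they are not part of the Kaspersky research or any specific product.

```python
# Minimal sketch of a tiered AI-access policy check.
# Tool names, tiers and logic are hypothetical examples.

# Data-sensitivity tiers, ordered from least to most restricted.
TIERS = ["public", "internal", "confidential"]

# Hypothetical approved-tool list: each tool is cleared
# up to a maximum data-sensitivity tier.
APPROVED_TOOLS = {
    "corporate-llm-assistant": "confidential",  # vetted, internally hosted
    "public-chatbot": "public",                 # cleared for public data only
}

def is_use_allowed(tool: str, data_tier: str) -> bool:
    """Return True if `tool` is approved for data at `data_tier`."""
    max_tier = APPROVED_TOOLS.get(tool)
    if max_tier is None:
        return False  # not on the approved list: treat as shadow IT
    return TIERS.index(data_tier) <= TIERS.index(max_tier)

# Example: a public chatbot may draft marketing copy,
# but may not be used with confidential client data.
print(is_use_allowed("public-chatbot", "public"))        # True
print(is_use_allowed("public-chatbot", "confidential"))  # False
print(is_use_allowed("unapproved-tool", "public"))       # False
```

In practice, rules like these would live in policy documents and enforcement tooling rather than application code; the point is simply that an approved list plus sensitivity tiers can be expressed, applied and audited explicitly.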

“For successful AI implementation, companies should avoid the extremes of a total ban as well as a free-for-all. Instead, the most effective strategy is a tiered access model, where the level of AI use is calibrated to the data sensitivity of each department. Backed by comprehensive training on cybersecurity aspects of AI, this balanced approach fosters innovation and efficiency while rigorously upholding security standards,” says Chris Norton, GM for sub-Saharan Africa at Kaspersky.