If you are the kind of person who does not pay much attention to technology, you might assume that advances have slowed down in recent years.

After all, every phone upgrade only seems to come with marginal improvements, and the same is broadly true for your TV, laptop, and most other personal devices, writes Zaakir Mohamed, director and head of corporate investigations and forensics at CMS South Africa.

You may have heard about some exciting advances in augmented and virtual reality (AR and VR) but they’re nowhere near becoming as ubiquitous as the smartphones we so depend on.

Those with a closer eye on the space will, however, tell you that technology is changing faster than ever. A good example of this is artificial intelligence (AI) and more specifically generative AI. Capable of producing text, images, videos, and other forms of data using generative models, typically in response to prompts, generative AI has gone from something experimental just a few years ago to a technology that hundreds of millions of people use daily.

If you want to understand how rapid the adoption of generative AI tools has been, you only need to look at how quickly ChatGPT, the most high-profile tool in the sector, has grown. It took just five days to reach 1-million users and two months to reach 100-million.

As much as these kinds of technological advances offer incredible opportunities across a broad range of sectors, they also come with risks. Nowhere is that risk more apparent than in the field of cybersecurity, which is already a challenge for most organisations and is only going to become more difficult.

Accelerating cybercriminals, additional attack avenues

In part, that is because cybercriminals are alert to technological advancements and are often among the first to use new technologies to enhance their own capabilities. That is as true for AI as it has been for any other nascent technology.

Cybersecurity firms have, for example, already identified a trend of attackers using AI to bolster their capabilities. One way they do so is by using generative AI tools to create more advanced malicious code. Another is to use it to write more convincing phishing emails, increasing the number of people who fall for them.

Upskilling and education remain the best defences

It is not only attackers' use of AI that creates risk; employees' use of these tools can expose organisations too. Imagine a scenario where employees are uploading proprietary company information or sensitive customer data to tools like ChatGPT. If that data gets leaked or compromised because a cybercriminal gains access to an employee's ChatGPT account, the legal and financial ramifications could be enormous.

It is critical, therefore, that organisations stay on top of the latest technological developments and understand what risks they present. And while they can, and should, use the best available technological tools to combat cyberattacks that result from the use of those new technologies, there are other, more effective defensive strategies they can adopt too.

Education and upskilling, for example, have consistently been shown to be among the most effective defences against cybercrime. Cyberattacks can happen at any time, and having staff who are trained and ready to recognise and respond to threats such as ransomware and phishing, and to follow good practices such as sound password management, is essential to protecting any organisation.

Of course, it is also important that organisations update their policies in line with new technologies. It is equally important, however, that they do not ban the use of those new technologies. With the current rate of technological change, adopting a wait-and-see approach risks leaving them behind.

Accepting and mitigating risks

Ultimately, if any organisation is to reap the benefits and opportunities that come with new technologies, it must accept that there will be risks too.

With the right approach, however, those risks can be mitigated. To do so, and to avoid legal, financial, and reputational damage, organisations must use all the available tools, with a particular focus on education and upskilling.