Mark Davison reports from Kaspersky’s Cyber Security Weekend in Kuala Lumpur – When ChatGPT launched in late 2022 it put artificial intelligence firmly in the spotlight, and the rise of generative AI (GenAI) has only intensified that glare – so much so that some in the IT industry speculate that AI could have a bigger global impact than the advent of the Internet.

And, just as the Internet and the World Wide Web have come under scrutiny over the years for misuse, the same is happening with AI, as prominent individuals and politicians call for some form of “regulation” or “ethics” now that the technology’s potential for both “good and evil” is becoming apparent.

Political blocs like the European Union and the African Union are already investigating the prospect of regulating AI. But should governments be responsible for regulating technology that, in as little as a few years, could have a profound impact on the human race?

Genie Gan, head of Government Affairs & Public Policy for Asia-Pacific, Japan, the Middle East, Turkey and Africa at Kaspersky, believes they should.

“I think there is a special role for governments to play in regulating AI,” Gan says. “The question is not who we’re regulating or sanctions against vendors and so forth. It’s not that. It’s about why we regulate. Because of the sheer impact on individuals, it’s important that it falls in a regulatory space.

“Very basically, what is AI?” Gan explains. “It uses data sets, machine learning, training and so on, then goes through a series of algorithms and calls up focused results.

“At a fundamental level, we’re talking about data … data that belongs to everyone,” she continues. “It is no different from quantum computing or the cloud, and as long as data is involved and an individual is concerned, it’s something that we have to regulate with some form of structure or policy framework because of the value to you as an individual.

“And it is important that governments take the lead,” Gan says.

But arguably more important is the speed at which governments take that lead and introduce some form of regulation. Because, in the meantime, unscrupulous cybercriminals are taking full advantage of AI as another, very powerful, tool in their arsenal.

Gan says we can’t continue to waste time on the issue. “We don’t have time,” she says. “That’s because cyber attackers already have the capacity and are making use of AI technology to launch attacks. AI is developing so fast … and we’re dealing with cybercriminals who, remember, had no hesitation in taking advantage of the coronavirus pandemic to steal medical records for ransomware opportunities.

“Now, with AI, it is the same thing,” Gan says. “Cybercriminals see it as just another opportunity and they’re on the attack.”