The extraordinary arrival of the AI-powered chatbot ChatGPT has been the talk of the digital town, but is it really so good that nobody can tell the difference between man and machine?

ChatGPT hurtled to one million users in just five days in November 2022. By January 2023, the AI-powered chatbot was attracting around 10 million users a day. The platform, capable of answering questions, doing research, helping with some writing tasks, and resolving code issues, is not perfect: it is limited both in its access to information (it only has knowledge of events up to the end of 2021) and in its skill set. It didn’t, for example, know which came first – the chicken or the egg – even though science already has an answer.

It does, however, raise very real security concerns. Can people tell the difference between an AI-generated conversation and a human one?

“While today’s technology does not scare me that much, even though ChatGPT is all the rage, I do believe we are in a paradigm shift,” explains Roger Grimes, data-driven defence evangelist at KnowBe4. “ChatGPT already demonstrated in its first generation that it could be extraordinary – it recently passed the US Medical Licensing Exam (USMLE) that physicians take after four years of medical school. The big companies have realised that the ground beneath them has changed and they will be spending billions on implementing AI solutions.”

Yes, ChatGPT is a game-changer, he says, but it is not actually presenting anything new, nor does it have original or creative ideas the way a human being does. It is an intelligent data scraper. As Grimes points out, “a human can look at a cloud in the sky, decide it reminds them of something and then instantly solve a problem that was on their mind. AI cannot do that – it cannot laterally shift its thinking to resolve an issue as a person can.”

Humans are capable of massive parallelism and of connecting completely unrelated concepts. There are plenty of stories about inventions that grew out of mistakes, perhaps the most famous being penicillin, which Alexander Fleming discovered when mould accidentally contaminated a bacterial culture. The mistake solved a completely different problem from the one being studied. A computer would look at the penicillin “mistake” and write it off as just that – a mistake. It does not have the ability to look beyond that and see its potential.

“However, even though AI is not at the level of the human on multiple levels, there are concerns when it comes to cybersecurity,” says Grimes. “Malware is already becoming more and more sophisticated – it can identify vulnerabilities associated with a specific device or software and then apply that knowledge to its attack vectors.

“Once it has penetrated a system, it can do the things a hacker used to do, like inventory the network and find financial loopholes. Looking ahead, it is clear that AI is going to be used to recreate criminality and do it faster and more broadly than humans can.”

Fortunately, AI can also be used to mitigate these threats. Defensive AI tools are designed to actively hunt for threats on a network or device and to neutralise them automatically. They can also alert users to vulnerabilities, remind them that their machines need patching, and fix simple issues, such as misconfigurations, that would otherwise go unnoticed. Good bots will roam networks to protect against the bad bots, and the winner will be the bot with the best algorithm.
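To make the idea concrete, here is a minimal sketch (in Python) of the simplest layer of such a tool: a script that compares a device’s configuration against a hardening baseline and flags any drift. The baseline rules and the example device settings below are hypothetical; real defensive tools layer machine-learning detection and threat intelligence on top of deterministic checks like this.

```python
# Minimal illustrative sketch of an automated misconfiguration check.
# The baseline below is hypothetical, not a real hardening standard.
from dataclasses import dataclass


@dataclass
class Finding:
    setting: str   # which configuration key drifted
    actual: str    # what the device currently reports
    expected: str  # what the baseline requires


# Hypothetical hardening baseline: setting -> required value.
BASELINE = {
    "ssh_password_auth": "no",
    "firewall_enabled": "yes",
    "auto_patching": "yes",
}


def scan(config: dict[str, str]) -> list[Finding]:
    """Compare a device's settings against the baseline and report drift."""
    return [
        Finding(key, config.get(key, "<missing>"), expected)
        for key, expected in BASELINE.items()
        if config.get(key) != expected
    ]


if __name__ == "__main__":
    # Example device with one unsafe setting and one missing key.
    device = {"ssh_password_auth": "yes", "firewall_enabled": "yes"}
    for finding in scan(device):
        print(f"ALERT: {finding.setting} is {finding.actual!r}, "
              f"expected {finding.expected!r}")
```

A production tool would pull the live configuration from each device, raise alerts through a monitoring system and, where safe to do so, remediate the drift automatically rather than just printing it.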

“This hunt for the perfect AI will create jobs for people who are trained at making really good AI algorithms,” says Grimes. “In fact, it is likely that the parents of the future will be the ones who brag about their kid’s great ‘algo’ – something that could become as valuable as a commodity on the stock market.”

Looking ahead, it is still too soon to label AI “powerful” or “intelligent” enough to be mistaken for a human. This was demonstrated by the recent use of an AI system designed to detect intruders at a marine base. To test the system’s capabilities, the base ran a competition to see how many Marines could fool the robot, and many of them did; the AI was unable to recognise someone doing backflips or wearing a cardboard box on their head.

“What this shows us is that AI may seem super intelligent and scary, but it is really just learning about past human behaviour and trying to implement it going forward,” says Grimes. However, the point at which AI can move beyond merely repeating what humans have already said may be within reach in the next five to 10 years.

“It will probably happen within the next generation of AI, if it has not happened already.”

But will AI ever be able to mimic human thought? Perfect its mannerisms? Achieve the ability of massively parallel thinking? “Maybe, but we’re nowhere near that today.”