The concept of the AI Singularity, the point at which machine intelligence surpasses human intelligence and escapes human control, is a cause for concern among several prominent researchers and thought leaders.
While the concerns may seem worlds away for most South Africans, they are real and South African universities should play a part in investigating this new realm, says Johan Vorster, head of the faculty of ICT at The Independent Institute of Education (The IIE).
“Developments in AI and their potential risks are as relevant to people in South Africa as they are to people around the world,” he says. “As AI becomes more integrated into our daily lives, it will likely significantly impact the economy, the labour market and society as a whole.”
Vorster says recent developments will particularly impact South Africa’s large and growing technology sector, which is developing and investing in AI applications for various purposes, including sales and marketing, healthcare and education.
“AI has already led to significant changes in these and many other industries, and that change is accelerating in ways that are not entirely predictable – hence the concerns being raised globally, at the highest levels, about AI’s rapid, exponential and largely unpredictable development.”
He points out that the late Professor Stephen Hawking feared that AI could essentially become a “new form of life that will outperform humans”. Hawking also warned that AI would decimate middle-class jobs, and called for an outright ban on the development of AI agents for military use.
Recently, personalities like Tesla, Twitter and SpaceX boss Elon Musk and Microsoft co-founder Bill Gates have echoed Hawking’s sentiments.
What is the issue?
There are many ways to describe the AI Singularity but, in essence, it refers to the point in time when the development of intelligent systems and machines becomes uncontrollable.
At this point, artificial intelligence will overtake the brainpower of humans and will be able to replicate and evolve on its own.
Opinions are divided about when this will happen, with forecasts ranging from some time within this decade to a century from now, notes Vorster.
How worried should we be?
“AI is already part of everyday lives in ways that we don’t usually notice, but it already touches many parts of our daily routines,” says Vorster. “We see AI at work when we do online shopping, when we see targeted ads while using apps on our phones, when we use our GPS apps to navigate the complexities of peak time traffic during load-shedding, when we get music and movie recommendations, and when we use our digital assistants.”
The recent emergence of chatbots has, however, brought AI much more into the spotlight, and its unprecedentedly rapid evolution, much of it behind closed doors, is raising alarm bells.
“This is understandable, given that language and conversation are very close to our experience of being human, and something that we equate closely to our human intelligence. Once AI starts talking to us, we can immediately recognise how it approaches human intelligence. This is probably why we have seen such a huge public reaction to ChatGPT and other AI applications in recent times,” Vorster adds.
However, there are now growing fears that we are approaching AI systems that can outperform humans at general-purpose tasks, and even begin performing tasks their creators did not anticipate.
A recent open letter signed by prominent researchers and industry leaders cautions: “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. The authors call on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”, as AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control”.
If a moratorium can be put in place, it would allow AI companies to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt”.
The duty of academia
While the response to the call for a pause remains to be seen, Vorster says the duty of academia right now could not be clearer: there is an urgent need for South African universities to invest in the field and equip a new generation of graduates with a deep understanding of AI systems and their ethical, legal and social implications.
“As AI systems become more prevalent, there is an urgent and growing need for professionals who can design and develop AI systems that are fair, transparent, and accountable.”