Kathy Gibson reports – As worldwide excitement about generative artificial intelligence (GenAI) reaches fever pitch, it’s important for governments to focus on how the technology can be used to improve people’s lives.
Omar Sultan Al Olama, Minister of State for Artificial Intelligence in the United Arab Emirates, believes that government needs to understand its role as a technology enabler while guiding its societal agenda.
“In the UAE, we think about the long-term implications of technology,” he tells delegates to Gitex Global Dubai.
Not only has the UAE invested heavily in AI technologies over the last decade, it has also been quick to show results, he adds. For instance, within the last 18 months, UAE-based companies have launched several large language models (LLMs) that rank among the most-used in the world.
A point of note, Al Olama says, is that he has been the country's Minister of AI for about six years, guiding the technology's development in the region for all of that time.
The issue of AI governance is a controversial one and Al Olama warns that it needs to be carefully negotiated.
“In most cases, governments want to be in charge of regulating things,” he says. “But too often bureaucrats, if they don’t understand something, restrict it. Instead, we need to think more like entrepreneurs.”
And balance is important, he adds. “As governments, we need an approach that balances heavy-handedness where technology could be detrimental – in the case of deep fakes, for instance – with nurturing the areas where GenAI and LLMs have had a positive impact.”
The perfect world, he adds, would be one where technology regulations are government-led, private sector-driven and multilaterally endorsed.
“This means we set certain rules to make sure the direction of innovation aligns with our role as a government – and that role is not to maximise profits, but to improve quality of life.”
The risks that AI can pose include not only the creation of deep fakes, but AI-driven weaponry and even the existential threat of artificial general intelligence (AGI), which some believe will make human beings obsolete.
“AGI is not a singular country’s problem,” Al Olama points out. “It could be created by anyone, anywhere in the world, so we need to be having realistic conversations. If, as some believe, it is five years away, the ramifications for us all are huge.”
Deep fakes are a more immediate concern, he says. But there are already tools available to help people discern real from fake news and images, so we need to encourage their widespread use.
“We see that people are not trusting this technology as much as they trusted others, which is good: people don’t just believe the content they see, but ask questions.”
The issue of AI-driven autonomous weapons can be classified under the broad heading of “bad AI”, Al Olama says. “I believe the three big challenges facing humanity are climate change, pandemics, and bad AI. None of these is confined to a geographic boundary and all require people around the world to co-operate with one another.”
But bad AI in itself is not what keeps Al Olama awake at night. That particular nightmare is biotechnology. “I worry about someone creating something that we cannot see, that spreads quickly and, by the time we catch it, it is too late.”
Creating AI that can cause significant damage in the world is still too expensive for most actors, but the technology could be used as a tool to help create other, more insidious threats.
“As governments, we can’t afford to be absolutely negative or absolutely positive: we need to be realists,” Al Olama concludes. “And we have to have constant dialogues to ensure we are not ignorant when making decisions – ignorance never leads to good results.”