The European Union has recently proposed a new law to regulate artificial intelligence, the EU Artificial Intelligence Act.
It reminds me of the EU’s GDPR legislation, a transnational law that has helped shape data privacy rules across the globe, writes Hennie Ferreira, CEO of Osidon.
The new law, which I’ll call the AIA, shares that spirit. But legislation alone will not be enough. We need more cooperation and discussion to balance protection and innovation for a technology that will reshape our world in unimaginable ways. Still, the AIA is a welcome and significant step in the right direction.
I am normally a strong advocate of deregulation, preferring freedom and scope for exploration to stoke innovation. But this new generation of AI is unlike anything we’ve seen before. AI has been around for a while: Siri on your phone, Alexa, Google Assistant – these are prominent examples. Yet next-generation AI is something completely different.
It will revolutionise everything: the way we work, the way we educate, and the way the whole world functions. Unfortunately, it can also be very dangerous in the wrong hands. Right now, the risks are online crime and the dubious categorisation or profiling of people.
But eventually, AI could autonomously enforce regulatory compliance, manage vital infrastructure, and even run weapons systems – things that could break our society if mismanaged or underestimated. The potential of this new breed of AI is beyond belief.
Yet I do believe that the vast majority of what comes out of this will be good, enhancing and improving humanity to a level never before possible. History will remember this new generation of AI as one of our most significant breakthroughs. Calls to slow down or halt AI development are not the best reaction. We need policy and regulation to manage AI’s potential for serious risk and danger. Is the AIA enough?
No, not by a long shot. There are several issues. Foremost, it’s very difficult to enforce. For example, the law defines four tiers of AI risk: unacceptable, high, limited and minimal.
Unacceptable AI must not be developed at all, yet plenty of nations, groups and malicious actors could simply ignore such restrictions. Regulating and auditing AI projects will be challenging, especially in the high-risk category. Categorising systems can also be arbitrary, existing systems can be modified easily, and it will not be possible to track every change or modification that shifts a system’s classification.
But that does not make the AIA moot. Quite the opposite: it encourages a serious conversation. We desperately need legislation to regulate AI, and I hope other governments are paying attention. Europe’s legislative authority is a role model, and its views make the rest of the world sit up and take notice. The AIA opens the door to a larger conversation.
If legislation alone isn’t enough and enforcement will be difficult, we need more. The logical response is a multilateral agreement – through the United Nations, a world forum or some other body – not unlike existing international treaties. The AIA sets the tone and the example for that path. I hope government leaders recognise the urgency of a united response, such as a global watchdog with multinational support for investigations and prosecutions.
We cannot allow this new generation of AI to become a free-for-all that could harm humanity. I hope the AIA will shape conversations in forums like the United Nations, creating international consensus and enforceable treaties.
That approach must balance protection and innovation, emphasising the latter, because attempts to slow AI progress will only discourage collaboration. The cornerstone of any agreement or treaty should always be protecting the innovation and progression of these amazing technologies.
Transnational laws are the start. But the potential of new artificial intelligence, and the unpredictability of where it will take us, is beyond the capacity of national or even transnational laws. The EU Artificial Intelligence Act is significant, yet only the first step. Nor can we dismiss the regulation of artificial intelligence as unenforceable. We need to find the balance between those priorities.
Artificial intelligence will make the world a better place. Collaboration and regulation will help ensure this better world belongs to all of us as we embark on this exciting new chapter of humanity.