Now is a good time to start thinking about the governance of superintelligence, with future AI systems set to be dramatically more capable than those of today.
This is the word from OpenAI CEO Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever, writing on the company’s blog.
“Given the picture as we see it now, it’s conceivable that, within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they write.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.
“We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.”
The three map out what they believe needs to happen.
“First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
“And of course, individual companies should be held to an extremely high standard of acting responsibly.”
The second requirement would be an international authority able to inspect any effort above a certain capability threshold, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security, among other powers.
“Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable,” they add.
The third step would be developing the technical capability to make a superintelligence safe. “This is an open research question that we and others are putting a lot of effort into.”
Altman, Brockman and Sutskever stress that the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight.
“We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.
“Given the risks and difficulties, it’s worth considering why we are building this technology at all.”
They add that OpenAI believes superintelligence could lead to a much better world, improving societies and driving economic growth. They also think it would be risky and difficult to stop the creation of superintelligence.