A new company, Safe Superintelligence, has been formed with the aim of safely building superintelligence: a machine more intelligent than humans.

The company was founded by OpenAI co-founder and former chief scientist Ilya Sutskever, together with Daniel Gross and Daniel Levy.

“Superintelligence is within reach,” reads a statement from the company. “Building safe superintelligence (SSI) is the most important technical problem of our time.”

SSI says it has a single goal and a single product.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

“This way, we can scale in peace.”

The company is registered in the US, with offices in Palo Alto and Tel Aviv, and is actively recruiting.

Sutskever was one of the OpenAI board members involved in forcing Sam Altman out of the company in November 2023.