
AGI is coming fast and needs ‘reasonable limits’: OpenAI co-founder

The era of AGI is approaching and could be just a few years away, according to OpenAI co-founder John Schulman.

Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in “two or three years.”

He added that technology companies must be willing to cooperate to ensure the technology is developed safely.

“Everyone has to agree on reasonable limits to deployment or continued training for it to work. Otherwise, you have a race dynamic where everyone is trying to stay ahead, and that may require compromising on safety.”

Schulman also said there would need to be “some coordination among the larger entities that are doing this kind of training.”

AGI is a somewhat contested term, but it is generally understood to refer to AI systems capable of complex human abilities such as common sense and reasoning.

Experts have long warned that this level of advanced AI poses various existential threats to humanity, including the risk of an AI takeover or the obsolescence of humans in the workforce.

Tech companies are racing to develop this futuristic technology, and OpenAI, where Schulman still works, is among the frontrunners in the push to achieve AGI first.

Schulman said on Patel’s podcast: “If AGI came much sooner than expected, we would certainly want to be careful about it. We might want to slow down training and deployment a bit until we are pretty sure we know we can handle it safely.”

He added that companies should be prepared to “pause either further training or deployment, or avoid certain types of training that we think might be riskier,” and said that “just establishing reasonable rules about what everyone should do” would be enough to get everyone to limit these things somewhat.

Some industry experts called for a similar pause after OpenAI released its GPT-4 model. In March last year, Elon Musk was among several experts who signed a letter expressing concerns about the development of AI. The signatories called for a six-month pause on training AI systems more powerful than GPT-4.

OpenAI did not immediately respond to a request for comment from Business Insider, made outside of normal business hours.

Last week, an OpenAI spokeswoman, Kayla Wood, told the Washington Post that Schulman had taken over leadership of the company’s safety research efforts.

The change came after Jan Leike, who led OpenAI’s Superalignment team, resigned last week and later accused the company of prioritizing “shiny products” over safety.

The team has since been disbanded following the departure of several of its members, including chief scientist Ilya Sutskever. An OpenAI spokesperson told The Information that the remaining staff are now part of its core research team.

Schulman’s comments come amid protests calling for a pause in the training of AI models. Groups such as Pause AI fear that if companies like OpenAI create superintelligent AI models, they could pose existential risks to humanity.

Pause AI protesters staged a demonstration outside OpenAI headquarters last week as it announced its GPT-4o model.

Source: Business Insider
