OpenAI board members defend Sam Altman in op-ed

This has been the week of dueling opinion pieces from current and former OpenAI board members.

Current OpenAI board members Bret Taylor and Larry Summers issued a response to AI safety concerns on Thursday, stating that “the board is taking proportionate steps to ensure safety and security.”

The response comes days after The Economist published an opinion piece by former OpenAI board members Helen Toner and Tasha McCauley that criticized CEO Sam Altman and OpenAI’s safety practices and called for AI regulation. The article’s title declared that “AI companies should not govern themselves.”

Taylor and Summers, the two current members, objected to the former directors’ claims in a response also published in The Economist. They defended Altman and outlined OpenAI’s stance on safety, including the company’s formation of a new safety and security committee and a set of voluntary commitments OpenAI made to the White House to strengthen safety and security.

The two men said they had previously chaired a special committee within the newly created board and commissioned an external review by the law firm WilmerHale of the events leading to Altman’s ouster. The process involved reviewing 30,000 documents, as well as dozens of interviews with OpenAI’s former board, executives and other relevant witnesses, they added.

Taylor and Summers reiterated that WilmerHale concluded that Altman’s ouster “did not arise from concerns about product safety or security” or “pace of development.”

They also took issue with the editorial’s characterization that Altman had created “a toxic culture of lying” and engaged in psychologically abusive behavior. The two current board members said that over the past six months they had found Altman “very open on all relevant issues and always collegial with his leadership team.”

In an interview published the same day as her op-ed, Toner explained why the former board decided to remove Altman, saying he had lied to them “repeatedly” and withheld information. She also said that the former OpenAI board found out about ChatGPT’s release on Twitter.

“While it may be hard to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about the usefulness of its models in conversational contexts,” Taylor and Summers wrote in response. “It was built on top of GPT-3.5, an existing AI model that had already been available for over eight months at the time.”

Neither Toner nor OpenAI responded to requests for comment before publication.

OpenAI supports “effective regulation of artificial general intelligence,” and Altman, who was reinstated just days after his ouster, has “implored” lawmakers to regulate AI, the two current board members added.

Altman has been advocating for some form of AI regulation since 2015, most recently saying he favored the formation of an international regulatory agency, though he also said he was “very nervous about the idea of excessive regulation.”

At the World Government Summit in February, Altman suggested a “regulatory sandbox” in which people could experiment with the technology and write regulations based on what “went really wrong” and what went “really right.”

OpenAI has seen several high-profile departures in recent weeks, including machine learning researcher Jan Leike, chief scientist Ilya Sutskever and policy researcher Gretchen Krueger. Both Leike and Krueger expressed safety concerns after their departures.

Leike and Sutskever, who is also a co-founder of OpenAI, were co-leads of the company’s superalignment team. The team was tasked with studying the long-term risks of AI, including the possibility of it going “rogue.”

OpenAI disbanded the superalignment team shortly before announcing the formation of the new safety and security committee.

Source: Business Insider