
It’s “crazy not to be a little afraid” of AI


  • OpenAI CEO Sam Altman said AI risks include “misinformation issues or economic shocks.”
  • Altman said he sympathizes with people who are very afraid of advanced AI.
  • OpenAI said it taught GPT-4 to avoid answering questions asking for “illegal advice.”

OpenAI CEO Sam Altman is still sounding the alarm about the potential dangers of advanced artificial intelligence, saying that despite its “tremendous benefits,” he also fears risks on a potentially unprecedented scale.

His company — the creator of popular generative AI tools like ChatGPT and DALL-E — keeps that in mind and works to teach its AI systems not to spread harmful content, Altman said on researcher Lex Fridman’s podcast, in an episode posted Saturday.

“I think it’s weird when people think it’s like a big dunk that I say, I’m a little scared,” Altman told Fridman. “And I think it would be crazy not to be a little scared, and I sympathize with people who are very scared.”

“The current concerns that I have are that there will be misinformation issues or economic shocks or something else at a level far beyond anything we are prepared for,” he added. “And it doesn’t require super intelligence.”

Hypothetically, he raised the possibility that large language models, known as LLMs, could influence the information social media users see and how they interact on their feeds.

“How would we know if on Twitter we had mostly LLMs leading everything that crossed that hive mind?” Altman said.

Twitter CEO Elon Musk did not respond to Insider’s emailed request for comment. OpenAI representatives did not respond to a request for comment beyond Altman’s remarks on the podcast.

OpenAI released its latest model, GPT-4, this month, saying it outperformed previous versions in areas such as standardized tests, including the bar exam for lawyers. The company also said the updated model can understand and comment on images, and can teach users by engaging with them like a tutor.

Companies like Khan Academy, which offers online courses, are already leveraging the technology by using GPT-4 to build AI tools.

But OpenAI has also been upfront about the issues that still need to be addressed with these kinds of large language models. AI models can “amplify biases and perpetuate stereotypes,” according to a document from OpenAI explaining how it has addressed some of the risks of GPT-4.

For this reason, the company tells users not to use its products where the stakes are more serious, such as “high-risk government decision-making (e.g., law enforcement, criminal justice, migration and asylum), or to offer legal or health advice,” according to the document.

In the meantime, the model is also being taught to be more judicious about which questions it answers, according to Altman.

“In the spirit of building in public and bringing society up gradually, we release something, it has flaws, we’ll make better versions,” Altman told Fridman. “But yes, the system is trying to learn questions it shouldn’t be answering.”

For example, an early version of GPT-4 had weaker filters on what it shouldn’t say, according to OpenAI’s document on its approach to AI safety. It was more willing to answer questions about where to buy unlicensed firearms or about self-harm, while the released version declined to answer such questions, according to the OpenAI document.

“I think we as OpenAI are responsible for the tools we put out into the world,” Altman told Fridman.

“There will be huge benefits, but, you know, tools do good and bad,” he added. “And we will minimize the bad and maximize the good.”
