Sam Altman, CEO of OpenAI, the company behind ChatGPT, has signed a statement alongside some of the world’s top AI researchers warning that the risk of extinction from AI should be treated with the same urgency as pandemics and nuclear war.
A group of leading AI researchers, engineers and CEOs has expressed concerns about the existential threat AI poses to humanity. In a concise 22-word statement, they highlight the need to prioritize global efforts to mitigate the risks associated with AI, comparable to tackling other major societal risks like pandemics and nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement, released by the Center for AI Safety, a San Francisco-based nonprofit, has garnered support from influential figures including Demis Hassabis and Sam Altman, the CEOs of Google DeepMind and OpenAI respectively, as well as Geoffrey Hinton and Yoshua Bengio, two of the three 2018 Turing Award recipients.
Notably, the third recipient, Yann LeCun, chief AI scientist at Meta, Facebook’s parent company, has yet to sign the statement.
This statement represents a notable contribution to the ongoing and contentious debate about AI safety. Earlier this year, a group of signatories, including some of those who have now backed the concise warning, signed an open letter advocating a six-month “pause” in AI development.
However, that letter drew criticism from several directions: some experts thought it overstated the risks posed by AI, while others shared the concerns but disagreed with the proposed remedy.
Dan Hendrycks, executive director of the Center for AI Safety, explained that the brevity of the recent statement, which did not offer specific measures to mitigate the AI threat, was intended to avoid such disagreements.
He stressed that the intention was not to present a long list of thirty potential interventions, as this would dilute the message.
Hendrycks called the statement a collective acknowledgment from industry insiders who are concerned about the risks associated with AI. He pointed to the misconception that only a few individuals express concerns about these issues within the AI community, when in fact, many privately share apprehensions.
While the broad outlines of this debate are well known, the details are extensive and tend to revolve around hypothetical scenarios in which AI systems rapidly gain capabilities and can no longer be operated safely.
Proponents of this view often cite the rapid progress seen in large language models as evidence of future intelligence gains. They argue that once AI systems reach a certain level of sophistication, it might become impossible to control their actions.
However, there are those who doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks such as driving a car.
Despite years of dedicated research and substantial investment, fully autonomous vehicles remain far from reality. Skeptics question whether a technology that struggles with this specific challenge can be expected to match, let alone surpass, human achievement across all other domains in the years to come.
Today, both proponents and skeptics of AI risk recognize that AI systems already pose several threats, regardless of any future advances. These include enabling mass surveillance, fueling flawed “predictive policing” algorithms, and facilitating the spread of misinformation and disinformation.