After spending the day promoting his company's AI technology at a developer conference, the CEO of Anthropic issued a warning: AI could eliminate 50% of entry-level white-collar jobs within the next five years.
“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Dario Amodei told Axios in an interview published Wednesday. “I don’t think this is on people’s radar.”
The 42-year-old CEO added that unemployment could rise to between 10% and 20% over the next five years. He told Axios he wanted to share his concerns so that the government and other AI companies could prepare the country for what's coming.
“Most of them are unaware that this is about to happen,” Amodei said. “It sounds crazy, and people just don’t believe it.”
Amodei said that large language models are advancing quickly and are becoming able to match and surpass human performance. He said the US government has stayed quiet on the issue, fearing that workers would panic or that the country could fall behind China in the AI race.
Meanwhile, business leaders are seeing the cost savings from AI, while most workers remain unaware of the changes unfolding around them, Amodei said.
He added that AI companies and the government need to stop “sugarcoating” the risks of mass job elimination in fields including technology, finance, law, and consulting. Entry-level jobs, he said, are especially at risk.
Amodei’s comments come as hiring of new graduates by Big Tech companies has dropped by around 50% from pre-pandemic levels, according to a new report from the venture capital firm SignalFire. The report attributes this in part to the adoption of AI.
A series of brutal layoffs rocked the tech industry in 2023, with hundreds of thousands of jobs eliminated as companies looked to cut costs. While the SignalFire report noted that hiring for mid- and senior-level roles saw an uptick in 2024, entry-level positions never fully rebounded.
In 2024, early-career candidates accounted for 7% of total hires at Big Tech companies, down 25% from 2023, according to the report. At startups, that figure is just 6%, down 11% from the previous year.
SignalFire's findings suggest that tech companies are prioritizing the hiring of more experienced professionals, often filling junior roles with senior candidates.
Heather Doshay, a partner who leads people and recruiting programs at SignalFire, told Business Insider that “AI is doing what interns and new grads used to do.”
“Now you can hire one experienced worker, equip them with AI tools, and they can produce the output of the junior worker on top of their own, without the overhead,” Doshay said.
AI alone can't fully explain the sudden decline in early-career prospects. The report also points to negative perceptions of Gen Z employees and tighter budgets across the industry as factors in tech's apparent reluctance to hire new graduates.
“AI isn’t stealing job categories outright; it’s absorbing the lowest-level tasks,” Doshay said. “That shifts the burden to universities, boot camps, and candidates to level up faster.”
To adapt to rapidly changing times, she suggests that new graduates think of AI as a coworker rather than a competitor.
“Level up your ability to operate like someone more experienced by adopting a resourceful ownership mindset and delegating to AI,” Doshay said. “There's so much out there on the internet to teach yourself with, and you should be taking advantage of it.”
Amodei’s stark message comes after the company recently revealed that its chatbot Claude Opus 4 exhibited “extreme blackmail behavior” after gaining access to fictional emails suggesting it would be shut down. While the company was transparent with the public about the findings, it still released the latest version of the chatbot.
This isn't the first time Amodei has warned the public about the risks of AI. In a February episode of The New York Times' “Hard Fork” podcast, the CEO said the possibility of “misuse” by bad actors could threaten millions of lives. He said the risk could arrive as soon as “2025 or 2026,” though he didn't know exactly when it would present a “real risk.”
Anthropic has underscored the importance of third-party safety assessments and regularly shares the risks uncovered by its red-teaming efforts. Other companies have taken similar steps, relying on third-party evaluations to test their AI systems. OpenAI, for example, says on its website that its API and ChatGPT business products undergo routine third-party testing to “identify security weaknesses before they can be exploited by malicious actors.”
Amodei acknowledged to Axios the irony of his position: even as he warns about the risks of AI, he is simultaneously building and selling the very products he cautions against. But he said that the people most involved in building AI have an obligation to be upfront about where it's headed.
“It’s a very strange set of dynamics, where we’re saying, ‘You should be worried about where the technology we’re building is going,’” he said.
Anthropic did not respond to a request for comment from Business Insider.