Google fires engineer who claims its AI is sentient

SAN FRANCISCO — Google fired one of its engineers, Blake Lemoine, on Friday, more than a month after he raised ethical concerns about how the company was testing an artificial intelligence chatbot that he claims has achieved consciousness.

A Google spokesperson, Chris Pappas, said Mr. Lemoine, a senior software engineer in its Responsible AI organization, "has chosen to persistently violate clear employment and data security policies that include the need to protect product information."

The company, which denies that its chatbot language model is sentient, had placed Mr. Lemoine on paid leave in June. Mr. Lemoine confirmed his dismissal in a text message on Friday and said he was meeting with lawyers to consider his options. The firing was first reported by the Big Technology newsletter.

Mr. Lemoine caused a stir last month when he told the Washington Post that he believed Google's Language Model for Dialogue Applications, or LaMDA, was sentient — sparking fears that AI was edging toward dystopian science fiction and a heated debate about whether a computer program can really have a soul. His suspension also fueled conspiracy theories about whether it was part of a cover-up by Google.

For Google, it was another in a series of public controversies over the ethics and role of its AI, an area of technology the company has staked its future on.

Mr. Lemoine's assertion that LaMDA is sentient has been criticized by the company and many other AI experts, who have said that these types of chatbots — software that simulates a text-based conversation with another human, often used for customer service — are not advanced enough to be conscious.

“If an employee shares concerns about our work, as Blake did, we investigate them thoroughly,” Pappas said. “We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked to clarify this with him for many months.”

In addition to raising his concerns with the media, Mr. Lemoine said in June that he had given documents to a U.S. senator, whom he did not identify, saying they provided evidence that Google and its technology had engaged in religious discrimination.
