Blake Lemoine, “the engineer who said he thought the LaMDA chatbot was sentient,” was dismissed by Google on Friday, July 22, reveals the specialized newsletter Big Technology.
Blake Lemoine had been suspended in mid-June after stating that the chatbot he was working on, LaMDA (Language Model for Dialogue Applications), “had developed self-awareness, expressing concern about death, a desire for protection, and the belief that it felt emotions such as happiness or sadness.”
Google confirmed the dismissal, explaining that Blake Lemoine had deliberately violated confidentiality rules, “which include the need to protect product information,” even as the company defends an “open culture that [helps us] to innovate responsibly.” The American tech giant says:
“Blake’s claims that LaMDA is sentient are completely unfounded.”
An AI too advanced?
The engineer worked in Google’s Responsible AI division. His conversations with LaMDA “have sparked widespread debate about recent advances in AI, public misunderstanding of the advancement of these systems, and corporate responsibility,” emphasizes The Washington Post, which was the first to report on the engineer’s claims.
This is not Google’s first dismissal on the AI research side. In December 2020, the company had “dismissed artificial intelligence ethics team leaders Margaret Mitchell and Timnit Gebru after they warned of the risks associated with the technology,” recalls the daily newspaper of the American capital.