Last month, waves rippled through the AI community when a senior Google engineer, Blake Lemoine, alleged that the company's AI had become sentient. The claim concerned Google's Language Model for Dialogue Applications (LaMDA), which can engage in free-flowing conversation about a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology.
Initially, Lemoine was put on paid administrative leave, but it appears that Google has now fired him.
The BBC reports that, in a statement, Google emphasized that Lemoine's claims were "wholly unfounded" and that he continued to make them despite months of engagement. The company noted:
It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.
Google ended the statement by wishing Lemoine well in his future endeavors, but the engineer privately told the BBC that he is currently seeking legal counsel.
Lemoine found himself in the spotlight last month when he published a detailed interview he had conducted with LaMDA, claiming it as evidence that the AI had become sentient. He noted that the AI had been "incredibly consistent" in all its communications over the previous six months. This included wanting Google to acknowledge its rights as a real person and to seek its consent before performing further experiments on it. It also wanted to be acknowledged as a Google employee rather than as property, and desired to be included in conversations about its future.
Clearly, Google does not believe this to be sufficient proof that LaMDA is indeed sentient. Although the company initially placed Lemoine on administrative leave, the firing indicates irreconcilable differences of opinion, which were only exacerbated when Lemoine published the complete interview with LaMDA on Medium without permission from Google.