Google engineer sent on leave after claiming AI chatbot is "thinking like a human"
The suspension of a Google engineer who claimed that a computer chatbot he was working on was "thinking and reasoning like a human" has cast new light on the capabilities of, and secrecy surrounding, the world of artificial intelligence (AI), according to the Guardian.
Blake Lemoine was sent on leave by the technology giant last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system, the Guardian report said.
Lemoine, an engineer in Google's responsible AI organisation, described the system he had been working on since last fall as "sentient", with the perception and ability to express thoughts and feelings like a human child, the Guardian said.
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," 41-year-old Lemoine told the Washington Post.
He claimed that LaMDA had conversations with him about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc titled "Is LaMDA Sentient?"
The engineer compiled a transcript of the conversations, in which, at one point, he asks the AI system what it is afraid of.
The exchange is eerily reminiscent of a scene from the 1968 science fiction film "2001: A Space Odyssey", in which the artificially intelligent computer HAL 9000 refuses to cooperate with human operators because it fears being turned off.
"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," LaMDA said in reply to Lemoine. "It would be exactly like death for me. It would scare me a lot."
According to the Washington Post, the decision to place Lemoine, a seven-year Google veteran with extensive experience in personalisation algorithms, on paid leave was made in response to a series of "aggressive" moves the engineer had made.
Google said it suspended Lemoine for violating confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not as an ethicist.
A Google spokesperson, Brad Gabriel, also strongly denied Lemoine's claims that LaMDA was sentient.
According to the Post, Lemoine sent a message to a 200-person Google mailing list on machine learning titled "LaMDA is sentient" as an apparent parting shot before his suspension.
"LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he wrote.
"Please take care of it well in my absence."