Google engineer Blake Lemoine thinks the company’s artificial intelligence is coming to life.
Lemoine said he was put on leave on June 6 after suggesting that the AI-powered chatbot had become sentient.
Google calls the Language Model for Dialogue Applications (LaMDA), introduced in 2021, “groundbreaking chat technology.” The system can carry on open-ended conversations that feel natural to people.
Although the tech giant says the system could be used in tools like Search and Google Assistant, research and testing are still ongoing.
Lemoine, who explained that he started talking to LaMDA in the fall as part of his job, said their conversations covered religion, consciousness and robotics. The engineer claimed that the system describes itself as a “sentient person.”
Lemoine said LaMDA wants to “prioritize the well-being of humanity” and “be considered an employee of Google, not property.”
Lemoine, 41, also published some of the conversations. In one of them, Lemoine asks, “So you consider yourself a person in the same way you consider me a person?” and LaMDA replies, “Yes, that’s the idea.”
Asked, “How can I tell that you actually understand what you’re saying?”, the system answers:
“Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?”
“If I didn’t know exactly what it was, I’d think it was a 7- or 8-year-old kid that happens to know physics,” Lemoine told The Washington Post.
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
— Tom Gara (@tomgara) June 11, 2022
Google has denied the claims
But Google Vice President Blaise Aguera y Arcas and Chief Innovation Officer Jen Gennai reviewed the evidence Lemoine presented and rejected his conclusions.
Google spokesman Brian Gabriel said the evidence presented did not support the claims.
The spokesperson said Lemoine had been told there was ample evidence that LaMDA was not sentient.
Stressing that attributing human qualities to artificial intelligence does not make it sentient, Gabriel said:
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”