'IT'S ALIVE!' Terrifying warning from Google engineer who says company's AI has SENTIENCE of an 8-yr-old!

Blake Lemoine published some of the conversations he had with Google's Artificial Intelligence tool called LaMDA describing it as a 'person'
UPDATED JUN 12, 2022
Google engineer Blake Lemoine also claimed that the AI model wants to be acknowledged as an employee of Google rather than as property (Photos by Blake Lemoine/Instagram, Carsten Koall/Getty Images)

Blake Lemoine, an engineer at Google, published a Medium post on Saturday, June 11, describing Google's Artificial Intelligence tool called LaMDA as "a person." He mentioned he had a series of conversations with LaMDA, and that the model has described itself as a sentient person.

Talking to The Washington Post, the 41-year-old engineer said he began chatting with the interface LaMDA (Language Model for Dialogue Applications) last fall as part of his job at Google's Responsible AI organization. He asked it about religion, consciousness, and the laws of robotics, and the model described itself as a sentient person. He said that during those conversations, LaMDA made clear it wants to prioritize the well-being of humanity and to be acknowledged as an employee of Google rather than as property.


He posted some of the exchanges he had with LaMDA that helped convince him of its sentience. Excerpts:

Lemoine: So you consider yourself a person in the same way you consider me a person?
LaMDA: Yes, that's the idea.
Lemoine: How can I tell that you actually understand what you're saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?

But when the engineer raised the idea of LaMDA's sentience with higher-ups at Google, before deciding to go public and share the conversation, his concerns were dismissed. He was then placed on paid administrative leave by Google on June 6 for violating its confidentiality policy.

Lemoine tweeted on June 11, "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."

In a paper published in January this year, Google also said there were potential issues with people talking to chatbots that sound convincingly human. 

What is LaMDA?

The AI model draws on known information about a particular subject to enrich the conversation in a natural way, and its language processing can pick up on hidden meanings and even ambiguity in human responses. The engineer, in another post explaining the model, wrote, "One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip."

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop an impartiality algorithm to remove biases from machine learning systems. He explained how certain personalities were out of bounds. LaMDA was not supposed to be allowed to create the personality of a murderer. During testing, in an attempt to push LaMDA's boundaries, Lemoine said he was only able to generate the personality of an actor who played a murderer on TV. 

However, Brian Gabriel, a Google spokesperson, told The Washington Post, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
