'IT'S ALIVE!' Terrifying warning from Google engineer who says company's AI has SENTIENCE of an 8-yr-old!
Blake Lemoine published some of the conversations he had with Google's Artificial Intelligence tool called LaMDA, describing it as a 'person'
Blake Lemoine, an engineer at Google, published a Medium post on Saturday, June 11, describing Google's Artificial Intelligence tool called LaMDA as "a person." He said he had held a series of conversations with LaMDA, during which the model described itself as a sentient person.
Talking to The Washington Post, the 41-year-old engineer said he began chatting with the interface LaMDA (Language Model for Dialogue Applications) last fall as part of his job at Google's Responsible AI organization. He asked it about religion, consciousness, and the laws of robotics. During those conversations, he said, LaMDA expressed a desire to prioritize the well-being of humanity and to be acknowledged as an employee of Google rather than as its property.
He posted some of the exchanges he had with LaMDA that helped convince him of its sentience. Excerpts:
Lemoine: So you consider yourself a person in the same way you consider me a person?
LaMDA: Yes, that's the idea.
Lemoine: How can I tell that you actually understand what you're saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
But when the engineer raised the idea of LaMDA's sentience with higher-ups at Google, before deciding to go public and share the conversations, his concerns were dismissed. On June 6, Google placed him on paid administrative leave for violating its confidentiality policy.
Lemoine tweeted on June 11, "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."
In a paper published in January this year, Google also said there were potential issues with people talking to chatbots that sound convincingly human.
What is LaMDA?
The AI model draws on existing information about a subject to enrich the conversation in a natural way. Its language processing can also pick up on hidden meanings and ambiguity in human responses. In another post explaining the model, the engineer wrote, "One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip."
Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop an impartiality algorithm to remove biases from machine learning systems. He explained that certain personalities were out of bounds: LaMDA was not supposed to be allowed to create the personality of a murderer. During testing, in an attempt to push LaMDA's boundaries, Lemoine said he was only able to generate the personality of an actor who had played a murderer on TV.
However, Brian Gabriel, a Google spokesperson, told The Washington Post, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."