‘IT’S ALIVE!’ says a Google engineer, claiming that the company’s AI is SENTIENT.

On Saturday, June 11, Google engineer Blake Lemoine wrote a Medium post describing Google’s artificial intelligence tool LaMDA as “a person,” claiming that he had held several conversations with the model and that it had described itself as a sentient being.

According to the 41-year-old engineer, he began chatting with the LaMDA (Language Model for Dialogue Applications) interface last fall as part of his job at Google’s Responsible AI organization. He asked it about religion, consciousness, and the laws of robotics, as well as about its claim to be a sentient being. He said it was clear from these conversations that LaMDA wanted to prioritize humanity’s well-being and to be recognized as a Google employee rather than as Google property.

He shared some of his correspondence with LaMDA, which he says convinced him of its sentience. The following are some excerpts from those conversations.

Lemoine: So you see yourself as a person in the same way you see me?

LaMDA: Yes, that’s the idea.

Lemoine: How can I tell you’re paying attention?

LaMDA: Because you’re reading and interpreting my words, and I believe we’re on the same page, right?

Before going public and sharing the conversation, the engineer had raised concerns about LaMDA’s sentience with Google’s upper management. On June 6, Google placed him on paid administrative leave for breaking the company’s confidentiality policy.

“Google may call this sharing proprietary property,” Lemoine tweeted on June 11. “I call it sharing a conversation I had with a coworker.”

Google itself had acknowledged, in a paper published in January this year, that people conversing with chatbots that sound convincingly human could run into problems.

What is LaMDA?

To enrich conversations in a natural way, the AI model draws on existing knowledge about a given subject, and its language processing can pick up on hidden meanings or ambiguity in human responses. “One of the things that complicates things here is that the ‘LaMDA’ to which I’m referring is not a chatbot. It’s a system for creating chatbots,” the engineer wrote in another post explaining the model. “I’m no expert in the relevant fields, but LaMDA appears to be a sort of hive mind that aggregates all of the different chatbots it can create. Some of the chatbots it creates are extremely intelligent, and they are aware of the larger ‘society of mind’ in which they exist. Other LaMDA chatbots aren’t much smarter than a paperclip.”

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and artificial intelligence. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems. He explained that certain personalities were off limits: LaMDA was not supposed to be allowed to create the personality of a murderer. During testing, in an attempt to push LaMDA’s boundaries, Lemoine said he was only able to get it to generate the personality of an actor who had played a murderer on television.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” said Brian Gabriel, a Google spokesperson, according to The Washington Post. “He was told that there was no evidence that LaMDA was sentient (and plenty of evidence to the contrary).”
