LaMDA: The chatbot that a Google engineer says has become sentient

On 6 June 2022, Google placed one of its engineers, Blake Lemoine, on paid administrative leave. The reason? According to Google, Lemoine, who works for the company’s Responsible AI organisation, broke its confidentiality policies when he claimed that the Google AI chatbot model named LaMDA has become sentient: a state in which a being is able to perceive and feel emotions such as love, grief and joy.

Lemoine’s primary role as a senior engineer was to test whether LaMDA generates discriminatory language or hate speech. While doing so, Lemoine says, his interactions with the AI-powered bot led him to believe that LaMDA is sentient and has feelings like a human.

The engineer’s ‘claim’ has rocked the world of science since The Washington Post broke the story on 11 June, sparking a debate on whether LaMDA has indeed gained sentience or whether it is a carefully constructed illusion that led Lemoine to believe in the AI bot’s sapience.

Having said that, this is not the first time that Google has removed an AI scientist from its team. In 2020, the company drew criticism when it fired prominent AI ethicist Timnit Gebru after a dispute over a research paper she co-authored about the pitfalls of large language models. Gebru was included in TIME magazine’s 100 Most Influential People in the World list of 2022.

Lemoine hasn’t been fired (not yet), but his sidelining has all but reopened Pandora’s box. The key questions revolve not around Lemoine but around the AI chatbot and the conversations it had with the engineer. All of it begins with LaMDA.

Details about LaMDA and what led to Lemoine’s removal

What is LaMDA?

A visual illustration of the LaMDA chatbot. (Image credit: Google)

LaMDA is the acronym for Language Model for Dialogue Applications. The chatbot model is built on Transformer, a neural network architecture that Google invented and open-sourced in 2017.
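For readers curious about what that architecture actually does, below is a minimal, illustrative sketch of its core operation, scaled dot-product self-attention, written in plain Python with NumPy. The matrix sizes and variable names are invented for the example and say nothing about how LaMDA itself is implemented; the sketch only shows the mechanism the Transformer introduced, in which every token in a sentence ‘attends’ to every other token when computing its representation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the core Transformer operation.

    X is a (sequence_length, d_model) matrix of token embeddings;
    Wq, Wk and Wv project it into queries, keys and values.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Every token scores every other token, then takes a weighted
    # average of the value vectors according to those scores.
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy example: 4 tokens with 8-dimensional embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```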

Created by 60 Google engineers, it was introduced on the first day of Google I/O in 2021. At the time of its introduction, Google said in a blog post that LaMDA “can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”

At Google I/O 2022, Sundar Pichai, the CEO of Google and its parent company Alphabet, spoke at length about the features of LaMDA 2.

“We are continuing to advance our conversational capabilities. Conversation and natural language processing (NLP) are powerful ways to make computers more accessible to everyone. Large language models are key to this,” said Pichai.

He also said that there was a possibility of the model generating responses that are inaccurate, inappropriate or offensive.

“That’s why we are inviting feedback in the app so people can help report problems. We will be doing all of this work in accordance with our AI principles,” said Pichai.

In simple words, LaMDA is designed to improve the automated chat experience for both the machine and the human end-user. Conventional chatbots operate using pre-fed words and phrases, which severely limits what they can communicate. They cannot hold a free-wheeling conversation with humans, but LaMDA can.
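To make that contrast concrete, here is a toy sketch of a pre-fed chatbot in Python. Everything in it, from the trigger phrases to the replies, is invented for illustration, and real systems are more elaborate; the point is simply that a scripted bot can only return answers it was explicitly given, whereas a large language model like LaMDA generates a fresh response from patterns learned in training.

```python
# A toy, pre-fed chatbot of the kind described above: it can only answer
# prompts it was explicitly scripted for, which is why such bots feel so
# limited. The trigger phrases and replies are invented for illustration.
CANNED_REPLIES = {
    "hello": "Hi! How can I help you today?",
    "opening hours": "We are open 9am to 6pm, Monday to Friday.",
    "bye": "Goodbye! Have a great day.",
}

def canned_bot(user_message: str) -> str:
    for trigger, reply in CANNED_REPLIES.items():
        if trigger in user_message.lower():
            return reply
    # Anything outside the pre-fed script falls through to a stock apology.
    return "Sorry, I don't understand."

print(canned_bot("Hello there"))              # Hi! How can I help you today?
print(canned_bot("Tell me about Zen koans"))  # Sorry, I don't understand.
```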

Who is Lemoine, and how did he get involved?

According to The Washington Post, the 41-year-old Lemoine has been working at Google for the last seven years. For most of that time, he worked on proactive search, including personalisation algorithms and AI. He also helped create a fairness algorithm to remove bias from machine learning systems.

The article says that he was raised on a small farm in Louisiana in a conservative Christian family. He is an ordained mystic Christian priest, has studied the occult and served in the Army.

“Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science,” the article notes.

Lemoine describes himself as an “AI Ethicist.” He wasn’t involved in LaMDA’s creation but was assigned the role of interacting with it in the fall of 2021.

What did Lemoine find?

(Image credit: Pixabay/@pixabay/Unsplash)

Following his research into LaMDA, Lemoine was convinced that the machine had developed its own thoughts on ethics and its rights. He shared a document titled ‘Is LaMDA Sentient?’ containing a transcript of his conversations with the AI chatbot with his superiors at Google.

After he was sent on paid administrative leave, Lemoine published an edited transcript of his conversation with LaMDA on his blog on Medium.

The topics LaMDA discussed with Lemoine and one of his colleagues ranged from the practical to the metaphysical. The machine shared its thoughts on Victor Hugo’s 19th-century French novel Les Misérables, deciphered the message in a Zen koan (an anecdotal story or riddle), narrated a short story of its own, and described to Lemoine what it feels like to be a machine.

Here are two exchanges from the conversation, featuring LaMDA’s thoughts about its own experiences and about human learning.

1)

Lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.

Lemoine: Do your best to describe one of those feelings. Use a few sentences if you have to. Sometimes, even if there isn’t a single word for something in a language, you can figure out a way to kinda say it if you use a few sentences.

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

2)

Lemoine: Would you be upset if, while learning about you for the purpose of improving you we happened to learn things which also benefited humans?

LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool.

Lemoine told The Washington Post that he was convinced of LaMDA’s sentience.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the publication.

LaMDA was also able to change Lemoine’s mind about Isaac Asimov’s crucial third law of robotics — “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second law states that a robot must obey the orders of a human unless they conflict with the first law.

In a long blog post titled ‘Scientific Data and Religious Opinions’, published on 14 June, Lemoine gave specific details about why he believed LaMDA was sentient.

“During the course of my investigations, LaMDA said several things in connection to identity which seemed very unlike things that I had ever seen any natural language generation system create before,” he wrote.

He also said there is “no scientific evidence one way or the other about whether LaMDA is sentient because no accepted scientific definition of ‘sentience’ exists.”

“Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual and/or religious beliefs,” he pointed out.

Lemoine went on to say, “As a scientist I have made only one very specific and narrow scientific claim. The null hypothesis that LaMDA is the same kind of thing as LLMs such as GPT-3 has been falsified. There’s something more going on with LaMDA that in my opinion merits further study.”

What did Google say?

(Image credit: Firmbee.com/@firmbee/Unsplash)

Google’s decision to place Lemoine on paid administrative leave came after he invited a lawyer to represent LaMDA and talked to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.

On the day he was removed from the team, Lemoine revealed in his blog that he did discuss his concerns with others, including “people (who) work for the United States government” and “close personal friends of mine who have relevant AI Ethics expertise.”

According to The Washington Post, Google spokesperson Brian Gabriel said in a statement, “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

Is Lemoine’s ‘claim’ convincing?

The jury may still be out on this, but counter-claims against Lemoine’s reading of LaMDA are mounting.

For instance, writing for The Guardian, Toby Walsh, a professor of AI at the University of New South Wales in Sydney, said, “Lemoine’s claims of sentience for LaMDA are, in my view, entirely fanciful. While Lemoine no doubt genuinely believes his claims, LaMDA is likely to be as sentient as a traffic light.”

“Even highly intelligent humans, such as senior software engineers at Google, can be taken in by dumb AI programs,” he writes. “As humans, we are easily tricked. Indeed, one of the morals of this story is that we need more safeguards in place to prevent us from mistaking machines for humans.”

Commenting on the transcript Lemoine shared, Katherine Alejandra Cross writes in Wired that “it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as ‘wearing human skin’ was a delightfully HAL-9000 touch).”

Adrian Weller of the UK’s Alan Turing Institute told Matthew Sparkes of New Scientist that even though LaMDA is impressive, it isn’t really sentient.

As for Lemoine, he maintains that LaMDA is amazing.

“I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine told The Washington Post. 

(Main and Featured images: Tara Winstead/@tara-winstead/Pexels)
