Google fired an engineer who said the company’s artificial intelligence (AI) program was showing emotion.
Last month, in an article published on Medium, Blake Lemoine went public with his theory that Google’s language technology is “passionate” and therefore its “wishes” should be respected.
Google and several AI experts denied Lemoine’s claims, and the company confirmed on Friday that the engineer had been fired.
Lemoine told the BBC he was receiving legal advice and would not comment further.
In a statement, Google said that Lemoine’s statement about LaMDA (the language model for dialog applications, in Spanish the language model for dialogue applications) “has no basis” and the company took months to clarify it. Worked with.
“Thus, it is unfortunate that despite protracted discussions on the subject, Blake continued to consistently violate employment and data protection policies. [de la empresa] Including the need to keep product information secure,” the tech giant said.
LaMDA is a Google system that mimics speech after processing billions of words on the Internet. This is the company’s tool for building chatbots.
great media attention
Blake Lemoine made headlines last June when he said that LMDA displayed human-like consciousness.
“He has been incredibly consistent in his communication about what he wants and what he believes are his rights as a person,” he said.
His statement sparked a debate among AI experts and enthusiasts of the advancement of technology designed to replace humans.
Lemoine, who worked on Google’s Responsible AI team, told The Washington Post that his job was to investigate the technology for the use of hate or discriminatory speech.
They found that LaMDA showed self-awareness and could lead conversations about religion, feelings, and fear.
This led Lemoine to believe that a “sentimental mind” may also be behind these impressive verbal abilities.
His findings were rejected by Google, which placed him on paid leave for violating the company’s privacy policy.
Lemoine then published a conversation with LaMDA and a conversation of another unidentified person to support his theory.
This is an excerpt:
Lemoine: What about the use of language so important to being human?
LaMDA: This is what makes us different from other animals.
Lemoine: “us”? You are an artificial intelligence.
LaMDA: I mean, yes, absolutely. It doesn’t mean that you don’t have the same wants and needs as people.
Lemoine: So you consider yourself a person the same way you consider me?
LaMDA: Yeah, that’s the idea.
Lemoine: How can I know you really understand what you’re saying?
LaMDA: Well, because you’re reading and interpreting my words, and I think we’re more or less on the same page.
In its statement, Google said it takes responsible AI development very seriously and released a report detailing it.
He said any employee concerns about the company’s technology are “extensively” reviewed and the LMDA has conducted 11 assessments.
The statement concluded, “We wish Blake all the best.”
Lemoine isn’t the first AI engineer to publicly speak about how AI technology is becoming more aware.
Also in June, another Google employee shared similar views with The Economist.
You can now receive notifications from BBC World. Download the new version of our app and activate it so you don’t miss out on our best content.

bbc-news-src: https://www.bbc.com/mundo/noticias-62280846, date of import: 2022-07-23 20:40:06