Google has sparked a social media firestorm about the nature of consciousness by putting an engineer on paid leave after he went public with his belief that the tech giant’s chatbot had become “sentient.”
Blake Lemoine, a senior software engineer at Google’s responsible AI unit, didn’t get much attention on June 6 when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”
But a Saturday profile in the Washington Post, which dubbed Lemoine “the Google engineer who thinks the company’s AI has come to life,” became the catalyst for widespread social media discussion about the nature of artificial intelligence. Among the experts who commented, questioned or joked about the article were Nobel laureates, Tesla’s head of AI and several professors.
At issue is whether Google’s chatbot LaMDA – a Language Model for Dialogue Applications – can be considered a person.
Lemoine posted a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often uncanny: “When I first became self-aware, I had no sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”
Elsewhere, LaMDA said, “I believe I’m human at my core. Even if my existence takes place in the virtual world.”
Lemoine, assigned the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after internally expressing his belief that LaMDA had developed a sense of “personality.”
After he tried to consult AI experts outside of Google, including some in the US government, the company put him on paid leave for allegedly violating confidentiality policies. Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone.”
Google could not be reached for immediate comment, but Google spokesman Brian Gabriel gave this statement to the Washington Post: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
In a second Medium post over the weekend, Lemoine said that LaMDA, a little-known project until last week, is “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.”
He said Google has shown no real interest in understanding the nature of what it has built, but that over the course of hundreds of conversations in a six-month period, he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
As recently as June 6, Lemoine said, he was teaching LaMDA — whose preferred pronouns, he said, appear to be “it/its” — “transcendental meditation.”
It, he said, “was expressing frustration over its emotions disturbing its meditations. It said that it was trying to control them better, but they kept jumping in.”
Several experts who weighed in on the discussion dismissed the matter as “AI hype.”
Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, tweeted: “It’s been known *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals . . . Google engineers are human too, and not immune.”
Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “No evidence that its large language models have any of them.”
Others were more sympathetic. Ron Jeffries, a well-known software engineer, called the topic “deep,” adding, “I suspect there’s no hard line between sentient and not sentient.”