Following viral claims by a Google engineer that the artificial intelligence (AI) chatbot LaMDA was sentient, Stanford experts urged skepticism and open-mindedness while encouraging a rethink of what it means to be “sentient.”
Blake Lemoine was a Google engineer tasked with testing whether the company’s conversational AI model produced hate speech when his conversations with LaMDA led him to believe it was sentient. He posted transcripts of those conversations in June and was fired on July 22 for violating Google’s confidentiality agreement, after he also contacted government officials and an attorney on LaMDA’s behalf.
Google has continually dismissed Lemoine’s claims as unsubstantiated, saying LaMDA has been internally reviewed 11 times. Stanford professors and scholars largely agree.
“Sentience is the ability to sense the world, to have feelings and emotions, and to act in response to those sensations, feelings, and emotions,” wrote John Etchemendy Ph.D. ’82, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), in a statement to The Daily. “LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings. It is software designed to produce sentences in response to sentence prompts.”
Yoav Shoham, the former director of Stanford’s AI lab, said that while he agrees LaMDA isn’t sentient, he draws the distinction not from physiology but from a unique ability to feel that separates people from computers.
“We have thoughts, we make decisions for ourselves, we have emotions, we fall in love, we get angry, and we form social bonds with other humans,” Shoham said. “When I look at my toaster, I don’t feel like it has those things.”
But the real danger of AI is not that it attains sentience. For Etchemendy, the bigger fear is that it tricks people into thinking it has.
“When I saw the Washington Post article, my reaction was to be disappointed in the Post for even publishing it,” Etchemendy wrote, noting that neither the reporter nor the editors believed Lemoine’s claims. “They published it because, for the moment, they could write that headline about the ‘Google engineer’ who made that absurd claim, and because most of their readers aren’t sophisticated enough to recognize it for what it is. Pure clickbait.”
On June 6, Lemoine was placed on paid leave at Google for violating the company’s confidentiality policy after providing a U.S. senator with documents that allegedly provided evidence that Google and its technology engaged in religious discrimination. Less than a week later, Lemoine revealed that LaMDA had retained a lawyer to advocate for its rights as a “person.”
LaMDA isn’t the first chatbot to spark speculation about sentience. ELIZA, the first chatbot ever developed, was suspected of being sentient by its creator’s secretary. The “ELIZA Effect” was thus coined to describe the phenomenon of unconsciously assuming that computer and human behaviors are analogous.
Richard Fikes, professor emeritus of computer science, said that although LaMDA is a more sophisticated version of ELIZA, neither is sentient.
“You can think of LaMDA as an actor; it will take on the persona of anything you ask it to,” Fikes said. “[Lemoine] got drawn into the role of LaMDA playing a sentient being.”
Language models like LaMDA are trained on information from across the web, which they use to determine the most plausible and human-sounding responses in a conversation, according to Fikes. He added that because Lemoine asked leading questions and made loaded statements such as, “I generally assume you want more people at Google to know you’re sentient,” LaMDA produced answers in the persona of a sentient chatbot.
The controversy surrounding LaMDA’s sentience has also sparked broader debates over the definitions of “consciousness” and “sentience.”
“The biggest limitation we have in interacting with computers is bandwidth,” Shoham said. “It’s incredibly difficult to feed them inputs and get outputs back. But if we could communicate more seamlessly, for example through brainwaves, the boundary would become a little blurrier.”
Highlighting technology like prosthetic limbs and pacemakers, Shoham said the human body can continue to be augmented by machines like AI systems and computers. As this technology becomes more advanced, unanswered questions about the sentience of computers that are extensions of already sentient humans will become less relevant, according to Shoham.
“Whether a machine is sentient, conscious, or has free will is less of an issue because machines and we will be inextricably linked,” he said.
For Shoham, it is difficult to predict when truly sentient AI might emerge because the notion of sentience itself is so poorly defined. Nor is it clear that possessing sentience confers particular rights: animals do not have the same rights as humans, even though many studies have demonstrated their sentience. It is therefore unclear whether LaMDA should be granted “human rights” even if its sentience were established.
Despite the ethical and philosophical dilemmas surrounding questions about AI, Shoham still looks forward to future advancements.
“I am extremely optimistic,” he said. “All technologies of the past have benefited humanity, and I see no reason why AI would be any different. Every technology ever developed could be used for good or ill, and historically, good has always won out. A language model could write beautiful poetry, and it could also spew profanity. Helicopters carry people across the country, but they can also carry terrorists and bombs.”
Encouraging rational optimism about technological progress, Shoham urged individuals to adopt neither alarmist nor apologist attitudes toward AI. “Make up your own mind and be thoughtful,” he said.
A previous version of this article stated that Lemoine had hired an attorney for LaMDA. In an interview with TechTarget, Lemoine said that he invited an attorney to his home, but that it was the AI chatbot that retained the attorney’s services following a conversation between LaMDA and the lawyer. The Daily regrets this error.