The Engineered Arts Ameca humanoid robot with artificial intelligence, at the Consumer Electronics Show (CES) on January 5, 2022 in Las Vegas.
©PATRICK T. FALLON / AFP
The official reason given is breach of confidentiality. But did Google sideline its engineer because his thesis seems absurd, or because it might be all too true… and frightening?
Thierry Berthier: Presented by Google in 2021, the LaMDA system (Language Model for Dialogue Applications) is a language model capable of generating chatbots (conversational agents) that can then be used in tools like Google Assistant or in automated call centers. LaMDA's conversational skills took Google years to develop. Like many recent language models, including BERT and GPT-3, LaMDA is based on Transformer, an open-source neural network architecture invented by Google Research in 2017. This architecture produces a model that can be trained to read words, whole sentences or paragraphs, by precisely identifying how those words relate to one another and predicting which words will come next.

Unlike most other language models, LaMDA was trained on dialogue, and on the nuances that distinguish open-ended conversation from other forms of language. One of these nuances is sensibleness: LaMDA can assess whether a response makes sense in a given conversational context. LaMDA builds on Google research published in 2020, which showed that Transformer-based language models trained on dialogue can learn to talk about virtually anything. Once trained, LaMDA can be fine-tuned to dramatically improve the sensibleness and specificity of its responses.

The performance of LaMDA and GPT-3 (OpenAI) far exceeds that of traditional conversational chatbots, which struggle with nuance, subtlety and polysemy. Traditional chatbots tend to be highly specialized: they work well as long as users don't stray too far from the chatbot's areas of expertise. To handle a wide variety of conversation topics, research on open-domain (generalist) systems must explore new approaches. That is what LaMDA does. The illusion of conversing with a conscious, sensitive entity can set in very quickly. That is certainly what happened to Blake Lemoine, who took part in the development and evaluation of LaMDA.
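The prediction step described above can be illustrated with a deliberately simplified sketch. This is not LaMDA or any real Transformer: a real model computes its scores with billions of parameters over many attention layers. Here the vocabulary, the candidate words and their scores (logits) are invented for demonstration; only the final step is shown, where raw scores become probabilities and the most likely next word is chosen.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_word(candidates, logits):
    """Return the candidate word with the highest probability, and that probability."""
    probs = softmax(logits)
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs[best]

# Context: "the cat sat on the ..."
# Hypothetical scores a trained model might assign to each candidate.
candidates = ["mat", "moon", "theorem"]
logits = [4.2, 1.1, -0.5]

word, prob = predict_next_word(candidates, logits)
print(word)  # "mat" — the highest-scoring continuation
```

In a real system this choice is repeated word after word, and dialogue fine-tuning of the kind described above shifts the scores so that sensible, specific continuations win out.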
According to the director of the LaMDA program at Google, hundreds of researchers and engineers have conversed with LaMDA, but none of them anthropomorphized it the way Blake Lemoine did.
The Google teams took each of Blake Lemoine's statements into account, analyzed them, and showed that they were unfounded with regard to the system's capacities and performance. For various reasons and personality traits (cognitive biases, an appetite for esotericism, religious influence), Blake Lemoine is the victim of a kind of "optical illusion" that lends the LaMDA system qualities it does not possess. While capable of summarizing articles, answering questions, generating tweets and even writing blog posts, such chatbots are not powerful enough to achieve true intelligence in the sense of human intelligence. Blake Lemoine wrote a final message to his colleagues before leaving Google: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence." He thus compares the LaMDA system to an eight-year-old child endowed with sensitivity and depth of mind. This comparison is a textbook case of anthropomorphizing a sophisticated conversational model.
Are current artificial intelligence tools really capable of producing sophisticated interactions and approaching human consciousness?
Indeed, systems like OpenAI's GPT-3 or LaMDA are capable of producing very sophisticated interactions that give the illusion of approaching human consciousness. It is, however, only an imitation, with its own limits. To date, no computer system is able to simulate or emulate human consciousness, which remains a scientific enigma. It may be that the consciousness that characterizes humans can only emerge from biological, biochemical architectures, or exclusively from computations carried out by biological neural networks. In short, scientific research is very far from understanding what human consciousness is, or how to approach or simulate it in all its complexity. This limitation does not rule out the development of increasingly generalist artificial intelligence systems with "superhuman" capabilities. Google's DeepMind, for example, has just presented Gato, a multitask AI capable of performing more than 600 different tasks. The current trend is toward increasingly generalist AI that will be deployed everywhere in our environments, in particular in robotic systems, drones and humanoid robots.
Are we ready, if necessary, for artificial intelligences of this kind?
As consumers of AI, we will certainly be able to adapt to the systems to come. The essence of biological intelligence lies precisely in this ability to adapt to new contexts. Future smart cities will provide ubiquitous spaces where information, data and computation meet the physical world to optimize our interactions. Eventually, AI will also become ubiquitous and transparent, embedded in every cyber-physical system in the service of decision support and delegated tasks. Robotics embodies the rise of AI through the industrial revolutions affecting us right now. Elon Musk, who has made the development of his humanoid robot Optimus a priority, considers robotics the greatest revolution to come, with almost unlimited market prospects (hundreds of billions of dollars by 2035).
We must therefore prepare for the arrival in our ecosystems of humanoid robots equipped with very powerful, increasingly generalist artificial intelligences, which will greatly amplify the drift toward anthropomorphism at the edge of the uncanny valley. In light of the Blake Lemoine affair, it is easy to imagine what the pairing of a humanoid robot with an imitative AI will do to its owners once our senses can no longer tell human from machine.
What does this crisis tell us about the state of artificial intelligence work at the heart of Silicon Valley and the tech industry? Can we be sure that companies in the new-technologies sector handle these artificial intelligences with care?
First of all, the "Blake Lemoine – Google" episode should be put into perspective: it stems more from a personal drift than from an industrial crisis. Progress in AI research is not linear. It advances in successive leaps, but also runs up against the great (and sometimes underestimated) complexity of certain capabilities. Generally speaking, we should neither underestimate nor overestimate the current capabilities of AI. Moreover, companies that develop AI solutions most often seek to produce AI worthy of justified trust: explainable AI, free of bias. We can count on the major AI players (the GAFAs) and on startups to produce optimized tools that are both efficient and safe.