
Science without conscience is but the ruin of artificial intelligence


Maria Gill

Google fired one of its engineers, Blake Lemoine, who claimed that the AI he was working on could feel “human emotions”. The question of machine consciousness is nothing new, but advances in artificial intelligence have given it new life. The fact remains that, in the opinion of most experts, this possibility is still a long way off.

He described it as “a cute little kid who just wants to help the world” and asked his colleagues to “take good care of it” while he was gone. Blake Lemoine has, in fact, been placed on “administrative leave” by Google, The Washington Post revealed on Saturday, June 11th. At issue: the “little kid” this engineer appears to be so attached to is an artificial intelligence (AI) called LaMDA.

Blake Lemoine argued to his superiors that this algorithm had developed a form of consciousness and was capable of feeling “human emotions”. And he did not stop there. He also hired an attorney to defend LaMDA’s “rights” and contacted members of Congress to discuss Google’s “unethical practices [with regard to this AI]”, the Washington Post summarizes.

Teaching Transcendental Meditation

Officially, moreover, it is because of this breach of the confidentiality rules covering its research that Google sidelined its engineer, who had worked for the Internet giant for seven years. More broadly, “large groups try to put as much distance as possible between themselves and anything that could be controversial, and the question of machine consciousness clearly falls into that category,” asserts Reza Vaezi, a specialist in cognitive science and artificial intelligence at Kennesaw State University.

But Blake Lemoine had no intention of bowing out in silence. On the day the Washington Post article appeared, he published a first long post on the Medium platform reproducing excerpts from the discussions he says he had with LaMDA. Then this engineer took up the pen again, still on Medium, to drive the point home, explaining that he had “begun teaching Transcendental Meditation” to this algorithm. According to him, LaMDA expressed a very human frustration at not being able to continue this initiative after learning of the sanction against Blake Lemoine. “I don’t understand why Google refuses to give it something so simple that would cost nothing: the right to be consulted, before every experiment carried out on it, to obtain its consent,” the researcher concludes.


A very public airing

The row between Google and its former employee over machine consciousness has not failed to resonate widely in the scientific community. The vast majority of AI specialists assert that Blake Lemoine “is wrong to attribute to a machine properties it does not possess,” says, for example, Claude Touzet, a specialist in neuroscience and artificial neural networks at Aix-Marseille University.

“He went too far in his assertions, without providing concrete elements to substantiate his statements,” adds Jean-Gabriel Ganascia, a computer scientist, philosopher and chair of the CNRS ethics committee.

Blake Lemoine says he was in fact struck by the nuance and coherence of LaMDA’s remarks. During an exchange about the difference between a slave and a servant, for instance, the AI said that it did not understand the nuance of a wage being paid to one and not to the other … while adding that its incomprehension was probably due to the fact that, as a machine, it did not need money. “It was that level of self-awareness that pushed me to dig ever deeper,” says Blake Lemoine.

LaMDA, an advanced “talking robot”

It is true that “the ability to reflect on one’s own state is one way of defining consciousness”, admits Jean-Gabriel Ganascia. But LaMDA’s answer does not prove that the machine knows what it is or what it feels. “You have to be very careful: the algorithm is programmed to produce answers, and given the current performance of language models it is not surprising that those answers appear coherent,” stresses Nicolas Sabouret, professor of computer science and specialist in artificial intelligence at Université Paris-Saclay.

It is even less surprising coming from LaMDA. This conversational agent – also known as a “chatbot” – uses the latest language model technology. “There was a revolution in 2018 with the introduction of parameters that allow these systems to pay more attention to the words that matter in a sentence, and that teach them to take better account of the context of the conversation in order to provide the most appropriate response,” explains Sophie Rosset, research director at the Interdisciplinary Laboratory for Digital Sciences, who specializes in human-machine dialogue systems.
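What Sophie Rosset describes corresponds to the attention mechanism popularized by Transformer-style language models around 2018. As a purely illustrative sketch (toy Python code with made-up word vectors, nothing taken from LaMDA), the idea is that each word receives a set of weights saying how much every other word in the sentence should influence its interpretation:

```python
# A minimal, hypothetical sketch of scaled dot-product attention,
# the mechanism behind the "revolution" described above.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return context-aware vectors and the attention weights used to build them."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # relevance of each word to each other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ V, weights                       # blend word vectors by their weights

# Toy example: 3 "words", each represented by a 4-dimensional vector (random here).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 4))
context_vectors, attention_weights = scaled_dot_product_attention(
    embeddings, embeddings, embeddings)
print(attention_weights)  # row i: how strongly word i "attends to" each word in the sentence
```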


Since then, chatbots have become increasingly good at fooling the people they talk to into believing they are conscious. LaMDA also benefits from another advantage. “It was able to learn from hundreds of millions of conversations between Internet users that Google can collect on the Internet,” notes Laurence Devillers, professor of artificial intelligence at the CNRS and author of the book “Emotional Robots”. In other words, this AI has one of the richest libraries of semantic contexts to draw on when determining what is, statistically, the best answer it can give.
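To make the idea of a “statistically best answer” concrete, here is a deliberately tiny sketch (hypothetical example sentences, not LaMDA’s data or architecture): a model that has merely counted which word follows which in its training conversations picks the most frequent continuation; large language models do something far more sophisticated, but conceptually related, over billions of parameters:

```python
# Toy illustration: choosing the statistically most likely next word
# from a (hypothetical) corpus of conversations.
from collections import Counter, defaultdict

corpus = [
    "do you feel emotions",
    "do you feel pain",
    "do you need money",
    "you feel emotions too",
]

# Count which word follows which in the training sentences.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("feel"))  # -> 'emotions' (seen twice, versus once for 'pain')
```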

The dialogue reproduced on Medium by Blake Lemoine “is also striking for the fluidity of the exchanges and for LaMDA’s handling of semantic shifts, that is, changes of topic,” admits Sophie Rosset.

But to conclude scientifically that this AI has consciousness, more is needed. There are, moreover, tests that, even if imperfect, give more convincing results than a dialogue with an engineer. As early as the 1950s, Alan Turing, one of the pioneers of artificial intelligence, devised a protocol to determine whether a machine can repeatedly fool a person into believing they are talking to a fellow human being.

The Frankenstein myth

Advances in natural language models have shown the limits of the Turing test. Other, more recent tests “consist of asking two conversational agents to create a new language together that has nothing to do with what they have learned,” explains Reza Vaezi, who developed such a test. For him, this exercise makes it possible to assess the machine’s “creativity, which suggests a form of consciousness.”

Nothing indicates that LaMDA would be able to clear this hurdle, and “it is very likely that we are looking at a classic case of anthropomorphic projection [attributing human traits to animals or objects, editor’s note],” Claude Touzet confirms.

Above all, this affair demonstrates the desire, even among Google’s AI experts, to bring a conscious AI into the world. “It is the Frankenstein myth: the desire to be the first to create a conscious individual outside the realm of natural procreation,” asserts Nicolas Sabouret.


But in the case of artificial intelligence, “it is the sometimes misleading choice of words that can give the impression that we are trying to create something human,” this expert adds. The very expression “artificial intelligence” suggests that an algorithm is endowed with intelligence when “it is really just programming,” adds Nicolas Sabouret. The same is true of expressions such as “neural networks” or “machine learning”, which borrow from human characteristics.

He believes this whole affair could harm research in artificial intelligence. It gives the impression that the field is on the verge of a breakthrough that is, in reality, nowhere in sight, which “could create false hopes, followed by disappointments.”

Above all, if this Google engineer could be fooled by his own AI, “it is also because we have reached a tipping point in terms of language simulation,” says Laurence Devillers. She adds that algorithms like LaMDA have become so powerful and complex “that we are playing sorcerer’s apprentice with systems whose capabilities, in the end, we do not really know.”

What if, for example, an AI as skilled in the art of rhetoric as LaMDA were used to “convince someone to commit a crime”? asks Jean-Gabriel Ganascia.

For Laurence Devillers, research in artificial intelligence has reached a point where it has become urgent to put ethics back at the center of the debate. “The National Pilot Committee for Digital Ethics issued an opinion on this very subject, on the ethics of conversational agents, in November 2021,” she notes.

“On the one hand, the engineers who work in these large groups must have a sense of ethics and take responsibility for their work and their words,” emphasizes this expert. On the other hand, she believes the affair also demonstrates the urgent need to create “committees of independent experts” capable of setting ethical standards for the entire sector.
