This summer, a former Google employee sparked controversy by claiming that an artificial intelligence was conscious. As these tools keep improving, the question arises: is there more to them than their ability to speak the way humans do?
Artificial intelligence (AI) is constantly evolving. It is present in many fields: in medicine, where it can aid diagnosis in medical imaging; in art, where it can create images from text descriptions; and in the chatbots and voice assistants that answer our everyday questions.
Despite these capabilities, some of the abilities attributed to AI remain up for debate. Such is the case for sentience. Blake Lemoine, a Google engineer, was fired this summer after claiming that an AI tool called LaMDA was a person. Beyond the company itself, which said the opposite, many experts have weighed in on the question. So, is sentient AI real, or still science fiction?
An imitation of humans
For the moment, many experts side with Google in saying that AI is not conscious. In their view, it merely imitates human behavior thanks to the data used to train it. “You can train it on large amounts of written text, including stories with emotion and pain, and then it can finish such a story in a way that feels original. Not because it understands those feelings, but because it knows how to combine old patterns to create new ones,” explained Thomas Dietterich, professor emeritus of computer science at Oregon State University.
“It is tied to human nature. People can become attached to many things, including robots.”
CEO of Talkr.ai
In other words, even though AI systems generate answers based on data produced by humans, this does not mean they are sentient. “Today, it is all about imitation. Artificial intelligence has not reached that stage of consciousness,” says Katya Lainé, co-founder and CEO of Talkr.ai, a company that helps businesses improve their interactions with the public through virtual assistants.
Others also believe it is impossible today to determine sentience, or to distinguish a robot designed to mimic social interactions from one that might actually feel what it expresses. “You cannot distinguish between feeling and not feeling based on the sequence of words that appear, because these are just learned patterns,” explained another Google engineer who worked on LaMDA.
A perception related to human nature
That said, if individuals see AI systems, and chatbots in particular, as people, this is not necessarily a drawback. According to Katya Lainé, it is possible to mimic sentience by scripting the responses a system will give. “We can pre-configure the way the assistant will behave with people; it can be empathetic and imitate emotions,” she says. Added to this is the fact that people tend to bond with things and even feel emotions for them. “It is tied to human nature. People can become attached to many things, in this case robots, just as they can become attached to pets or anything else,” says the CEO of Talkr.ai.
“This is not for tomorrow. Today, a machine cannot experience emotions as a human truly does, fully aware of being happy, sad, or afraid.”
CEO of Talkr.ai
With advances in artificial intelligence, it has become difficult for some people to detach from these systems and not see them as people, especially when they can imitate the deceased. One young Canadian, for example, spent several months talking to a system that imitated his dead girlfriend after providing data to power it, as the San Francisco Chronicle reported in 2021.
Sentient artificial intelligence, a reality that remains out of reach
Despite Blake Lemoine’s assurances, Katya Lainé, like many other experts, considers that we are far from sentient AI. “This is not for tomorrow. Today, a machine cannot experience emotions as a human truly does, fully aware of being happy, sad, or afraid,” the Talkr.ai CEO confirms.
Finally, some specialists, such as Michael Wooldridge, professor of computer science at the University of Oxford, believe other concerns about artificial intelligence are more pressing and realistic. Speaking to The Guardian recently, he said that the excitement around sentience distracts from the problems AI is currently causing in society, such as bias. Technologies such as facial recognition and robotics can already be racist and sexist because of the data used to train them, posing a risk to certain groups such as LGBTQIA+ people and women. As Katya Lainé points out, AI highlights these biases because they exist in society: a system that learns from people reproduces their mistakes.