Is there intelligence in artificial intelligence?

This article was republished from The Conversation France.

Nearly ten years ago, in 2012, the scientific world marveled at the exploits of deep learning. Three years later, this technology allowed AlphaGo to defeat the Go champions. Some were frightened. Elon Musk, Stephen Hawking and Bill Gates worried about the approaching end of humanity, replaced by an AI that had escaped all control.

Wasn’t that a bit of a stretch? That is exactly what the AI itself thinks. In an article written in 2020 in The Guardian, GPT-3, a giant neural network with 175 billion parameters, states:

“I’m here to convince you not to worry. Artificial intelligence will not destroy humans. Trust me.”

At the same time, we know that the power of machines keeps increasing. Training a network like GPT-3 was literally out of reach just five years ago. It is impossible to know what its successors will be able to do in five, ten or twenty years. If today’s neural networks can replace dermatologists, why wouldn’t they end up replacing us all?

Let’s go back to the question.

Are there human mental skills that are still beyond the reach of artificial intelligence?

We immediately think of skills that involve our “intuition” or our “creativity.” No luck: AI claims to challenge us in these areas as well. As proof, artworks generated by software have sold at high prices, some fetching nearly half a million dollars. On the musical side, everyone will of course have their own opinion, but we can already recognize acceptable bluegrass, or something roughly resembling Rachmaninoff, in the imitations produced by MuseNet, created, like GPT-3, by OpenAI.

Will we soon have to submit, resigned, to the control of AI? Before calling for rebellion, let’s try to see what we are dealing with. Artificial intelligence rests on several techniques, but its recent success is due to just one: neural networks, especially those of deep learning. Yet a neural network is nothing more than a machine that associates. The deep network that everyone was talking about in 2012 associated images — a horse, a boat, a mushroom — with the corresponding words. Hardly enough to cry genius.


However, this associative mechanism has the somewhat miraculous property of being “continuous”. Present the network with a horse it has never seen, and it recognizes it as a horse. Add noise to the image, and it is not bothered. Why? Because the continuity of the process guarantees that if the input to the network changes slightly, its output will change slightly as well. If we force the network, still hesitant, to settle on its best answer, that answer probably won’t change: a horse remains a horse, even if it differs from the learned examples, even if the image is noisy.
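This continuity property can be illustrated with a toy model (not the networks the article describes, just a minimal sketch under made-up data): a classifier that scores an input by its distance to class prototypes. Because the input-to-score mapping is continuous, a small perturbation of the input barely moves the scores, and the winning class usually stays the same.

```python
import random

# Toy "associative" classifier: each class is a prototype vector, and an input
# is assigned to the class whose prototype is closest. The mapping from input
# to scores is continuous, so small input changes give small score changes.
# Prototypes and inputs here are invented purely for illustration.
PROTOTYPES = {
    "horse": [1.0, 0.0, 0.0],
    "boat": [0.0, 1.0, 0.0],
    "mushroom": [0.0, 0.0, 1.0],
}

def classify(x):
    # Score each class by negative squared distance to its prototype;
    # the best (largest) score wins.
    scores = {label: -sum((a - b) ** 2 for a, b in zip(x, p))
              for label, p in PROTOTYPES.items()}
    return max(scores, key=scores.get)

random.seed(0)
clean = [0.9, 0.1, 0.05]                               # an unseen "horse-like" input
noisy = [v + random.uniform(-0.1, 0.1) for v in clean]  # same input, plus noise

print(classify(clean))  # horse
print(classify(noisy))  # horse: small input change, same answer
```

The horse remains a horse even though neither the clean nor the noisy input matches any stored prototype exactly; continuity fills in the gap.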

Association is not enough

Fine, but why call such associative behavior “intelligent”? The answer seems obvious: it can diagnose melanoma, grant bank loans, keep a car on the road, detect pathologies in physiological signals, and so on. Thanks to their associative power, these networks acquire forms of expertise that take humans years of study to build. And when one of these skills, for example writing a newspaper article, seems to hold out for a while, it suffices to feed the machine more examples, as happened with GPT-3, for it to start producing convincing results.

Is that really being intelligent? No. This kind of performance represents, at best, only a small facet of intelligence. What neural networks do resembles learning by heart. It isn’t quite that, of course, since these networks continuously fill in the gaps between the examples they have been shown. Let’s say it is almost-by-heart. Human experts, whether doctors, pilots or Go players, often do nothing else when they decide reflexively, thanks to the vast quantity of examples learned during their training. But humans have many other capabilities.


Learning to calculate or to reason over time

A neural network can never learn to calculate. Associating operations like 32 + 73 with their results has its limits. Networks can only reproduce the strategy of the dunce who tries to guess the result, and sometimes gets it right. Calculation too hard? What about a basic IQ test, such as: continue the sequence 1223334444. Association by continuity is again of no help in seeing the structure — n repeated n times — and continuing it with five 5s. Still too hard? Associative programs cannot even guess that an animal dead on Tuesday will not be alive on Wednesday. Why? What is it that they are missing?
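The rule behind 1223334444 is trivial to state symbolically, which is precisely the article’s point: a few lines of code capture a structure that association by continuity has no way to discover. A minimal sketch (function name is my own):

```python
# The "obvious" structure behind 1223334444: each integer n is repeated n times.
def n_times_n(upto):
    """Return the sequence 1, 2, 2, 3, 3, 3, ... up to the integer `upto`."""
    out = []
    for n in range(1, upto + 1):
        out.extend([n] * n)
    return out

print("".join(map(str, n_times_n(4))))  # 1223334444
print("".join(map(str, n_times_n(5))))  # 122333444455555 -- continues with five 5s
```

Stated as a rule, the continuation is immediate; stated as a pattern of digits to be interpolated, it is invisible to a purely associative learner.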

Modeling in cognitive science has revealed the existence of several mechanisms, other than association by continuity, all of which are components of human intelligence. Because their expertise is entirely computed in advance, neural networks cannot reason in time to decide that a dead animal remains dead forever, nor understand the meaning of the sentence “he is still not dead” and the oddity of this other sentence: “he is not always dead.” Digesting large amounts of data in advance does not allow them to spot novel structures that are obvious to us, like the groups of identical numbers in the sequence 1223334444. Their almost-by-heart strategy is also blind to unprecedented anomalies.

Anomaly detection is an interesting case, because it is often how we gauge the intelligence of others. A neural network will not “see” that a nose is missing from a face. By continuity, it will keep recognizing the person, or perhaps confuse them with someone else. But it has no way of realizing that the absence of a nose in the middle of a face constitutes an anomaly.


There are many other cognitive mechanisms that remain inaccessible to neural networks. Research is under way to automate them. They carry out operations computed at processing time, whereas neural networks merely perform associations learned in advance.

With a decade of hindsight on deep learning, the informed public is beginning to see neural networks as “super-automation” rather than as intelligence. For example, the press recently reported on the astonishing performance of the DALL-E program, which produces creative images from verbal descriptions (for example, the images DALL-E imagines from the phrase “avocado armchair,” visible on the OpenAI site). We now hear far more measured judgments than the alarmed reactions that followed the AlphaGo match: “It’s absolutely amazing, but we must not forget that this is an artificial neural network, trained to accomplish a task; there is neither creativity nor any form of intelligence.” (Fabian Chauvier, France Inter, January 31, 2021)

No form of intelligence? Let’s not exaggerate, but let’s remain clear-eyed about the huge gulf that separates neural networks from what a true AI would be.

Jean-Louis Dessalles is the author of “Des intelligences très artificielles,” published by Odile Jacob (2019). Lecturer, Institut Mines-Télécom (IMT).
