We’ve lost count of the books, documentaries, and newspaper articles warning us about algorithmic threats to society, such as the dangers of technological bias and the manipulation of opinion on social networks. The scandals of the past three years explain this bleak, generalized picture. The Cambridge Analytica case in 2018, with its illegal use of data from millions of Facebook users, marked the beginning of an era of distrust in algorithmic science, a relatively young discipline. We should not, however, fall into Manichaeism, all good or all bad: reality is more subtle.
Many scientists and engineers have organized to warn of the misuse of algorithms that are harmless when used well or applied in other contexts. Facial recognition tools are especially singled out. They are widely controversial, as algorithmic bias creates a high risk that these tools fail to recognize (or misidentify) people of color, or that they enable mass surveillance of citizens. This is exactly what the scientists Joy Buolamwini, Cathy O’Neil, and Meredith Broussard demonstrate in the documentary Coded Bias. Yet these facial recognition technologies rely on image-identification algorithms used elsewhere in many harmless applications, such as recognizing, from a picture or video, the ripeness of fruit on a production line.
Classification algorithms group individuals into categories according to statistical similarities in their behavior and/or personal data. Someone in a category is likely to like the content that other members of that category like: a series on a video-on-demand platform, a piece of clothing on an e-commerce site, or a post on Twitter. Misused, these algorithms can construct rigid categories that lock users into opaque circles through which they observe the world and form opinions. We no longer see what people different from us see, like, or consume. The American activist Eli Pariser describes this as the filter bubble effect.
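The grouping-by-similarity idea described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration (the user names, items, and the 0.3 similarity threshold are invented for the demo): users are compared by the overlap of items they like, and a user is recommended items liked by statistically similar users. It also makes the filter bubble visible: a dissimilar user's tastes never reach the recommendation.

```python
# Minimal sketch of similarity-based recommendation (hypothetical data).
# Users are grouped by the overlap of their liked items; a user is then
# shown items liked by others in the same "category".

def jaccard(a, b):
    """Similarity between two sets of liked items (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(user, likes, threshold=0.3):
    """Suggest items liked by similar users that `user` has not seen."""
    user_items = likes[user]
    suggestions = set()
    for other, items in likes.items():
        if other != user and jaccard(user_items, items) >= threshold:
            suggestions |= items - user_items  # only unseen items
    return sorted(suggestions)

likes = {
    "alice": {"series_a", "series_b"},
    "bob":   {"series_a", "series_b", "series_c"},
    "carol": {"series_x"},
}

# bob is similar to alice, so his extra series is suggested;
# carol is dissimilar, so her tastes stay outside alice's bubble.
print(recommend("alice", likes))  # → ['series_c']
```

Real recommender systems replace the Jaccard overlap with learned embeddings and much larger behavioral data, but the locking-in dynamic is the same: only content endorsed by similar users crosses the threshold.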
In the documentary The Social Dilemma, the engineer Tristan Harris draws on his own experience at Google to argue that these algorithms threaten democracy. But the same algorithms, when used correctly, also make it possible to estimate a patient’s chances of success with a medical treatment, or to suggest relevant articles on a newspaper’s website.
While it is essential and in the public interest to keep warning about the risks of algorithms, it is just as important to articulate what can be done well with these tools. For this reason, the forthcoming European regulation on these issues should move away from a conservative, Manichaean stance on this science, and become a global reference built on an accurate and realistic vision… not a dystopia.