This month, the engineer Yannic Kilcher amused himself by training an algorithm, intended to power a chatbot, on a dataset deliberately built from abusive and discriminatory conversations. Unsurprisingly, when exposed to real exchanges with ordinary users, the chatbot made controversial and inappropriate remarks. The engineer then announced that he had created what he called the most terrible algorithm. A wave of protest followed among scientists and engineers questioning the conditions of such an experiment, because science is not a game and must be conducted according to good practice.
In the experiment in question, Yannic Kilcher trained an algorithm on a set of offensive conversations drawn from a discussion board of the 4chan imageboard. The algorithm, then embedded in a text chat agent, reproduced in its exchanges with users the recurring patterns of the conversations it had learned from: racism, sexism and other forms of discrimination, along with hostility towards users who, for their part, did not know they were talking to an algorithm. But why have so many scientists and engineers condemned this algorithmic experiment?
Experiment with “blind” guinea pigs
First of all, the idea of celebrating the creation of the “most terrible” algorithm is cynical and reprehensible. From the standpoint of the scientific method, one can also question the conditions under which this test was designed and carried out. In fact, it appears that no ethical dimension was taken into account in the experiment. Clearly defining what is expected of the test, what its results could contribute to science, and even the harm it might cause are among the points that should have been addressed. Some researchers recommend that such an experiment be validated and supervised by an ethics committee. Finally, one may ask whether it is honest not to inform users of their role in the experiment: blind guinea pigs.
Let’s be clear: conducting such an experiment could serve a purpose, such as informing the general public about the threat of algorithmic biases arising from statistically skewed datasets. It could also serve as a starting point for a discussion of algorithmic governance, through best practices for the design, development, testing and use of these digital entities. But it must be done with caution, without risking a horror show worthy of the Frankenstein legend.
To paraphrase Ben Parker – Spider-Man’s uncle – or Franklin Roosevelt, or Winston Churchill: with great power comes great responsibility. A responsibility that will only grow in the future, as the algorithms available to the community, and the actors behind them, become ever more powerful.