There is a mystery on Twitter, and it raises questions about the way the platform selects which part of an image to display to users.
Usually, when you post an image on Twitter, it is automatically cropped, and a thumbnail slightly different from the original appears in the feed. To see the whole image, you must open the tweet. However, users have found that if you post an image containing two faces, one white and one black, Twitter will consistently crop the image to show the white face, leaving the black face invisible. The case began with portraits of US Senator Mitch McConnell (White) and former US President Barack Obama (Black). The result is the same every time, whether the white face is placed at the top and the black face at the bottom or vice versa. That is not all: between a man and a woman, the man’s face is also favored.
Twitter acknowledged the problem and apologized, but there is as yet no precise explanation; an investigation is underway. The case perfectly illustrates what is called the question of “the ethics of algorithms”. To understand it fully, you need to know how image cropping on Twitter works. It is automatic, run by what is called a neural network: an artificial-intelligence program trained on millions of images, which learns to find the most eye-catching, or “salient”, region of a picture and centers the crop on it.
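Mechanically, that kind of cropper is simple once the scores exist: the model assigns every pixel a saliency score, and the thumbnail window is centered on the highest-scoring spot. Here is a minimal sketch in Python, where the saliency map is supplied directly rather than predicted by a real neural network; the function name and toy data are illustrative, not Twitter’s actual code:

```python
import numpy as np

def crop_around_peak(image, saliency, crop_h, crop_w):
    """Center a crop window on the most salient pixel.

    `saliency` is a 2D score map the same size as `image`. In a real
    pipeline it would come from a trained neural network; here it is
    just any array of scores (a hypothetical stand-in).
    """
    h, w = saliency.shape
    # Location of the highest saliency score.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so it stays entirely inside the image.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 10x10 "image" whose saliency peaks at row 2, column 7.
img = np.arange(100).reshape(10, 10)
sal = np.zeros((10, 10))
sal[2, 7] = 1.0
thumb = crop_around_peak(img, sal, 4, 4)
# The 4x4 thumbnail contains the peak pixel (shifted only as far as
# the image border requires).
```

The key point for the bias story: whichever face the model scores as more salient wins the crop, so any skew in those scores translates directly into who disappears from the thumbnail.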
The most likely explanation is therefore that it was trained on many more white faces than black faces, and so has trouble recognizing a black face in a photo.
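That hypothesis is easy to illustrate with a toy model: train even a trivially simple “detector” on a skewed dataset, and it performs worse on the under-represented group, with no group label appearing anywhere in the code. The sketch below uses entirely synthetic numbers, not Twitter’s data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "face features": group A clusters near 0, group B near 3.
group_a = rng.normal(0.0, 1.0, size=(900, 2))   # 90% of training data
group_b = rng.normal(3.0, 1.0, size=(100, 2))   # 10% of training data
train = np.vstack([group_a, group_b])

# A naive "detector" template: the mean of all training faces.
# Because group A dominates, the template sits close to group A.
template = train.mean(axis=0)

def confidence(faces):
    # Higher when a face is closer to the learned template.
    return -np.linalg.norm(faces - template, axis=1)

# Fresh test faces drawn from the same two distributions.
test_a = rng.normal(0.0, 1.0, size=(1000, 2))
test_b = rng.normal(3.0, 1.0, size=(1000, 2))

threshold = -2.5  # declare "face detected" above this confidence
rate_a = np.mean(confidence(test_a) > threshold)
rate_b = np.mean(confidence(test_b) > threshold)
# rate_a comes out much higher than rate_b: the under-represented
# group is detected less often, purely because of the data imbalance.
```

Nothing in this code mentions race; the disparity emerges from the composition of the training set alone, which is exactly the kind of failure suspected here.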
Recall that an algorithm does not “see” a person in an image; it only detects pixels, without understanding what they represent. Strictly speaking, then, the AI is not racist. It can, however, reproduce racist biases, conscious or unconscious, coming from those who programmed it. It can also suffer from technical flaws that end up looking like racism. There have been similar issues before at Facebook, as well as with the Zoom video-conferencing software, whose virtual-background system tended to erase the faces of black people.
This type of problem is not insoluble, but it must be taken into account when designing algorithms and training AIs, so that discrimination is not reproduced virtually.