Twitter has launched an investigation into its neural network, which appears to be racially biased. When cropping an image for its photo preview, the algorithm consistently favours the faces of white people – almost always displaying a lighter-skinned face and cropping out the Black people also featured in the photo.
A college professor first raised the design issue on Twitter, though initially in relation to Zoom's AI software. You can see the tweet below, which drew attention to the fact that Zoom was erasing the face of his Black colleague whenever he used a virtual background while video chatting (for more successful design results, see our pick of web design tools).
any guesses? pic.twitter.com/9aIZY4rSCX (September 19, 2020)
Through the course of the thread, Colin Madland realised that Twitter was also defaulting to showing only his own face in photo previews, leaving out that of his Black colleague – suggesting a wider issue with how the AI behaves.
A separate Twitter "experiment" then began (see it below), in which cryptographic engineer Tony Arcieri compared photo previews featuring Mitch McConnell and Barack Obama to see which face the Twitter algorithm would favour.
Trying a horrible experiment... Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia (September 19, 2020)
Arcieri and others edited the images in myriad ways to check the results – inverting the colours, adjusting the brightness of parts of the pictures, and changing the backgrounds. All attempts seemed to point to an AI-powered racial bias, with Obama only appearing in the preview once the colours had been altered.
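For the curious, the kinds of pixel edits the testers applied (inverting colours, lightening an image) are simple arithmetic on RGB values. The sketch below is illustrative only – it operates on plain (R, G, B) tuples rather than real image files, and the function names are our own, not anything the testers used:

```python
# Illustrative sketch of the pixel edits testers applied before
# re-uploading photos: colour inversion and lightening. Works on
# lists of (R, G, B) tuples to stay dependency-free; in practice
# the testers used ordinary image editors.

def invert(pixels):
    """Invert each colour channel (255 minus the value)."""
    return [(255 - r, 255 - g, 255 - b) for r, g, b in pixels]

def lighten(pixels, amount=40):
    """Brighten every channel, clamping to the 0-255 range."""
    return [tuple(min(255, c + amount) for c in px) for px in pixels]

sample = [(10, 20, 30), (200, 220, 240)]
print(invert(sample))   # [(245, 235, 225), (55, 35, 15)]
print(lighten(sample))  # [(50, 60, 70), (240, 255, 255)]
```

Inversion flips dark regions light and vice versa, which is why it was a revealing test: if the crop followed Obama's face only after inversion, brightness – not facial features – was likely driving the algorithm's choice.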
Other Twitter users joined in with testing the algorithm, experimenting with Lenny and Carl from The Simpsons, and even dogs.
I tried it with dogs. Let's see. pic.twitter.com/xktmrNPtid (September 20, 2020)
Twitter's Liz Kelley thanked everyone for bringing attention to the issue, stating that the social network has "more analysis to do": "We tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it's clear that we've got more analysis to do. We'll open source our work so others can review and replicate."
There's clearly more work to do on the algorithms that power AI, and we're glad Twitter is taking responsibility for the issue with its software. With AI increasingly shaping our day-to-day experiences, it's vital that it is built to reflect the complex nuances of society's needs. If you'd like to make sure your design works for everyone, see our guide to inclusive web design.