Twitter’s Picture Preview Feature is Both Racist and Colorist

Twitter’s Algorithmic Faux Pas

Twitter has demonstrated yet another example (out of many) of bigotry and racism in deep computer vision algorithms. Twitter users ran an informal experiment: they posted tall composite images containing photos of two or three people of different skin tones, for example a Black person paired with a white (or fair-skinned) person, a lighter-skinned Black man paired with a darker-skinned Black woman, or a dark-skinned Black person, a light-skinned Black person, and a white (or fair-skinned) person together, then watched which face the picture preview chose to display. Over and over, the preview cropped to the lighter face. To my knowledge, this is the first community algorithmic audit (or at least the first one this popular).
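
For readers who want to picture the mechanics, the audit amounts to stacking two photos far apart in one tall image and checking which face the preview crop keeps. Below is a minimal sketch in Python; `preview_crop` is a hypothetical stand-in for Twitter's cropper (there is no public API with this signature), and having it return a crop box is my assumption.

```python
# A minimal sketch of the community audit. `preview_crop` is a hypothetical
# stand-in for Twitter's cropper, assumed to return a (left, upper, right,
# lower) crop box; it is not a real API.
from PIL import Image

def stack_vertically(top, bottom, gap=800):
    """Place two photos far apart so the preview crop must choose one."""
    width = max(top.width, bottom.width)
    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), "white")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, top.height + gap))
    return canvas

def audit_pair(photo_a, photo_b, preview_crop):
    """Run the cropper on both orderings so position cannot explain the
    result. ['top', 'bottom'] means the crop followed photo_a both times;
    ['bottom', 'top'] means it followed photo_b both times."""
    chosen = []
    for top, bottom in [(photo_a, photo_b), (photo_b, photo_a)]:
        composite = stack_vertically(top, bottom)
        left, upper, right, lower = preview_crop(composite)  # hypothetical call
        center_y = (upper + lower) / 2
        chosen.append("top" if center_y < composite.height / 2 else "bottom")
    return chosen
```

Swapping the positions is what makes this a real audit: if the crop tracks the same face in both orderings, the preference is about the face, not about where it sits in the image.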

[Image: test photos used in one of the audit tweets]

Why is it happening?

Is it the data Twitter used to train the deep network? Even though I am currently investigating why computer vision algorithms go awry in the wild from a machine learning perspective, this issue threw me for a loop at first glance. What would the training data even need to look like for this to happen? Whatever the cause, it would be great if Twitter released a diagnosis and told us what happened. Knowing what happens under the hood when algorithms go awry would also advance machine learning ethics research.
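
For context on what "under the hood" likely looks like: Twitter has described its preview crop as driven by a learned saliency model that predicts where people tend to look, with the crop centered on the most salient region. Here is a minimal sketch of that cropping step, assuming the model outputs a per-pixel saliency map; the sliding-window search is my illustration, not Twitter's published code. The point is that any skew in what the model learned to call "salient" passes straight through to the crop, and an argmax-style choice turns a small scoring preference into an all-or-nothing outcome.

```python
# A minimal sketch of saliency-driven cropping, assuming a model has
# already produced a per-pixel saliency map for the image.
import numpy as np

def crop_to_most_salient(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Slide a crop window over the saliency map and keep the window with
    the highest total predicted saliency. Whatever faces the model scores
    higher will win this search every time."""
    H, W = saliency.shape
    # Integral image makes each window sum an O(1) lookup.
    integral = np.pad(saliency.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    best, best_box = -np.inf, None
    for y in range(H - crop_h + 1):
        for x in range(W - crop_w + 1):
            s = (integral[y + crop_h, x + crop_w] - integral[y, x + crop_w]
                 - integral[y + crop_h, x] + integral[y, x])
            if s > best:
                best, best_box = s, (x, y, x + crop_w, y + crop_h)
    return best_box  # (left, upper, right, lower)
```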

Why hasn’t it been fixed yet?

Twitter has claimed that they actually did test this feature before deploying it [5,7,9]. I do not believe this. A single Twitter user came up with an experiment and immediately exposed the bigotry inherent in the algorithm. If one user can show this in a single experiment, then Twitter may need new quality assurance testers, because their “experiments” fell extremely short of due diligence.

This example demonstrates that taking responsibility for decision-making algorithms really matters. If you are shipping a product to billions of users, then “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do [5,7]” is not going to cut it, especially on a platform with which Black Twitter, and Black women in particular, have many other gripes. Twitter and every other technology and social media company have a responsibility to their users, especially when their products control massive flows of information (and potentially misinformation) to billions of users globally.
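
For contrast, here is a minimal sketch of the kind of pre-deployment check an afternoon of adversarial testing would produce: aggregate outcomes from paired audits like the one sketched earlier and test whether the selection rate departs from the 50% a fair cropper should show. The outcome encoding and threshold are my assumptions; scipy's `binomtest` is a real function.

```python
# A minimal sketch of a pre-deployment bias check, assuming paired audit
# outcomes have already been collected (e.g. with audit_pair above).
from scipy.stats import binomtest

def selection_bias_test(chose_lighter_face, alpha=0.01):
    """chose_lighter_face[i] is True when the crop picked the lighter-skinned
    face in trial i. A fair, position-invariant cropper should pick each
    face about half the time, so we test against p = 0.5."""
    k, n = sum(chose_lighter_face), len(chose_lighter_face)
    result = binomtest(k, n, p=0.5)
    return {"selection_rate": k / n,
            "p_value": result.pvalue,
            "biased": result.pvalue < alpha}
```

A check like this takes a few dozen face pairs and a few lines of statistics; “we did not find evidence of bias” is hard to square with how quickly the community found it.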

Support Black Technology Ethicists

If you enjoyed this article, please clap for it and follow me here for future articles. You can also read my article Black Feminist Musings on Algorithmic Oppression, appearing in FAccT 2021, and cite it if you use it in your work. I also post my latest articles on my Twitter.

References

[1] Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77–91.

Lelia Hampton

I am a graduate student in MIT EECS + CSAIL. As an ML researcher, I care about AI + ethics, computational sustainability, and optimization, among other things.