Twitter’s Picture Preview Feature is Both Racist and Colorist

Lelia Hampton
7 min read · Dec 13, 2020


In the past few years, Black people have been inundated with the ways that technology hates us just as much as white supremacy does (then again, technology is in many ways an extension and weapon of white supremacy). Indeed, Black people are no strangers to racism, and dark-skinned Black people are no strangers to colorism. Dr. Joy Buolamwini and Dr. Timnit Gebru have already demonstrated in their canonical work Gender Shades that computer vision algorithms built on deep neural networks are colorist against dark-skinned Black women [1]. Twitter’s algorithm reinforces their finding that technology companies’ deep computer vision algorithms are in fact colorist and thereby racist. In my FAccT 2021 paper Black Feminist Musings on Algorithmic Oppression, I argue that this colorism is a form of violence handed down by these racist technologies (and thereby their creators) [2].

Twitter’s Algorithmic Faux Pas

Twitter has demonstrated yet another example (out of many) of bigotry and racism in deep computer vision algorithms. Twitter users ran an experiment: they posted a single image containing two photos, one of a Black person and one of a white (or fair-skinned) person; or one of a lighter-skinned Black person and one of a dark-skinned Black person; or three photos of a dark-skinned Black person, a light-skinned Black person, and a white (or fair-skinned) person, and then watched which face the preview crop kept. To my knowledge, it is the first community algorithmic audit (or at least the first one this popular).
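Twitter has said its preview crop is driven by a saliency model [5]. As a rough illustration of how such an audit could be replicated outside the platform, here is a minimal sketch using OpenCV’s off-the-shelf spectral-residual saliency detector as a stand-in for Twitter’s proprietary network. The file names, gap size, and crop height are my own placeholders, not anything Twitter has published:

```python
# A sketch of the community audit: stack two portraits into one tall
# image, run a generic saliency model over it, and see which portrait
# a saliency-driven cropper would keep in the preview.
# Requires opencv-contrib-python and numpy.
import cv2
import numpy as np

def stacked_test_image(path_top, path_bottom, gap=800):
    top = cv2.imread(path_top)
    bottom = cv2.imread(path_bottom)
    width = min(top.shape[1], bottom.shape[1])
    top = cv2.resize(top, (width, top.shape[0]))
    bottom = cv2.resize(bottom, (width, bottom.shape[0]))
    spacer = np.full((gap, width, 3), 255, dtype=np.uint8)  # white gap
    return np.vstack([top, spacer, bottom]), top.shape[0]

def saliency_crop_row(image, crop_height=300):
    # Spectral-residual saliency as a stand-in for Twitter's model.
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    assert ok
    # Slide a crop window down the image; keep the most salient window.
    row_scores = saliency_map.sum(axis=1)
    window = np.convolve(row_scores, np.ones(crop_height), mode="valid")
    return int(window.argmax())  # top row of the chosen crop

image, top_height = stacked_test_image("person_a.jpg", "person_b.jpg")
row = saliency_crop_row(image)
chosen = "top portrait" if row < top_height else "bottom portrait"
print(f"Saliency-driven crop starts at row {row}: favors the {chosen}")
```

Swapping the two portraits between the top and bottom positions, as the Twitter users did, controls for any positional preference in the cropper.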

Here’s an example of the first faux pas back in September 2020.

If Twitter is cropping out arguably the most powerful Black man in the United States, and possibly the world, then no Black person is safe. Dr. Safiya Noble’s work demonstrates that the Obamas are no strangers to algorithmic oppression [3]: at one point during Obama’s presidency, a Google Maps search for “N-word House” turned up the White House, and a Google search for Michelle Obama associated her with apes [3]. This is in no way an endorsement of the Obamas, but it is to say that Black people (and other oppressed groups) are on the whole not safe when it comes to the inequity of machine learning algorithms.

A brief aside about the Google search algorithm. Last year, a professor of mine and I audited the Google search engine by searching “black women” to see how Google was handling search suggestions since the release of Safiya Noble’s Algorithms of Oppression [3]. We found that Google had applied an ad hoc fix: it simply displayed no suggestions at all. Today, I ran the audit again, and apparently suggestions are now being provided. See the audit below.

Returning to the racist Twitter algorithm: despite this issue surfacing three months ago, it is still a problem as of December 6, 2020.

Test images for the below tweet

In December 2020, a user tested the two pictures above to see if Twitter had fixed its algorithm yet, which yielded the following.

Besides the bottom banner, the preview algorithm yielded almost identical results.

Machine learning systems are decision-making systems. In the case of Twitter’s picture preview algorithm, they decide whom or what users see before they click on a photo to view it in full. One might say this is less serious than some other algorithmic faux pas, but that is simply not true. The truth is that this picture preview feature could hinder the socioeconomic livelihood of Black entrepreneurs, Black models who may appear in the same picture as lighter or white models, and so many others. This algorithmic racism can have, and most surely has had, tangible impacts on Black and Brown people, and on dark-skinned Black and Brown people in particular.

A Black artist demonstrated the impact of Twitter’s algorithm on their artistic livelihood. As shown below, they used a fair-skinned woman and a dark-skinned Black woman for their test.

This Twitter user goes on to demonstrate a similar outcome when they replicate the test with the same fair-skinned woman and a light-skinned Black man.

In fact, they go on to demonstrate that the algorithm is not only racist but also colorist. The artist used a photo with a lighter-skinned Black man and a dark-skinned Black woman to demonstrate that the algorithm favors lighter skin overall.

On the one hand, Black people’s photos are taken without permission from social media, cloud storage sites, and other digital avenues by companies and law enforcement for use in computer vision algorithms. On the other hand, when we want to voluntarily use our photos in technology built on computer vision algorithms, possibly for our economic livelihood, we still receive disadvantageous outcomes. It’s a double bind: we’re damned if we do and damned if we don’t.

I will share one last experiment, which pushed the boundaries further by playing with spatial differences in a creative way. Like the artist above, this user covered several bases and went beyond simple tests. Their experiment yielded the following:

Why is it happening?

Is it the data that Twitter used to train the deep network? Even though I am currently investigating, from a machine learning perspective, why computer vision algorithms go awry in the wild, this issue is throwing me for a loop at first glance. What would the training data even need to look like for this to happen? Whatever the cause, it would be great if Twitter released a diagnosis and told us what happened. Knowing what goes on under the hood when algorithms go awry would also advance machine learning ethics research.
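One way researchers could begin to diagnose this, absent a disclosure from Twitter, is to probe whether a saliency model systematically scores lighter faces higher. Below is a minimal sketch, again assuming a generic off-the-shelf saliency detector rather than Twitter’s actual model; `light_faces/` and `dark_faces/` are hypothetical directories of matched face crops:

```python
# Probe a saliency model for a skin-tone disparity: compare the mean
# saliency it assigns to matched sets of lighter- and darker-skinned faces.
import glob
import cv2
import numpy as np

def mean_face_saliency(paths):
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    scores = []
    for path in paths:
        face = cv2.imread(path)
        ok, saliency_map = detector.computeSaliency(face)
        if ok:
            scores.append(float(saliency_map.mean()))
    return np.mean(scores)

light = mean_face_saliency(glob.glob("light_faces/*.jpg"))
dark = mean_face_saliency(glob.glob("dark_faces/*.jpg"))
print(f"mean saliency, lighter faces: {light:.4f}")
print(f"mean saliency, darker faces:  {dark:.4f}")
# A persistent gap here would point at the model (or its training
# data) rather than at any one unlucky photo pair.
```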

Why hasn’t it been fixed yet?

Twitter has claimed that it actually did test this feature before deploying it [5,7,9]. I do not believe this. A single Twitter user came up with an experiment and immediately exposed the bigotry inherent in the algorithm. If they can show this in a single experiment, then Twitter may need some new quality assurance testers, because the “experiments” fell extremely short of due diligence. This example demonstrates that taking responsibility for decision-making algorithms is really important. If you are shipping a product to billions of users, then “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do” [5,7] is not going to cut it, especially on a platform with which Black Twitter, and Black women in particular, have many other gripes. That is, Twitter and every other technology and social media company have a responsibility to their users, especially when their products control massive flows of information (and potentially misinformation) to billions of users globally.
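For contrast, here is roughly what a pre-ship bias test could look like in its simplest form: run the cropper over many light/dark photo pairs and check whether its choices deviate from a fair 50/50 split. This is my own sketch of such a test, not Twitter’s methodology; `crop_favors_lighter` is a hypothetical wrapper around whatever pipeline produces the preview:

```python
# A minimal paired-image bias test: over n photo pairs, count how often
# the cropper keeps the lighter-skinned face, then ask whether that
# rate is consistent with an unbiased 50/50 coin flip.
from scipy.stats import binomtest

def audit(pairs, crop_favors_lighter):
    """pairs: list of (lighter_path, darker_path) tuples.
    crop_favors_lighter: callable returning True if the preview
    keeps the lighter-skinned face for a given pair."""
    hits = sum(crop_favors_lighter(a, b) for a, b in pairs)
    result = binomtest(hits, n=len(pairs), p=0.5)
    print(f"lighter face kept in {hits}/{len(pairs)} pairs "
          f"(p = {result.pvalue:.4g} under a fair-crop null)")
    return result.pvalue

# e.g. audit(test_pairs, crop_favors_lighter=my_cropper_wrapper)
```

A test this simple would have flagged the disparity the community audit surfaced in a single afternoon, which is what makes the claim of pre-deployment testing hard to credit.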

If your algorithm is harmful, why not fix it? Why not be transparent about the challenges while you fix it? I understand proprietary concerns, but Twitter has given us no updates since it happened despite promising to fix it [9], and the problem persists. Even though Black Twitter provides an active user base and a lot of free advertising to companies (see Popeye’s chicken sandwich, among others), we are the first to be thrown under the bus by the Twitter algorithm.

Support Black Technology Ethicists

If you enjoyed this article, please give it a hand and follow me here for future articles. You can also read my article Black Feminist Musings on Algorithmic Oppression, appearing in FAccT 2021, and cite it if you use it in your work. I also post my latest articles on my Twitter.

References

[1] Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability and Transparency.

[2] Lelia Hampton. 2021. Black Feminist Musings on Algorithmic Oppression. FAccT 2021.

[3] Safiya Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

[4] Ruha Benjamin. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.

[5] Alex Hern. September 21, 2020. Twitter apologises for ‘racist’ image-cropping algorithm. The Guardian. https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm

[6] Is Twitter’s image-cropping feature racist? Deutsche Welle. https://www.dw.com/en/twitter-image-cropping-racist-algorithm/a-55085160

[7] Anagha Srikanth. September 21, 2020. Twitter acknowledges the way it presents photos is racist. The Hill. https://thehill.com/changing-america/respect/diversity-inclusion/517442-twitter-acknowledges-the-way-it-presents-photos

[8] Georgia Coggan. September 21, 2020. Is Twitter’s algorithm racist? Creative Bloq. https://www.creativebloq.com/news/twitter-racist-algorithm

[9] Rachel Kraus. September 20, 2020. Twitter to investigate apparent racial bias in photo previews. Mashable. https://mashable.com/article/twitter-photo-preview-algorithmic-racial-bias/

[10] Kashmir Hill. June 24, 2020. Wrongfully Accused by an Algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Note: All views are my own and do not reflect those of any institutions or organizations of which I may be a member.
