Joy Buolamwini: examining racial and gender bias in facial analysis software

As a graduate student, Joy Buolamwini found that an AI system detected her face more reliably when she wore a white mask, prompting her research project Gender Shades. The project uncovered gender-classification bias built into commercial AI, showing that facial analysis technology is heavily skewed towards white, male faces.

Joy Buolamwini / The Algorithmic Justice League at MIT Media Lab (2018), by Joy Buolamwini / The Algorithmic Justice League. Barbican Centre

Scientist, activist and founder of the Algorithmic Justice League, Joy Buolamwini examines racial and gender bias in facial analysis software.


"I am a poet of code on a mission to show compassion through computation. I lead the Algorithmic Justice League, an organisation that uses art and research to highlight the social implications of artificial intelligence. In my practice, I have learned the importance of allowing intuition and play to guide my explorations. I now make artistic expressions informed by our algorithmic bias research to interrogate the limitations of AI, but my journey into studying the social impact of AI started because of my intimate frustrations with creating an art project at the Massachusetts Institute of Technology (MIT).

During my first year at the MIT Media Lab, I took a course called ‘Science Fabrication’. The premise of the course was to read science fiction and use it as inspiration to manifest a project that might otherwise be considered too impractical or fanciful. As part of the class we had futurist Stuart Candy conduct a workshop called ‘The Thing from the Future’. Participants drew cards that provided the constraints for an imagined future object."

"Using the parameters framed by my cards (object: artwork | arc: half a decade from now | mood: optimist) I sketched the Hall of Possibilities – a future space with paintings that you could walk into and there become another being. Gazing upon a painting would enable you to become a part of what you viewed. The next challenge was to take this imagined future and create a real-world object. While I couldn’t create paintings that people could walk into, I realised I could use a half-silvered mirror and an LCD screen to create a magical effect. By placing the special glass on a computer screen, I could change the reflection of the onlooker.

As I played with the effect, I created an object called the Aspire Mirror in 2015. The development of the Aspire Mirror was influenced by emotional devices like the empathy box and mood organ in Philip K. Dick’s science fiction classic Do Androids Dream of Electric Sheep? (1968), as well as Anansi the spider, a legendary creature from Ghanaian tales of shape-shifting. Perhaps by seeing ourselves as another, by shifting the shape of our reflections, we could engender more empathy. Staring into the Aspire Mirror, an onlooker could see reflected onto her face animals, quotes, symbols, or anything else we could code into the system.

Once I achieved the effect, I added a web camera with computer vision software. The software used AI-enabled facial analysis technology to track a face so that the animals or quotes would follow the reflection of the onlooker. At least that was the idea, but as I worked to build the Aspire Mirror, I noticed that the software had a hard time following my face consistently. One night after returning from a party with a Halloween mask, I started working on the mirror. As a last resort to get the project to work, I put on the white mask. In the mirror, I became a lion."
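The face tracking described here can be built from off-the-shelf computer-vision components. Below is a minimal sketch of the idea, assuming OpenCV's stock Haar-cascade face detector and a hypothetical overlay image ('lion.png'), rather than whatever software the Aspire Mirror actually used.

```python
# Minimal sketch of the Aspire Mirror idea: detect a face in the webcam
# feed and blend an overlay image onto it so the graphic follows the
# onlooker's reflection. The overlay filename is a hypothetical placeholder.
import cv2

# Pretrained frontal-face detector shipped with OpenCV (an assumption;
# the essay does not name the computer-vision software used).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
overlay = cv2.imread("lion.png")   # hypothetical overlay image
cam = cv2.VideoCapture(0)          # webcam behind the half-silvered mirror

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Resize the overlay to the detected face region and blend it in.
        patch = cv2.resize(overlay, (w, h))
        frame[y:y + h, x:x + w] = cv2.addWeighted(
            frame[y:y + h, x:x + w], 0.5, patch, 0.5, 0)
    cv2.imshow("Aspire Mirror sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam.release()
cv2.destroyAllWindows()
```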

Installation photo from the Barbican's AI: More than Human exhibition featuring Joy Buolamwini's Gender Shades project (2019), Barbican Centre

"Staring into the Aspire Mirror, an onlooker could see reflected onto her face animals, quotes, symbols, or anything else we could code into the system."


"I took the mask off and was no longer detected. I had my lighter-skinned colleagues try out the mirror and it worked almost instantaneously for their faces. When the time came to demo this art project, I played it safe and used a volunteer with lighter skin.

Still, the experience of coding in a white mask continued to haunt me. I took another sequence of MIT Media Lab courses, including ‘Toys to Think With’ and ‘Learning Creative Learning’. As part of that sequence I joined a team to create the UpBeat Walls project, where we explored the question: ‘What if you could paint walls with your smile?’

Using the same computer vision software and repurposing the code from the Aspire Mirror project, I programmed a system that would track the movement of a face, turning it into a digital paintbrush and projecting a trail of smiley faces onto a projection surface.

However, the UpBeat Walls project had the same issue as the Aspire Mirror: while lighter-skinned individuals had no problem painting walls with their face movements, darker-skinned individuals struggled. The experience was frustrating, but the issues with the technology did not seem urgent to me until I read the 2016 Perpetual Line-Up report released by Georgetown Law. The report showed that over 1 in 2 adults in the US (more than 117 million people) had their faces in facial recognition networks that could be searched by law enforcement using systems that had not been audited for accuracy.

In the UK, a subsequent report revealed that facial analysis systems used by law enforcement agencies had false positive match rates of over 90%; these false matches had misidentified more than 2,000 innocent people as criminals.

When I learned about the increased use of facial analysis systems in law enforcement and thought about my personal issues with the technology, I decided to share my story. In a 2016 TED talk, I spoke about how I coded in a white mask to have my face detected, and how the technology failed. Facial analysis technology, like many data-driven AI systems, is fundamentally based on pattern recognition. To teach a computer to detect a face, we use machine learning techniques that analyse large datasets of human faces. Over time, a machine learning model can be trained to detect a human face. However, if the faces in the dataset are not diverse, the model will struggle when presented with unfamiliar ones. In my case, a white mask was a closer fit to what the system had learned was a face than my actual human face."
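The point about skewed training data can be reproduced in miniature. The sketch below is a toy illustration on synthetic data, not real faces: a simple classifier is trained on a sample in which one group is heavily underrepresented, and its accuracy is then reported separately for each group.

```python
# Toy illustration on synthetic data (not real faces): a classifier trained
# on a sample dominated by one group learns that group's pattern and
# performs much worse on the group it has rarely seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Draw simple 2-D features and label them with a group-specific rule."""
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

rule_a = lambda X: X[:, 0] + X[:, 1] > 0   # pattern that holds for group A
rule_b = lambda X: X[:, 0] - X[:, 1] > 0   # a different pattern for group B

# Skewed training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, rule_a)
Xb, yb = make_group(50, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out samples from each group.
for name, rule in [("group A (well represented)", rule_a),
                   ("group B (underrepresented)", rule_b)]:
    Xt, yt = make_group(1000, rule)
    print(f"{name}: accuracy = {model.score(Xt, yt):.2f}")
```

Under these assumptions the model scores well on the well-represented group and close to chance on the underrepresented one.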

Joy Buolamwini / The Algorithmic Justice League at MIT Media Lab (2018), by Joy Buolamwini / The Algorithmic Justice League. Barbican Centre

"In my case, a white mask was a closer fit to what the system had learned was a face than my actual human face."


"After sharing my story, I thought someone might want to check my claims, and so I decided to run facial analysis systems from different tech companies on my TED profile image. As I carried out these analyses, I found most of the systems did not detect my face, and the ones that did labelled me male.

Knowing that the techniques used for facial analysis are also used for pedestrian tracking, I wondered: what would happen if self-driving cars couldn’t detect people of colour? Furthermore, I learned that companies like HireVue use facial analysis technology to inform hiring decisions. What if the system were unfamiliar with faces like mine and prevented qualified persons from gaining employment? In fact, Amazon scrapped an internal AI hiring tool that displayed gender bias. The tool was trained on ten years of hiring data and categorically gave a low rank to any application that included the term ‘women’ or listed certain women-only colleges. The AI picked up something I call the ‘coded gaze’ – a reflection of the preferences, priorities, and at times prejudices of those who have the power to shape technology. In the case of AI sorting through job applications, finding the words that lead to discrimination is fairly straightforward. However, when AI is analysing faces, it can be much harder to determine which attributes could lead to harmful discrimination.
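For the text case, "finding the words" can be as simple as reading the weights off a linear screening model. The sketch below is a hypothetical toy with invented applications and labels; it is not Amazon's system, which was never published.

```python
# Hypothetical toy sketch (invented applications and labels, not Amazon's
# unpublished system): with a linear text-screening model, the terms that
# drive decisions can be read directly off the learned weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

applications = [
    "captain of the women's chess club, software engineer",  # rejected
    "software engineer, open source contributor",            # hired
    "women's college graduate, data analyst",                 # rejected
    "data analyst, machine learning experience",              # hired
]
hired = [0, 1, 0, 1]  # invented historical decisions that encode a bias

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(applications)
model = LogisticRegression().fit(X, hired)

# The most negative coefficients are the terms the model has learned to
# penalise; with these invented labels, 'women' surfaces among them.
terms = vectoriser.get_feature_names_out()
for term, weight in sorted(zip(terms, model.coef_[0]), key=lambda t: t[1])[:5]:
    print(f"{term:12s} {weight:+.3f}")
```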

When I started looking at face datasets used in the development of facial analysis technology, I found that in some cases they contained 75% male faces and over 80% lighter faces. For these systems, data is destiny, and if the data is largely pale and male, AI trained on skewed data is destined to fail the rest of society – the undersampled majority – women and people of colour.

I decided to focus my MIT thesis on studying facial analysis technology. I launched a project called Gender Shades where I tested commercially sold AI systems from IBM, Microsoft, and Face++, a billion-dollar tech company in China with access to one of the largest datasets of Chinese faces. Since existing datasets tended to be dominated by pale-skinned and male faces, I created my own dataset, the Pilot Parliaments Benchmark, which was better balanced by gender and skin type. I ran the AI systems from each company on 1,270 faces from the Pilot Parliaments Benchmark dataset.

For the task of guessing the gender of a face, the results were stunning. For lighter-skinned males, error rates were no more than 1%, and for lighter-skinned females they were no more than 7%. For darker-skinned males, error rates went up to 12%, and for darker-skinned females they were as high as 37% in aggregate. When we further disaggregated the error rates by skin type, we found that for the darkest-skinned women error rates reached 47%. All of the companies had reduced gender to a binary, meaning that a system guessing at random would have been correct half the time.
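The disaggregated analysis itself is straightforward once each system's predictions are tabulated against ground truth. Below is a minimal sketch, assuming a hypothetical CSV of predictions with placeholder column names for true gender, predicted gender, and skin type; it is not the actual Gender Shades data format.

```python
# Minimal sketch of a disaggregated audit: error rates broken down by
# gender, by skin type, and by their intersection. File and column names
# are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("benchmark_predictions.csv")
df["error"] = df["predicted_gender"] != df["true_gender"]

# The aggregate number can look reasonable ...
print("overall error rate:", df["error"].mean())

# ... while disaggregation by subgroup, and especially by the
# intersection of subgroups, reveals where the system fails.
print(df.groupby("true_gender")["error"].mean())
print(df.groupby("skin_type")["error"].mean())
print(df.groupby(["skin_type", "true_gender"])["error"].mean())
```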

I shared the results with the companies, and more than 230 articles in over 35 countries were published about the findings. Since the publication of the research, all of the companies have made significant improvements. However, even if facial analysis technology is made more accurate, it can still be weaponised and placed on lethal autonomous weapons; it can be used by law enforcement for racial profiling; and governments can employ the technology covertly for mass surveillance. Since this technology is highly susceptible to bias and abuse, I have worked with companies and civil liberties organisations to launch the Safe Face Pledge.

The Safe Face Pledge is an opportunity for organisations to make public commitments towards mitigating the abuse of facial analysis technology. This historic pledge prohibits lethal use of the technology and lawless police use, and requires transparency in any government use. The Safe Face Pledge provides actionable and measurable steps that organisations can take to ensure they are following AI ethics principles. The pledge is a reminder that we have a choice. Whether AI will help us reach our aspirations or reinforce unjust inequalities is ultimately up to us."

Installation photo from the Barbican's AI: More than Human exhibition featuring Joy Buolamwini's Gender Shades project (2019), Barbican Centre

"Whether AI will help us reach our aspirations or reinforce the unjust inequalities is ultimately up to us."

Credits: Story

Joy Buolamwini is a computer scientist and digital activist based at the MIT Media Lab. She founded the Algorithmic Justice League, an organisation that looks to challenge bias in decision-making software.

Essay originally published in the AI: More than Human exhibition catalogue as 'Facing the Coded Gaze' by Joy Buolamwini.

AI: More Than Human is a major exhibition exploring creative and scientific developments in AI, demonstrating its potential to revolutionise our lives. The exhibition takes place at the Barbican Centre, London from 16 May—26 Aug 2019.
