When I started shopping for makeup, I quickly learned the importance of skin tones and undertones. As someone with a fair to medium complexion and yellow undertones, I found that foundations that were too light and too pink left my skin looking pale and ashy. At the time, makeup shade ranges were extremely limited, and the alienation I often felt as a Chinese American who grew up in Appalachia was amplified every time a salesperson ruefully proclaimed that no foundation shade matched me.
Only in recent years has skin tone diversity become a major concern for cosmetic companies. Rihanna’s launch of Fenty Beauty in 2017 with 40 foundation shades revolutionized the industry in what was dubbed the “Fenty Effect,” and brands now compete to demonstrate greater skin tone inclusiveness. Since then, I have personally felt how meaningful it is to be able to walk into a store and purchase products off the shelf that acknowledge your existence.
Hidden skin tone bias in AI
As an AI ethics researcher, when I started checking computer vision models for bias, I found myself in a world of similarly limited shade ranges. In computer vision, where visual information from images and videos is processed for tasks such as facial recognition and verification, AI bias (disparities in AI performance across different groups) has been masked by the field’s narrow understanding of skin tones. In the absence of data to directly measure racial bias, AI developers typically consider bias only across light and dark skin tone categories. As a result, although significant progress has been made in raising awareness of facial recognition’s biases against people with darker skin tones, biases outside of this dichotomy are rarely taken into account.
The skin tone scale most commonly used by AI developers is the Fitzpatrick scale, even though it was originally developed to characterize how Caucasian skin tans or burns. The two deeper shades were only added later to capture “brown” and “black” skin tones. The resulting scale resembles old-school foundation shade ranges, with only six options.
This narrow approach to measuring bias is highly exclusionary. In one of the few studies to examine racial bias in facial recognition technologies, the National Institute of Standards and Technology found that these technologies are biased against groups outside the light-versus-dark dichotomy, including East Asians, South Asians, and Native Americans, yet such biases are rarely tested for.
After several years of working with researchers on my team, we discovered that computer vision models are biased not only across light versus dark skin tones, but also across red versus yellow skin tones. In fact, AI models were less accurate for people with darker or yellower skin tones, and those skin tones are significantly underrepresented in major AI datasets. Our work introduced a two-dimensional skin tone scale that allows AI developers to identify biases along both the light-versus-dark and red-versus-yellow dimensions in the future. This discovery provided me with vindication, both scientific and personal.
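To make the two-dimensional idea concrete: a skin tone can be placed on a light-versus-dark axis and a red-versus-yellow axis. A common way to do this (a minimal sketch, not the published methodology of this research) is to convert a skin pixel from sRGB to the CIELAB color space, where L* captures lightness and the hue angle in the a*–b* plane captures how red versus yellow the color is. The conversion constants below are the standard sRGB/D65 values; the sample colors are illustrative, not drawn from any dataset.

```python
import math

def srgb_to_linear(c):
    """Undo sRGB gamma encoding for one 0-255 channel."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    """Convert an sRGB color (D65 white point) to CIELAB (L*, a*, b*)."""
    rl, gl, bl = map(srgb_to_linear, (r, g, b))
    # Linear sRGB -> CIE XYZ (standard matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalize by the D65 reference white, then apply the Lab nonlinearity
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_tone_coordinates(r, g, b):
    """Two-dimensional coordinates: (lightness 0-100, hue angle in degrees).

    A larger hue angle means a yellower tone; a smaller one, a redder tone.
    """
    L, a, b_ = rgb_to_lab(r, g, b)
    return L, math.degrees(math.atan2(b_, a))
```

For example, a yellowish tan sample such as `(230, 190, 140)` yields a noticeably larger hue angle than a pinkish sample such as `(230, 160, 160)`, even when both have similar lightness, which is exactly the distinction a one-dimensional light-to-dark scale cannot express.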
High-stakes AI
Like discrimination in other contexts, a pernicious feature of AI bias is the stubborn uncertainty it creates. For example, if I am stopped at the border because a facial recognition model fails to match my face to my passport, but the technology works well for my white colleagues, is this due to bias or simply bad luck? As AI becomes increasingly pervasive in daily life, small biases can accumulate, causing some people to live as second-class citizens, systematically invisible or misidentified. This is of particular concern for high-stakes applications, such as facial recognition to identify criminal suspects or pedestrian detection for self-driving cars.
While detecting AI bias against people with different skin tones is not a panacea, it is an important step forward at a time of growing efforts to fight algorithmic discrimination, as highlighted by the European Union’s AI Act and President Joe Biden’s Executive Order on AI. Not only does this research enable deeper audits of AI models, it also highlights the importance of including diverse perspectives in AI development.
In explaining this research, I have been struck by how intuitive our two-dimensional scale seems to people who have experience shopping for cosmetics, one of the few contexts where you must categorize both your skin tone and your undertone. It saddens me to think that AI developers may have relied on such a narrow conception of skin tone until now because there isn’t more diversity, particularly intersectional diversity, in the field. My own identity as an Asian American and a woman, someone who had experienced the challenges of skin tone representation firsthand, is what inspired me to explore this potential solution in the first place.
We’ve seen the impact of diverse perspectives on the cosmetics industry through Rihanna and others. It is essential that the AI industry learn from this example. Failing to do so risks creating a world in which many people find themselves erased or excluded by our technologies.
Alice Xiang is a distinguished researcher, accomplished author, and governance leader who has dedicated her career to uncovering the most pernicious facets of AI, many of which are rooted in data and the AI development process. She is the Global Head of AI Ethics at Sony Group Corporation and a Senior Research Scientist at Sony AI.
More must-read commentary published by Fortune:
- Why I am yet another woman leaving the tech industry
- The tax code is made for traders. Here’s how much it punishes dual-income couples
- My mental health hit a low point due to a difficult pregnancy. Every employer should offer the kind of benefits that got me through
- America is debating whether to raise the retirement age, but baby boomers are already working well into their 60s and 70s
The opinions expressed in comments on Fortune.com are solely the opinions of the authors and do not necessarily reflect the opinions and beliefs of Fortune.