“Why diversity in Artificial Intelligence (AI) is non-negotiable.” “’Disastrous’ lack of diversity in AI industry perpetuates bias.” “A lack of diversity in tech is damaging AI.” “Artificial Intelligence is on the brink of a diversity disaster.” These are headlines from articles discussing one of the technology sector’s hottest topics – diversity in AI. Why is this a hot topic and why should you care? Because AI affects your everyday life and a lack of diversity in its development can have detrimental effects on people of color.
For instance, if you’re darker-skinned and have ever had problems getting the automatic soap dispenser in a public restroom to dispense soap for you, you’ll want to check out this video uploaded to Twitter in 2017 by Chukwuemeka Afigbo, then Facebook’s head of platform partnerships in the Middle East and Africa. As the video and the article reveal, the soap dispenser’s problem is optical, not a matter of AI. But the fact remains that the dispensers were manufactured without the developers understanding the problems the product would pose for a particular group of people. Simply put, because of a lack of testing on people of color, the technology periodically malfunctions for them. This is directly related to our topic: a lack of diversity in technology in general, which includes AI, is problematic for people of color.
I first need to say that AI is an extremely complex subject, and this article can only scratch the surface of this particular topic. If you are interested in learning more, I encourage you to read the articles I link to as a starting point for your own research.
Artificial Intelligence in the Material World
Are you aware that you most likely already use AI in your everyday life? Do you say “Hey Siri…” or ask Alexa for information? You’re using AI. When you scroll through your Facebook feed or view the “Movies you may like” feed on Netflix, AI is powering the algorithms that determine what shows up in your feed and on that list of movies. AI isn’t coming; it’s here!
What Exactly Is Artificial Intelligence Anyway?
In her article “What Is Artificial Intelligence? Examples and News in 2019,” business reporter Anne Sraders provides a definition I like: Artificial intelligence is the use of computer science programming to imitate human thought and action by analyzing data and surroundings, solving or anticipating problems, and learning or self-teaching to adapt to a variety of tasks. An important thread to pull from that definition is that the computer is programmed to learn from the data sets it is given.
So why the concern about diversity in AI? Under the covers, AI allows for bias to be introduced in two ways: creator bias and data bias. A lack of diversity on teams that develop AI products and services makes it easier for these biases to go undetected. Consider these examples.
AI and Amazon, Microsoft, and IBM
Amazon, Microsoft, and IBM have all had problems with their facial recognition technologies correctly identifying women and darker-skinned people. Amazon has been heavily marketing its technology to law enforcement agencies as a way to quickly identify suspects, yet its technology is reported to have the most problems making correct identifications. When issues with the software were first identified in 2018, all three companies released “more accurate technology,” implying that they reworked their software’s code – that they presumably addressed creator bias. That said, Amazon’s misidentification rates in the follow-up 2019 study were still very high, which is causing concern. Can you see how law enforcement using facial recognition software that has trouble identifying people of color, or identifying people’s gender, could be problematic?
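To make the concern concrete, here is a minimal sketch of the kind of audit behind the 2018 and 2019 studies mentioned above: run a facial-analysis model on a labeled benchmark and break the error rate out by demographic subgroup, rather than reporting a single overall accuracy number. The `predict_gender` function and the benchmark records are hypothetical stand-ins, not any vendor’s actual API or data.

```python
# Minimal sketch of a demographic audit of a face-analysis model.
# `predict_gender` and the benchmark records are hypothetical stand-ins,
# not any vendor's actual API or data.
from collections import defaultdict

def predict_gender(image_path):
    """Placeholder for a real model call; always returns 'male' here."""
    return "male"

# Each record: (image path, true gender label, skin-type group)
benchmark = [
    ("img_001.jpg", "female", "darker"),
    ("img_002.jpg", "male",   "darker"),
    ("img_003.jpg", "female", "lighter"),
    ("img_004.jpg", "male",   "lighter"),
]

errors = defaultdict(int)
totals = defaultdict(int)

for path, true_label, group in benchmark:
    key = (group, true_label)
    totals[key] += 1
    if predict_gender(path) != true_label:
        errors[key] += 1

# Error rates broken out by subgroup reveal disparities that a single
# overall accuracy number hides.
for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key[0]:>7} {key[1]:>6}: {rate:.0%} error ({errors[key]}/{totals[key]})")
```

On a real benchmark, a table like this is what surfaces the gap between, say, lighter-skinned men and darker-skinned women that the studies reported.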
AI and Human Resources
Human resource departments have started to use AI to help them fill job openings – feed a stack of electronic resumes into a program, and it will identify the best candidate for the job. In his article “Why Adding Diversity in Artificial Intelligence is Nonnegotiable,” Danny Guillory explains how data bias works. In a generic job search for an engineer – a search that will return a result set of mostly Caucasian males – the AI ‘learns’ from the profiles that are selected (hired) and continues to favor that same type of profile in future searches. It is not trained to think outside the box and will never do so on its own; in this case, thinking outside the box would mean something like selecting a woman’s or a Latino’s profile. “In this mode,” says Guillory, “groups of people can be systematically eliminated. If certain groups are not included in the data sets that AI is taking into consideration, in the long run, problems or challenges that are outside of the data set may not be able to be solved for at all.”
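A toy simulation can show how the feedback loop Guillory describes compounds over time. This is not his code or any real hiring product; the candidate profiles, the “learned preference” weight, and the scoring rule are invented for illustration only.

```python
# Toy simulation of the data-bias feedback loop described above.
# Profiles, weights, and the scoring rule are invented for illustration;
# this is not any real hiring product.
import random

random.seed(0)

def make_pool(n=1000):
    """Candidate pool: 50/50 split between profile A (historical majority)
    and profile B (under-represented group); skill is independent of profile."""
    return [{"profile": random.choice("AB"), "skill": random.random()}
            for _ in range(n)]

def score(candidate, learned_preference):
    """Skill plus a bonus the model has 'learned' for profile A
    from past hiring data."""
    bonus = learned_preference if candidate["profile"] == "A" else 0.0
    return candidate["skill"] + bonus

preference = 0.10  # initial skew inherited from historical hires
for round_num in range(1, 6):
    pool = make_pool()
    hired = sorted(pool, key=lambda c: score(c, preference), reverse=True)[:50]
    share_a = sum(c["profile"] == "A" for c in hired) / len(hired)
    # Retraining on its own selections reinforces the skew.
    preference += 0.10 * (share_a - 0.5)
    print(f"round {round_num}: profile A share of hires = {share_a:.0%}")
```

Even though skill is spread evenly across both groups, the “learned” bonus for the majority profile grows each round, so the system selects that profile more and more often – exactly the systematic elimination Guillory warns about.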
Lack of Diversity in AI and Tech Fields
Sarah Myers West, Meredith Whittaker, and Kate Crawford, researchers at New York University’s AI Now Institute, released an extensive report on AI earlier this year in which they state, “The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation,” and “The commercial deployment of these tools is cause for deep concern.”

The lack of diverse employees at tech companies has been widely reported, and this is a major part of the problem. In an interview with AI Business, Payal Jain – chair of Women in Data – says, “There [are] three things that are really important when we start thinking about AI and machine learning. It’s not so much about data—it’s all about people. Firstly, we’ve got to be aware of our own biases. Secondly, we need diverse teams to work with the technology. With 78% of people working in AI being male, there are biases that they naturally will not spot. Finally, we’ve got to make sure we’re giving the machines non-biased datasets.”
In an interview with MIT Technology Review, Timnit Gebru, co-founder of Black in AI, says of the need for diversity in AI, “When problems don’t affect us, we don’t think they’re that important, and we might not even know what the problems are because we’re not interacting with the people who are experiencing them.”
Black in AI and the Importance of STEM Programs
This is why women and people of color in technology are working to address the issue. Organizations like Women in Data, Black in AI, and Women in Machine Learning have been formed to develop ways to address bias in AI, to connect under-represented people in the tech sector, and to educate others about the problem. Gebru says she and a friend started Black in AI in 2017 after she attended the Neural Information Processing Systems (NIPS) conference in 2016. (NIPS is considered one of the world’s largest AI conferences.) There were about 8,500 attendees. “I counted six black people,” she said. “I was literally panicking. That’s the only way I can describe how I felt…. Because six black people out of 8,500—that’s a ridiculous number, right? That is almost zero percent.”
It is these kinds of realities and disparities that make it extremely important for us to continue to encourage young people of color and women to pursue STEM careers. Doing so will widen the pool of ethnically diverse developers entering the field and help prevent creator and data bias. STEM degrees have been, and continue to be, difficult for women to complete, but the industry has learned a lot about what is needed to help them succeed. Organizations like Girls Who Code allow teen girls to learn to code in an environment where they are surrounded by other women. Networking groups like the National Society of Black Engineers provide spaces where encouragement and mentorship can be found. This Harvard Business Review article lists six things successful women in STEM careers do – a topic we may cover in a future Message magazine article.
The fact of the matter is, AI is not going anywhere. It will continue to make its way more intimately and permanently into our daily lives. We need to make sure it does so with as little bias as possible.