An Inclusive Future: Women in the AI Workforce

Jessica Maksimov, intern at Our Secure Future, explains the importance of diversity in the AI workforce in order to address the growing issue of biases in AI systems.

The 2023 AI Executive Order addresses the increasing spread of AI, stating, “[the] Administration will support programs to provide Americans the skills they need for the age of AI and attract the world’s AI talent to our shores… so that the companies and technologies of the future are made in America.” This statement addresses a key component of AI development: talent. AI talent underpins the creation and deployment of all AI technologies. Those who create AI systems embed them with their values and biases, which then affect millions of users. Women constitute only a minority of the AI workforce, meaning their values and world perspectives are underrepresented in AI systems. 

The “AI workforce” consists of technical, product, and commercial talent. This blog post focuses on “technical talent,” defined as people “who are qualified to work in AI or on an AI development team, or have the requisite knowledge, skills, and abilities such that they could work on an AI product or application with minor training.” The broader AI workforce encompasses various professions, including system developers, product managers, and marketing teams.  

AI workers shape how systems are developed and how they function. If the workers and training data exhibit biases, so does the technology. If AI talent is drawn from only a narrow segment of the population, it cannot reflect perspectives representative of the entire population. In other words, AI bias and workforce diversity are “two sides of the same problem.” Without addressing one, we cannot solve the other. Applying the Women, Peace and Security (WPS) agenda to existing and emerging technologies such as AI can help remedy the lack of diverse perspectives apparent in current AI systems.  

Workforce Diversity  

AI systems are becoming the backbone of the future world, already embedded in the U.S. workplace, government agencies, and justice system. However, AI’s ability to serve a broad range of citizens is limited by the narrow range of perspectives these systems encompass. According to the Stanford University 2023 AI Index, incoming AI PhD graduates are 78.7 percent male and only 21.3 percent female. This means few women are entering the educational pipeline that feeds into AI work.  

According to the U.S. National Science Foundation’s 2023 “Diversity and STEM” report, women make up 51 percent of the total U.S. working population (among people ages 18-74), but only 35 percent of the STEM (science, technology, engineering and mathematics) workforce. Among college-educated women in STEM, 61 percent work as social and related scientists, and only 16 percent as engineers. In S&E (science and engineering) and middle-skill occupations, women’s median earnings are lower than those of their male counterparts. In Big Tech, as of 2021, women constituted 29 percent of Microsoft’s workforce and 33 percent of Google’s; both companies are currently building some of the world’s most powerful large language models.    

AI Biases Reflect Underrepresentation in AI Talent  

Sarah West et al. addressed workforce diversity in a 2019 study, “Discriminating Systems: Gender, Race, and Power in AI.” The researchers found that focusing on the AI pipeline, which tracks the transition of workers from school to industry, does not address other factors limiting diversity in the AI industry. Issues including “workplace culture, power asymmetries, harassment, exclusionary hiring practices, [and] unfair compensation” all hinder the diversity levels that AI companies can maintain. These exclusionary patterns are systemic and widespread in industry, collectively reducing the mobility of diverse groups within AI companies and discouraging workers from remaining in discriminatory environments.  

This lack of AI talent diversity then translates into AI bias when machines, trained on limited perspectives, are used for classification. AI systems have been used to predict criminality based on facial features, determine sexuality and race in photos, and assess worker competence based on “micro-expressions.” Researchers have shown that AI systems amplify biases when identifying people’s identities and professions. For example, in a Bloomberg survey, the AI image generator DALL-E created disproportionate numbers of images of men, except when asked to generate low-paying jobs, such as housekeeper and cashier, which were depicted as predominantly female. Additionally, prominent MIT researcher Joy Buolamwini revealed that AI systems could not detect her face until she put on a white mask. As a Black woman, she was practically invisible to the camera. Moreover, many AI classification systems mistook pictures of Black women for men. Without diverse perspectives embedded into AI systems, they will continue to replicate historical patterns of racial and gender bias.

Worker-led movements have emerged in Big Tech to push high-ranking industry leaders to address workplace discrimination. In 2017, the Tech Workers Coalition was officially founded as a forum for tech workers to organize and advocate for “inclusive and equitable tech cultures.” In 2018, 20,000 Google employees participated in the Google Walkout for Real Change, protesting Google’s dismissal of sexual assault and harassment claims. Google subsequently agreed to end forced arbitration in harassment claims. In 2019, employee protests pushed Microsoft to publish statistics about internal discrimination and sexual harassment against women, as well as career progression data. These instances show that workers have influenced workplace environments, resulting in incremental change. 

Future Outlooks  

There are already signs of hope for a more diverse AI workforce. Since 2017, the share of women entering computer science, computer engineering, and information faculties at North American universities has risen by five percentage points. Additionally, K-12 computer science education is becoming more diverse in gender and ethnicity, with the number of female students increasing by nearly 14 percent since 2007. Individuals such as Joy Buolamwini and Caroline Criado-Perez are openly speaking out about the lack of female representation in AI and offering research-based solutions. Additionally, more worker-led movements have forced companies to reconsider workplace practices that create unjust and unsafe environments for women. However, with increasing tech layoffs, AI workers may become less willing to speak out, fearing retribution and unemployment. The 2023 AI Executive Order has also made AI talent diversity part of the administration’s agenda, a promising sign for fostering inclusivity in the future AI workforce. All in all, further application of gender perspectives is necessary to develop more inclusive approaches to AI and cultivate a diversity of perspectives in the creation and implementation of AI systems. 

For more on AI bias, see https://oursecurefuture.org/news/artificial-intelligence-and-its-ingrained-gender-biases