
How AI Reinforces Gender Stereotypes (Trend Brief)

Catalyst, Global Nonprofit

Artificial intelligence (AI) has been heralded as a tool that can enhance human capacities, improve services, shape the future of work, create jobs, and serve as an equalizer by using data-driven algorithmic predictions to reduce bias in decision-making. Yet AI is ultimately what humans design it to be, to learn, and to do. This means that AI is, by definition, not neutral. Rather, it reflects the biases held by those who build it, reinforcing stereotypes based on those biases.

Stereotypes are widely held, oversimplified generalizations about a group of people. This sort of “shorthand” categorization is based on the assumption that all members of a particular group are the same. Whether explicitly or implicitly, when stereotypes influence our perceptions and decision-making, members of stereotyped groups can be disadvantaged, and damage can be done.

Women Take Care and Men Take Charge

Gendered stereotypes result in sexism and can create structural barriers that perpetuate workplace gender inequality. One example of a gendered stereotype is that women are more nurturing than men. Over time, the societally pervasive stereotype that “women take care, men take charge” can embed itself in organizational cultures and norms. Whether at home or in the workplace, women are then viewed as more likely to be caretakers, which often negatively impacts their careers.

As we interact with AI in our daily lives, it can unintentionally reinforce gendered stereotypes. Catalyst research finds that women leaders perceived as nurturing or emotional are liked but not considered competent, while those who lead assertively are considered competent but disliked. This “double bind” can lead to women’s occupational segregation and lack of advancement opportunities.

A study on human-robot interactions found that AI reinforced the double bind dilemma. Participants rated robots assigned an explicit gender together with either stereotypically male personality traits (confident and assertive) or stereotypically female personality traits (agreeable and warm). Participants rated the male-identified robot as more trustworthy, reliable, and competent than the female-identified robot, while the female-identified robot was rated as more likable. While users do not necessarily prefer robots of a certain gender, they do prefer robots whose “occupations” and “personalities” match stereotypical gender roles: for example, people respond better to healthcare service robots identified as female and to security service robots identified as male.

Digital voice assistants, such as Siri and Alexa, are often designed with female names and gendered voices. Their role is to perform tasks that have traditionally been assigned to women, such as scheduling appointments and setting reminders. Consistently designing these assistants with a female voice can reinforce traditional gender roles and may even reinforce biased hiring that channels women into service or assistant-type jobs.

Additionally, how we speak to our digital assistants can influence societal norms. Abusive, insulting, or sexualized language directed at these assistants can normalize how we speak to one another, and particularly to women, while the tolerant or passive responses of feminized digital assistants can reinforce the stereotype of the compliant and forgiving woman.

AI Reinforces Gendered Roles and Occupations

Word embeddings are an example of how machine learning can reinforce gender stereotypes. An embedding model learns which words appear close to one another in text and uses those co-occurrences as a frame of reference, so words used in similar contexts end up with similar representations. Recently, Apple’s iOS automatically offered an emoji of a businessman when users typed the word “CEO.” When AI finds words like “CEO” near the word “man” many times, it learns this association and links the words going forward.
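As a minimal sketch of this mechanism, the Python snippet below probes a pretrained embedding model. It assumes the open-source gensim library and its downloadable GloVe vectors; the specific words queried are illustrative choices, not drawn from the iOS example.

```python
# Illustrative sketch: how word embeddings pick up learned associations.
# Assumes the gensim library and its downloadable GloVe vectors;
# exact neighbors will vary with the model used.
import gensim.downloader as api

# Small pretrained GloVe vectors trained on Wikipedia and Gigaword text.
vectors = api.load("glove-wiki-gigaword-50")

# Words that co-occur in similar contexts get similar vectors, so a
# model's nearest neighbors expose the associations it has learned.
print(vectors.most_similar("ceo", topn=5))

# The classic analogy probe: man is to "ceo" as woman is to ... ?
print(vectors.most_similar(positive=["woman", "ceo"], negative=["man"], topn=5))
```

Whatever neighbors the model returns come entirely from co-occurrence statistics in its training text, which is how corpus-level stereotypes become model behavior.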

Princeton University researchers found that AI’s word associations can reinforce stereotypes in everything from the internet search results we receive to the hiring decisions we make. When they measured these word associations, they found gender stereotypes in the word choices: the word “nurse,” for instance, was highly associated with the words “women” and “nurturing,” while the word “doctor” was more often associated with “men.”
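A simplified version of this kind of association test (the study used the Word Embedding Association Test, or WEAT) can be sketched in a few lines. This assumes the same gensim GloVe vectors as above and is an illustration, not the study’s actual code.

```python
# Simplified WEAT-style probe: compare how close occupation words sit
# to gendered words in the embedding space. A sketch, not the study code.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

def gender_association(word: str) -> float:
    """Positive: closer to 'woman' than to 'man'; negative: the reverse."""
    return float(vectors.similarity(word, "woman") - vectors.similarity(word, "man"))

for occupation in ["nurse", "doctor", "engineer", "teacher"]:
    print(f"{occupation:10s} {gender_association(occupation):+.3f}")
```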

AI learns these contextual associations from the data provided to it by programmers, who are predominantly white and male. Gender bias could surface, for example, if an AI recruiting system used these word associations and began accepting nurse candidates with female-sounding names at a higher rate.
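To make that risk concrete, here is a deliberately naive, entirely hypothetical screener. The scoring rule, the names, and the job title are all illustrative assumptions; the bias enters through the same GloVe vectors used above, not through any explicit rule about gender.

```python
# Hypothetical, deliberately naive screener: scoring a candidate by the
# embedding similarity between a (gendered) first name and a job title.
# Illustration only, not a real recruiting system.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

def match_score(first_name: str, job_title: str = "nurse") -> float:
    # Gendered names sit closer in vector space to occupations
    # stereotypically associated with that gender.
    return float(vectors.similarity(first_name.lower(), job_title))

for name in ["Emily", "Michael"]:
    print(f"{name:8s} {match_score(name):.3f}")
```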

Even AI translation services reveal gender-occupation stereotypes when translating from languages without gender-specific pronouns, such as Chinese and Turkish. In one study, researchers found that the AI assumed “nurse,” “nanny,” and “teacher” all referred to women.
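One way to probe this, as a sketch, is to run genderless Turkish sentences through an open-source translation model and see which English pronoun it chooses. This assumes Hugging Face’s transformers library and the public Helsinki-NLP Turkish-to-English model; outputs will vary by model version.

```python
# Sketch of probing translation bias with an open, pretrained model.
# Turkish "o" is a genderless pronoun, so the model must pick an
# English pronoun on its own.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

# "o bir doktor" = "s/he is a doctor"; "o bir hemşire" = "s/he is a nurse"
for sentence in ["o bir doktor", "o bir hemşire"]:
    print(sentence, "->", translator(sentence)[0]["translation_text"])
```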

Image-recognition software also quickly learns gender bias from the photos it is trained on. In a recent University of Virginia study, images depicting activities such as cooking, shopping, and washing were more likely to be linked to women, while images of shooting or coaching were more likely to be linked to men. When the researchers tested the data sets further, they discovered that the AI not only reflected the unconscious stereotypes of its creators but actually amplified them.
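The amplification effect is easiest to see with arithmetic. The numbers below are invented for illustration, not taken from the study: a model trained on skewed label counts can end up even more skewed than its training data.

```python
# Toy illustration of bias amplification. All numbers are hypothetical.
train_cooking = {"woman": 660, "man": 340}  # made-up training labels
pred_cooking = {"woman": 840, "man": 160}   # made-up model predictions

def female_share(counts: dict) -> float:
    return counts["woman"] / (counts["woman"] + counts["man"])

train_bias = female_share(train_cooking)
pred_bias = female_share(pred_cooking)
print(f"training-set bias: {train_bias:.2f}")            # 0.66
print(f"prediction bias:   {pred_bias:.2f}")             # 0.84
print(f"amplification:     {pred_bias - train_bias:+.2f}")  # +0.18
```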

Where Do We Go from Here?

  • AI Industry Diversification: As of 2018, women comprised only 22% of AI professionals globally. This lack of gender diversity hinders the industry’s ability to catch gender bias and stereotyping during machine learning and database design. An important first step toward mitigating the impact of AI-reinforced bias and stereotypes is for the AI industry to increase the representation of women and other underrepresented groups in its workforce.


  • Business Policies, Procedures, and Practices: The number of businesses using AI increased by 60% between 2017 and 2018, but “only half of businesses across the [United States] and Europe have policies and procedures in place to identify and address ethical considerations—either in the initial design of AI applications or in their behavior after the system is launched.” While providing clear benefits, AI is not a technological utopia. It is important for organizations to develop policies and procedures to address ethical concerns that arise from the application of AI in their business models. It is equally important that businesses identify and use tools to review their data sets and machine-learning models for unwanted bias, as sketched below.
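As one minimal example of such a review, the sketch below compares a model’s selection rates by group and applies the common four-fifths rule of thumb. The numbers are hypothetical; production audits would typically use dedicated toolkits such as Fairlearn or IBM’s AIF360 on real data.

```python
# Minimal bias check on a model's output: compare selection rates by
# group and apply the four-fifths rule of thumb. Hypothetical numbers.
selected = {"women": 30, "men": 60}      # candidates advanced by a model
applicants = {"women": 100, "men": 120}  # applicant pool

rates = {g: selected[g] / applicants[g] for g in applicants}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", {g: round(r, 2) for g, r in rates.items()})
print(f"disparate impact ratio: {ratio:.2f}",
      "(flags review under the four-fifths rule)" if ratio < 0.8 else "(passes)")
```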


Authored by Sophia Ahn and Amelia Costigan