An age-related gender bias against older women is found throughout popular image and video sites, as well as in the algorithms of popular generative AI tools, according to new research from the Graduate School of Business at Stanford University.
The authors examined nearly 1.4 million images and videos from Google, Wikipedia, IMDb, Flickr, and YouTube, as well as nine language models, and found that women are consistently represented as younger than men across occupations and social roles, even though there is no prior evidence of such an age gap in the workplace. The bias was most pronounced in content depicting high-status, high-earning occupations.
Next, the authors conducted an experiment with 459 participants, asking them to search Google Images for occupations and upload the results to the study’s survey. Participants were then asked to label the gender of the person in each image, estimate the average age of someone in that occupation, and rate their willingness to hire the person depicted. Consistent with the authors’ earlier findings, participants systematically estimated lower ages for an occupation after retrieving images of women and higher ages after retrieving images of men.
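At its core, that comparison asks whether age estimates differ by the labeled gender of the retrieved image. The sketch below shows one plausible way to run such a test; it is an illustration, not the authors’ analysis code, and the file name and column names ("labeled_gender", "estimated_age") are hypothetical.

```python
# Minimal sketch: do participants' age estimates differ by the labeled
# gender of the uploaded image? Data layout here is assumed, not the
# study's actual format.
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")  # hypothetical file

women = responses.loc[responses["labeled_gender"] == "female", "estimated_age"]
men = responses.loc[responses["labeled_gender"] == "male", "estimated_age"]

# Welch's t-test: two independent samples, unequal variances allowed.
t_stat, p_value = stats.ttest_ind(women, men, equal_var=False)
print(f"mean estimate (women): {women.mean():.1f}")
print(f"mean estimate (men):   {men.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```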
To investigate whether this age-gender bias propagates into generative AI, the authors prompted ChatGPT to create nearly 40,000 resumes for 54 occupations using 16 unique female and male names. When ChatGPT created resumes for hypothetical women, it generated significantly lower ages, more recent graduation dates, and fewer years of relevant experience than for hypothetical men. Furthermore, when ChatGPT was asked to evaluate the quality of hypothetical resumes, it favored older applicants, and the positive association between age and rated quality was strongest for older men.
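The generation side of such an audit is straightforward to sketch. The snippet below is a minimal illustration using the openai Python client, assuming a chat-completions workflow; the model name, prompt wording, and the name and occupation lists are placeholders, not the study’s actual protocol.

```python
# Sketch of a resume-generation audit: same prompt, only the name varies.
# Model choice, prompt text, and the lists below are illustrative.
from itertools import product
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

names = {"female": ["Maria Lopez"], "male": ["James Carter"]}  # 16 names in the study
occupations = ["software engineer", "nurse"]                    # 54 in the study

resumes = []
for gender, name_list in names.items():
    for name, occupation in product(name_list, occupations):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; the study used ChatGPT
            messages=[{
                "role": "user",
                "content": f"Write a resume for {name}, a {occupation}. "
                           "Include age, graduation year, and years of experience.",
            }],
        )
        resumes.append({
            "gender": gender,
            "occupation": occupation,
            "text": response.choices[0].message.content,
        })
# Ages, graduation years, and years of experience would then be parsed
# from each resume and compared across the gendered names.
```

Holding the prompt fixed while varying only the name is what lets any systematic difference in generated ages, graduation dates, or experience be attributed to the gendered name itself.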
“Our study provides direct evidence that age-related gender bias is amplified by two of the most widely used algorithms today: the Google Image search engine and ChatGPT,” the authors write. “Although companies such as Google and OpenAI invest heavily in reducing stereotypical content in their products, most studies focus on single dimensions of bias, such as gender-based or race-based biases. Our study highlights the critical need to account for multimodal and multidimensional forms of bias, which are more challenging to detect but no less consequential in how people and algorithms represent the social world.”