
Referring to people with only a last name: Comparing gender biases in humans and chatGPT

Abstract

In some contexts, last-name-only format is used to refer to people (e.g. "Hedberg came in", "Jones called"). At least in U.S. English, men are more likely than women to be referred to by last-name-only format (male bias, e.g. McConnell-Ginet 2003). Moreover, researchers referred to with last-name-only are judged more famous/eminent (eminence bias, Atir & Ferguson 2018). However, the robustness of these biases is not yet well understood, nor how they interact with other semantic biases. We report sentence-completion data from humans showing that these biases persist in informationally-impoverished contexts, that the male bias persists even when pitted against verbs' implicit-causality biases, and that these biases persist even when use of the last-name-only format for women is primed. Furthermore, we compare the human results to sentence completions produced by the language model chatGPT, and show that chatGPT exhibits a weaker gender-bias effect but a stronger eminence bias than human participants.
