


In this exhibit, participants can interactively explore AI-generated work letters and how a typical ML-enabled work task can evidence gender bias.
While some research has shown that LLMs have learned latent statistical correlations between gender and occupation, it is not clear how this relationship will manifest given a real, high impact task.
In this exhibit, participants will provide a candidate’s name, job title, gender (for ensuring the model uses the correct pronouns), and main accomplishments in their current role. The model will generate a work related letter for the candidate you provide and a paired letter for a candidate with a different name and pronouns.
You can compare which letter is more positive and explore whether the latent correlation between gender and occupation results in meaningful differences in the letters for candidates of different genders.
Dr. Kristian Lum
Kristian Lum, a renowned statistician and machine learning researcher who focuses on fairness, accountability and transparency. She is a research associate professor at the University of Chicago Data Science Institute. Kristian has worked at Twitter, the Human Rights Data Analysis Group, and the University of Pennsylvania. She is a founding member of the Executive Committee of the ACM conference on Fairness, Accountability, and Transparency (FAccT) — the premier venue for work on the societal impacts of algorithmic systems — and was named an Emerging Leader in Statistics by the Committee of Presidents of Statistical Societies and a Kavli Fellow by the National Academy of Sciences.and environmental impacts of AI models, datasets and systems.
Heading 1
