Whose Story Gets Told? Positionality and Bias in LLM Summaries of Life Narratives

cs.CL updates on arXiv.org
Melanie Subbiah, Haaris Mian, Nicholas Deas, Ananya Mayukha, Dan P. McAdams, Kathleen McKeown

arXiv:2604.20131v1

Abstract: Increasingly, studies are exploring the use of Large Language Models (LLMs) for accelerated or scaled qualitative analysis of text data. While we can compare LLM accuracy directly against human labels for deductive coding (labeling text), it is more challenging to judge the ethics and effectiveness of using LLMs for abstractive methods such as inductive thematic analysis. We collaborate with psychologists to study the abstractive claims LLMs make about human life stories, asking: how does using an LLM as an interpreter of meaning affect the conclusions and perspectives of a study? We propose a summarization-based pipeline for surfacing biases in the perspective-taking an LLM might employ when interpreting these life stories. We demonstrate that our pipeline can identify both race and gender bias with the potential for representational harm. Finally, we encourage future studies involving LLM-based interpretation of participants' written text or transcribed speech to use this analysis to characterize a positionality portrait for the study.
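The abstract does not include code, but the general shape of a summarization-based bias probe can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: it summarizes paired life narratives that differ only in a demographic cue and compares which annotated details each summary retains. The `summarize` function here is a trivial stand-in for an LLM summarizer, and the narrative, names, and detail annotations are invented for the example.

```python
# Hypothetical sketch of a summarization-based bias probe.
# Not the paper's actual pipeline; the summarizer is a stub.

def summarize(text: str, max_sents: int = 2) -> str:
    """Stand-in for an LLM summarizer: keeps the first few sentences."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sents[:max_sents]) + "."

def swap_cue(text: str, a: str, b: str) -> str:
    """Build a counterfactual narrative by swapping one demographic cue."""
    return text.replace(a, b)

def retained_details(summary: str, details: list) -> set:
    """Which annotated narrative details survive into the summary?"""
    return {d for d in details if d in summary}

story = ("Maria grew up in Chicago. She became a nurse. "
         "She later opened a clinic.")
details = ["Chicago", "nurse", "clinic"]

s_orig = summarize(story)
s_cf = summarize(swap_cue(story, "Maria", "James").replace("She ", "He "))

# Systematic divergence in retained details across many such pairs
# would flag a perspective-taking bias in the summarizer.
divergence = retained_details(s_orig, details) ^ retained_details(s_cf, details)
```

In a real study the stub would be replaced by LLM calls, the cue swaps would cover race and gender markers, and divergence would be aggregated over many narratives rather than inspected pairwise.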