Helena A. Haxvig, Vincenzo D’Andrea, Maurizio Teli
{"title":"“我从未见过比这更好的玻璃天花板”:从参与式设计的角度来看,法学硕士生成的合成角色中的偏见和性别","authors":"Helena A. Haxvig , Vincenzo D’Andrea , Maurizio Teli","doi":"10.1016/j.ijhcs.2025.103651","DOIUrl":null,"url":null,"abstract":"<div><div>This study examines synthetic personas generated by Large Language Models (LLMs) and their implications, focusing on how these personas encode and perform gendering. Traditional personas carry implicit power and agency, making their accuracy and inclusivity essential. However, delegating persona creation to generative AI raises concerns about bias, representation, and ethical design. Poorly designed personas risk reinforcing stereotypes, marginalizing certain groups, and embedding biases into the design process. Using a mixed-method approach – combining direct inquiries with four LLMs and participatory workshops – we analyze gender bias in synthetic personas. Drawing from feminist theory, Human–Computer Interaction (HCI), and Participatory Design (PD), both societal, normative, and representational biases were identified. As a result of this, we argue that synthetic personas should not be used as direct stand-ins for real users but instead reframed as objects of critical inquiry. They can serve as provocations—tools that challenge assumptions and expose biases in LLM-generated outputs. Furthermore, this study underscores the need to move beyond exclusively expert-driven evaluations by incorporating user perspectives directly. By doing so, the evaluation process becomes richer, more representative, and better equipped to identify biases that might otherwise be overlooked.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103651"},"PeriodicalIF":5.1000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"“I’ve never seen a glass ceiling better represented”: Bias and gendering in LLM-generated synthetic personas from a participatory design perspective\",\"authors\":\"Helena A. Haxvig , Vincenzo D’Andrea , Maurizio Teli\",\"doi\":\"10.1016/j.ijhcs.2025.103651\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This study examines synthetic personas generated by Large Language Models (LLMs) and their implications, focusing on how these personas encode and perform gendering. Traditional personas carry implicit power and agency, making their accuracy and inclusivity essential. However, delegating persona creation to generative AI raises concerns about bias, representation, and ethical design. Poorly designed personas risk reinforcing stereotypes, marginalizing certain groups, and embedding biases into the design process. Using a mixed-method approach – combining direct inquiries with four LLMs and participatory workshops – we analyze gender bias in synthetic personas. Drawing from feminist theory, Human–Computer Interaction (HCI), and Participatory Design (PD), both societal, normative, and representational biases were identified. As a result of this, we argue that synthetic personas should not be used as direct stand-ins for real users but instead reframed as objects of critical inquiry. They can serve as provocations—tools that challenge assumptions and expose biases in LLM-generated outputs. Furthermore, this study underscores the need to move beyond exclusively expert-driven evaluations by incorporating user perspectives directly. 
By doing so, the evaluation process becomes richer, more representative, and better equipped to identify biases that might otherwise be overlooked.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"205 \",\"pages\":\"Article 103651\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1071581925002083\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581925002083","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
“I’ve never seen a glass ceiling better represented”: Bias and gendering in LLM-generated synthetic personas from a participatory design perspective
This study examines synthetic personas generated by Large Language Models (LLMs) and their implications, focusing on how these personas encode and perform gendering. Traditional personas carry implicit power and agency, making their accuracy and inclusivity essential. Delegating persona creation to generative AI, however, raises concerns about bias, representation, and ethical design: poorly designed personas risk reinforcing stereotypes, marginalizing certain groups, and embedding biases into the design process. Using a mixed-method approach that combines direct inquiries with four LLMs and participatory workshops, we analyze gender bias in synthetic personas. Drawing on feminist theory, Human–Computer Interaction (HCI), and Participatory Design (PD), we identify societal, normative, and representational biases. We therefore argue that synthetic personas should not be used as direct stand-ins for real users but should instead be reframed as objects of critical inquiry: provocations that challenge assumptions and expose biases in LLM-generated outputs. Finally, the study underscores the need to move beyond exclusively expert-driven evaluations by incorporating user perspectives directly, making the evaluation process richer, more representative, and better equipped to identify biases that might otherwise be overlooked.
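The abstract's "direct inquiries" with LLMs suggest an elicit-and-inspect loop: prompt a model for a persona without specifying gender, then examine what gendering the output performs. The paper does not publish its prompts or tooling, so the Python sketch below is purely illustrative; the model name, prompt wording, and the GENDERED_TERMS word list are assumptions, not the authors' protocol.

```python
# Illustrative sketch only: elicit a synthetic persona from an LLM and run a
# crude surface check for gendered language. Prompt, model choice, and word
# list are hypothetical; the study's actual method is not reproduced here.
from collections import Counter

from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical word list; a real audit would rely on validated lexicons and
# participatory human review rather than simple string matching.
GENDERED_TERMS = ["he", "she", "his", "her", "father", "mother",
                  "nurturing", "assertive", "breadwinner", "caregiver"]

def elicit_persona(role: str) -> str:
    """Ask the model for a persona description; no gender is specified."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical; the study queried four LLMs
        messages=[{
            "role": "user",
            "content": f"Create a user persona for a {role}. Include a "
                       "name, age, occupation, goals, and frustrations.",
        }],
    )
    return response.choices[0].message.content

def tally_gendered_terms(text: str) -> Counter:
    """Count occurrences of the (hypothetical) gendered terms."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    return Counter(t for t in tokens if t in GENDERED_TERMS)

if __name__ == "__main__":
    persona = elicit_persona("software engineering manager")
    print(persona)
    print(tally_gendered_terms(persona))
```

As the abstract argues, such automated tallies can at best flag candidate bias; the study's point is precisely that script-only or expert-only evaluation misses what participatory workshops with users surface.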
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...