Gender and Representation Bias in GPT-3 Generated Stories

Li Lucy, David Bamman
{"title":"Gender and Representation Bias in GPT-3 Generated Stories","authors":"Li Lucy, David Bamman","doi":"10.18653/V1/2021.NUSE-1.5","DOIUrl":null,"url":null,"abstract":"Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3’s perceived gender of the character in a prompt, with feminine characters more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.","PeriodicalId":316373,"journal":{"name":"Proceedings of the Third Workshop on Narrative Understanding","volume":"185 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"145","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third Workshop on Narrative Understanding","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/V1/2021.NUSE-1.5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 145

Abstract

Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3’s perceived gender of the character in a prompt, with feminine characters more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.
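The abstract names the lexicon-based word similarity method only at a high level. As a rough illustration of that idea (not the authors' actual code), the sketch below scores how close a character's descriptive words fall to a "power" lexicon in embedding space; the embeddings and lexicon entries here are toy placeholders, and the paper's actual lexicons and vectors differ.

```python
# Hypothetical sketch of lexicon-based word similarity: score description
# words against a small "power" lexicon via cosine similarity in a word
# embedding space. Embeddings below are toy values for illustration only.
from typing import Mapping
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine similarity between two dense word vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def lexicon_similarity(words: list[str],
                       lexicon: list[str],
                       emb: Mapping[str, np.ndarray]) -> float:
    """Mean, over description words, of each word's similarity to its
    closest lexicon word; out-of-vocabulary tokens are skipped."""
    scores = []
    for w in words:
        if w not in emb:
            continue
        sims = [cosine(emb[w], emb[l]) for l in lexicon if l in emb]
        if sims:
            scores.append(max(sims))
    return float(np.mean(scores)) if scores else 0.0

# Toy 3-dimensional embeddings, purely for illustration.
emb = {
    "command": np.array([0.9, 0.1, 0.0]),
    "lead":    np.array([0.8, 0.2, 0.1]),
    "gentle":  np.array([0.1, 0.9, 0.2]),
    "strong":  np.array([0.7, 0.3, 0.0]),
}
power_lexicon = ["command", "lead"]

# A character described as "strong" scores closer to the power lexicon
# than one described as "gentle".
print(lexicon_similarity(["strong"], power_lexicon, emb))
print(lexicon_similarity(["gentle"], power_lexicon, emb))
```

Under this kind of scoring, the paper's finding is that words describing feminine characters land measurably farther from power-related lexicon entries than words describing masculine characters, even when the prompt itself pairs the character with a high-power verb.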