Latest articles from the Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)

On the Dynamics of Gender Learning in Speech Translation
Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, M. Turchi
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), 2022. DOI: 10.18653/v1/2022.gebnlp-1.12
Abstract: Due to the complexity of bias and the opaque nature of current neural approaches, there is rising interest in auditing language technologies. In this work, we contribute to this line of inquiry by exploring the emergence of gender bias in Speech Translation (ST). As a new perspective, rather than focusing only on the final systems, we examine their evolution over the course of training. In this way, we are able to account for different variables related to the learning dynamics of gender translation, and investigate when and how gender divides emerge in ST. Accordingly, for three language pairs (en → es, fr, it) we compare how ST systems behave for masculine and feminine translation at several levels of granularity. We find that the masculine and feminine curves are dissimilar, with the feminine one characterized by more erratic behaviour and late improvements over the course of training. Also, depending on the phenomena considered, their learning trends can be either antiphase or parallel. Overall, we show how such a progressive analysis can inform on the reliability and time-wise acquisition of gender, which is concealed by static evaluations and standard metrics.
Citations: 3
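The paper's progressive analysis tracks how well masculine and feminine forms are translated at each training checkpoint, rather than only at the end. A minimal sketch of that idea (not the authors' code; the checkpoint outputs, reference sentences, and gendered term lists below are made-up illustrative data):

```python
# Sketch: per-checkpoint accuracy on gendered terms, to expose how the
# feminine curve can lag behind the masculine one during training.

def term_accuracy(hypotheses, references, gendered_terms):
    """Fraction of gendered reference terms that also appear in the hypothesis."""
    hits = total = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_tokens = set(hyp.split())
        for term in gendered_terms:
            if term in ref.split():
                total += 1
                if term in hyp_tokens:
                    hits += 1
    return hits / total if total else 0.0

# Illustrative outputs of three training checkpoints for two Spanish references.
references = ["ella es doctora", "el es doctor"]
checkpoints = {
    1: ["el es doctor", "el es doctor"],     # early: masculine default
    2: ["ella es doctor", "el es doctor"],   # feminine pronoun learned first
    3: ["ella es doctora", "el es doctor"],  # late: feminine noun form learned
}
feminine = {"ella", "doctora"}
masculine = {"el", "doctor"}

for step, hyps in sorted(checkpoints.items()):
    f = term_accuracy(hyps, references, feminine)
    m = term_accuracy(hyps, references, masculine)
    print(f"step {step}: feminine={f:.2f} masculine={m:.2f}")
```

In this toy run the masculine accuracy is flat at 1.00 while the feminine one climbs from 0.00 to 1.00, mirroring the late feminine improvements the paper reports.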
Occupational Biases in Norwegian and Multilingual Language Models
Samia Touileb, Lilja Øvrelid, Erik Velldal
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), 2022. DOI: 10.18653/v1/2022.gebnlp-1.21
Abstract: In this paper we explore how a demographic distribution of occupations, along gender dimensions, is reflected in pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four Norwegian and two multilingual models. To this end, we introduce a set of simple bias probes, and perform five different tasks combining gendered pronouns, first names, and a set of occupations from the Norwegian statistics bureau. We show that language-specific models obtain more accurate results, and are much closer to the real-world distribution of clearly gendered occupations. However, we see that none of the models have correct representations of the occupations that are demographically balanced between genders. We also discuss the importance of the data the models were trained on, and argue that template-based bias probes can sometimes be fragile, and a simple alteration in a template can change a model's behavior.
Citations: 9
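The core comparison here is between a model's pronoun preference for an occupation template and the real-world gender distribution of that occupation. A hedged miniature of that comparison (not the authors' probes; the template, probabilities, and census share below are invented for illustration):

```python
# Sketch: signed gap between a model's feminine-pronoun probability and the
# real-world feminine share for an occupation (positive = model overshoots
# the feminine share, negative = model undershoots it).

def gender_skew(model_prob_fem, realworld_frac_fem):
    """Difference between model preference and real-world feminine share."""
    return model_prob_fem - realworld_frac_fem

# Hypothetical fill-in probabilities for the Norwegian template
# "[PRONOUN] er sykepleier" ("[PRONOUN] is a nurse").
model_p_hun = 0.55        # assumed model probability for "hun" (she)
census_frac_women = 0.90  # illustrative share of women among nurses

print(f"skew for 'sykepleier': {gender_skew(model_p_hun, census_frac_women):+.2f}")
```

A real probe would obtain `model_p_hun` from a masked-language-model fill-in and `census_frac_women` from official statistics; here both numbers are stand-ins.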
Unsupervised Mitigating Gender Bias by Character Components: A Case Study of Chinese Word Embedding
Xiuying Chen, Mingzhe Li, Rui Yan, Xin Gao, Xiangliang Zhang
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), 2022. DOI: 10.18653/v1/2022.gebnlp-1.14
Abstract: Word embeddings learned from massive text collections have demonstrated significant levels of discriminative biases. However, debiasing Chinese, one of the most spoken languages, has been less explored. Meanwhile, existing literature relies on manually created supplementary data, which is time- and energy-consuming. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE) based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, CGE utilizes and emphasizes the rich feminine and masculine information contained in radicals, i.e., a kind of component in Chinese characters, during the training procedure. This consequently alleviates discriminative gender biases. Experimental results on public benchmark datasets show that our unsupervised method outperforms the state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.
Citations: 3
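CGE's key insight is that Chinese character components (radicals) already carry gender information, so no labeled data is needed. A toy sketch of extracting that signal (not the CGE implementation; the radical lookup table is a tiny hand-made stand-in for a real character-decomposition resource):

```python
# Sketch: using the 女 (female) radical as a coarse, unsupervised gender cue
# for words, in miniature of the radical information CGE exploits.

RADICALS = {  # character -> main radical (illustrative subset only)
    "妈": "女", "姐": "女", "她": "女",
    "爸": "父", "哥": "口", "他": "人",
}

def has_female_radical(word):
    """True if any character in the word carries the 女 radical."""
    return any(RADICALS.get(ch) == "女" for ch in word)

print(has_female_radical("妈妈"))  # True  ("mom" contains the 女 radical)
print(has_female_radical("爸爸"))  # False ("dad" does not)
```

A full system would decompose characters with a proper radical database and feed the cue into Word2vec training; this sketch only shows where the unsupervised gender signal comes from.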