Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia: Latest Publications

Fighting Filterbubbles with Adversarial Training
Lukas Pfahler, K. Morik
DOI: 10.1145/3422841.3423535 | Published: 2020-10-12
Abstract: Recommender engines play a role in the emergence and reinforcement of filter bubbles. When these systems learn that a user prefers content from a particular site, the user will be less likely to be exposed to different sources or opinions and, ultimately, is more likely to develop extremist tendencies. We trace the roots of this phenomenon to the way the recommender engine represents news articles: the vectorial features modern systems extract from the plain text of news articles are already highly predictive of the associated news outlet. We propose a new training scheme based on adversarial machine learning to tackle this issue. Our preliminary experiments show that the features we extract this way are significantly less predictive of the news outlet and thus offer the possibility of reducing the risk that new filter bubbles form.
Citations: 0
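The abstract above does not spell out the adversarial training scheme, so the following is only a minimal sketch, assuming a standard gradient-reversal setup: an encoder produces article features, a task head uses them for recommendation, and an outlet classifier acts as the adversary whose reversed gradients push the encoder toward features that are less predictive of the news outlet. All class and variable names are hypothetical, not the authors' code.

```python
# Hypothetical sketch of adversarial feature learning via gradient reversal,
# one common way to make article features less predictive of the outlet.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class AdversarialEncoder(nn.Module):
    def __init__(self, input_dim, feat_dim, n_outlets, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, feat_dim), nn.ReLU())
        self.recommender_head = nn.Linear(feat_dim, 1)      # e.g. relevance score
        self.outlet_head = nn.Linear(feat_dim, n_outlets)   # adversary
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        score = self.recommender_head(z)
        # The adversary learns to predict the outlet from z, while the encoder
        # receives reversed gradients and learns to hide outlet information.
        outlet_logits = self.outlet_head(GradReverse.apply(z, self.lambd))
        return score, outlet_logits
```

In such a setup, raising `lambd` strengthens the adversarial pressure on the features at the cost of some task accuracy, mirroring the trade-off the abstract alludes to.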
Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models
Abdulaziz A. Almuzaini, V. Singh
DOI: 10.1145/3422841.3423536 | Published: 2020-10-12
Abstract: Sentiment detection is an important building block for multiple information retrieval tasks such as product recommendation, cyberbullying detection, and fake news and misinformation detection. Unsurprisingly, multiple commercial APIs, each with different levels of accuracy and fairness, are now publicly available for sentiment detection, and users can easily incorporate these APIs in their applications. While combining inputs from multiple modalities or black-box models to increase accuracy is commonly studied in the multimedia computing literature, there has been little work on combining them to increase the fairness of the resulting decision. In this work, we audit multiple commercial sentiment detection APIs for gender bias in a two-actor news headline setting and report on the level of bias observed. Next, we propose a "Flexible Fair Regression" approach, which ensures satisfactory accuracy and fairness by jointly learning from multiple black-box models. The results pave the way for fair yet accurate sentiment detectors for multiple applications.
Citations: 3
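The paper's "Flexible Fair Regression" formulation is not reproduced in the abstract, so the sketch below only illustrates the general idea under stated assumptions: learn a weighted combination of scores from several black-box sentiment APIs while penalizing a gap in mean error between two gender groups. The function name and the specific fairness penalty are illustrative choices, not the authors' method.

```python
# Illustrative sketch: weight several black-box sentiment scores, trading off
# accuracy against a simple group-fairness penalty.
import numpy as np
from scipy.optimize import minimize

def fit_fair_combination(api_scores, labels, groups, fairness_weight=1.0):
    """api_scores: (n_samples, n_apis) scores from black-box sentiment APIs.
    labels: ground-truth sentiment per sample (array).
    groups: 0/1 gender-group membership per sample (array)."""
    n_apis = api_scores.shape[1]

    def objective(w):
        pred = api_scores @ w
        err = pred - labels
        accuracy_loss = np.mean(err ** 2)
        # Fairness penalty: gap in mean error between the two groups.
        gap = abs(err[groups == 0].mean() - err[groups == 1].mean())
        return accuracy_loss + fairness_weight * gap

    w0 = np.full(n_apis, 1.0 / n_apis)   # start from a uniform combination
    res = minimize(objective, w0, method="Nelder-Mead")
    return res.x
```

Increasing `fairness_weight` shifts the learned combination toward equalized errors across groups at the expense of raw accuracy, which is the balance the title refers to.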
Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research
Jiachen Jiang, Soroush Vosoughi
DOI: 10.1145/3422841.3423534 | Published: 2020-10-12
Abstract: Social media has shaken the foundations of our society, unlikely as it may seem. Many of the popular tools used to moderate harmful digital content, however, have received widespread criticism from both the academic community and the public sphere for middling performance and lack of accountability. Though social media research is thought to center primarily on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators when different modalities of information are provided, and discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we discuss the consequences of those biases by auditing the performance of a toxicity detector, Perspective API, on the language of Twitter users across a variety of demographic categories.
Citations: 6
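As a minimal illustration of the audit step described above, and assuming per-tweet toxicity scores have already been collected from the detector being audited (e.g. Perspective API) together with demographic labels, the sketch below simply compares flag rates across groups at a fixed threshold. The paper's actual audit methodology may be richer; the function name and threshold are assumptions.

```python
# Minimal audit sketch: compare toxicity flag rates across demographic groups,
# given scores already collected from the detector under audit.
from collections import defaultdict

def flag_rates_by_group(records, threshold=0.5):
    """records: iterable of (demographic_group, toxicity_score) pairs.
    Returns the fraction of texts flagged as toxic per group; large gaps on
    comparable text suggest disparate impact across groups."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, score in records:
        total[group] += 1
        if score >= threshold:
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}
```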
Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation
Jungseock Joo, Kimmo Kärkkäinen
DOI: 10.1145/3422841.3423533 | Published: 2020-05-21
Abstract: Automated computer vision systems have been applied in many domains, including security, law enforcement, and personal devices, but recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups. Diagnosing and understanding the underlying true causes of model biases, however, is challenging because modern computer vision systems rely on complex black-box models whose behaviors are hard to decode. We propose to use an encoder-decoder network developed for image attribute manipulation to synthesize facial images that vary along the dimensions of gender and race while keeping other signals intact. We use these synthesized images to measure the counterfactual fairness of commercial computer vision classifiers by examining the degree to which the classifiers are affected by the gender and racial cues controlled in the images; for example, feminine faces may elicit higher scores for the concept of nurse and lower scores for STEM-related concepts.
Citations: 36
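The abstract describes scoring synthesized face images that vary along a gender or race dimension and checking how a classifier's concept scores change. A minimal sketch of that measurement step follows, assuming the synthesized image series and a `score_image` function wrapping the audited commercial classifier are already available; both are placeholders rather than part of the paper.

```python
# Sketch of the measurement step: fit the "slope" of a classifier's concept
# score against the strength of a gender (or race) manipulation applied to
# images of the same identity.
import numpy as np

def gender_slope(image_series, manipulation_strengths, score_image, concept="nurse"):
    """image_series: images of one identity at increasing manipulation strengths
    (e.g. progressively more feminine). score_image: placeholder for a call to
    the commercial classifier being audited. Returns the least-squares slope of
    the concept score w.r.t. the manipulation strength; a slope near 0 indicates
    counterfactual fairness for that concept."""
    scores = np.array([score_image(img, concept) for img in image_series])
    x = np.asarray(manipulation_strengths, dtype=float)
    slope, _intercept = np.polyfit(x, scores, 1)
    return slope
```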
Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia
DOI: 10.1145/3422841
Citations: 0