Training Computational Models of Group Processes without Groundtruth: the Self- vs External Assessment’s Dilemma

Lucien Maman, G. Volpe, G. Varni
{"title":"Training Computational Models of Group Processes without Groundtruth: the Self- vs External Assessment’s Dilemma","authors":"Lucien Maman, G. Volpe, G. Varni","doi":"10.1145/3536220.3563687","DOIUrl":null,"url":null,"abstract":"Supervised learning relies on the availability and reliability of the labels used to train computational models. In research areas such as Affective Computing and Social Signal Processing, such labels are usually extracted from multiple self- and/or external assessments. Labels are, then, either aggregated to produce a single groundtruth label, or all used during training, potentially resulting in degrading performance of the models. Defining a “true” label is, however, complex. Labels can be gathered at different times, with different tools, and may contain biases. Furthermore, multiple assessments are usually available for a same sample with potential contradictions. Thus, it is crucial to devise strategies that can take advantage of both self- and external assessments to train computational models without a reliable groundtruth. In this study, we designed and tested 3 of such strategies with the aim of mitigating the biases and making the models more robust to uncertain labels. 
Results show that the strategy based on weighting the loss during training according to a measure of disagreement improved the performances of the baseline, hence, underlining the potential of such an approach.","PeriodicalId":186796,"journal":{"name":"Companion Publication of the 2022 International Conference on Multimodal Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Companion Publication of the 2022 International Conference on Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3536220.3563687","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Supervised learning relies on the availability and reliability of the labels used to train computational models. In research areas such as Affective Computing and Social Signal Processing, such labels are usually extracted from multiple self- and/or external assessments. The labels are then either aggregated into a single groundtruth label or all used during training, potentially degrading the performance of the models. Defining a "true" label is, however, complex. Labels can be gathered at different times, with different tools, and may contain biases. Furthermore, multiple assessments are usually available for the same sample, with potential contradictions among them. It is therefore crucial to devise strategies that can take advantage of both self- and external assessments to train computational models without a reliable groundtruth. In this study, we designed and tested three such strategies with the aim of mitigating these biases and making the models more robust to uncertain labels. Results show that the strategy based on weighting the loss during training according to a measure of disagreement improved performance over the baseline, underlining the potential of such an approach.
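The disagreement-weighted loss mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the choice of population standard deviation as the disagreement measure, the weighting function, the MSE objective, and all names here are assumptions made for the example.

```python
from statistics import pstdev

def disagreement_weight(ratings):
    """Weight in (0, 1] for one sample: 1.0 when all raters agree,
    decreasing as the spread (population std) of the ratings grows.
    The specific mapping 1 / (1 + std) is an illustrative choice."""
    return 1.0 / (1.0 + pstdev(ratings))

def weighted_mse(predictions, targets, weights):
    """Mean squared error where each sample's error is scaled by its
    disagreement-based weight before averaging."""
    total = sum(w * (p - t) ** 2
                for p, t, w in zip(predictions, targets, weights))
    return total / len(predictions)

# Hypothetical data: 3 samples, each rated by 3 assessors
# (e.g., self- and external assessments of a group process).
ratings = [[3.0, 3.0, 3.0],   # full agreement  -> weight 1.0
           [1.0, 5.0, 3.0],   # strong disagreement -> downweighted
           [4.0, 4.0, 5.0]]
targets = [sum(r) / len(r) for r in ratings]   # simple aggregated label
weights = [disagreement_weight(r) for r in ratings]
preds = [3.1, 2.8, 4.2]
loss = weighted_mse(preds, targets, weights)
```

Samples on which the assessors contradict each other contribute less to the training signal, so the model is penalized most where the label is most trustworthy.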