Cost-Optimized Crowdsourcing for NLP via Worker Selection and Data Augmentation

IF 6.7 | Zone 2, Computer Science | Q1 ENGINEERING, MULTIDISCIPLINARY
Liner Yang;Yujie Wang;Zhixuan Fang;Yaping Huang;Erhong Yang
{"title":"通过工人选择和数据增强的成本优化的NLP众包","authors":"Liner Yang;Yujie Wang;Zhixuan Fang;Yaping Huang;Erhong Yang","doi":"10.1109/TNSE.2025.3559342","DOIUrl":null,"url":null,"abstract":"This paper presents worker selection and data augmentation algorithms aimed at improving annotation quality and reducing costs in crowdsourcing for Natural Language Processing (NLP). Unlike previous studies targeting simpler tasks like binary classification, which require less contextual understanding, this study aims to provide a unified paradigm for a wider spectrum of NLP tasks, with sequence labeling and text generation as application showcases. Utilizing a Combinatorial Multi-Armed Bandit (CMAB) approach and a cost-effective human feedback mechanism, the proposed worker selection algorithm effectively addresses the challenge of label inter-dependency in NLP tasks. Additionally, our algorithm tackles the issues presented by imbalanced and small-scale datasets through data augmentation methods. Experiments on the CoNLL 2003 NER, Chinese OEI, and YACLC datasets demonstrated the algorithm's efficiency, achieving up to 100.04% of the expert-only baseline <inline-formula><tex-math>${\\text{F}}$</tex-math></inline-formula>-score and 65.97% cost savings. A dataset-independent experiment yielded 97.56% of the expert baseline <inline-formula><tex-math>${\\text{F}}$</tex-math></inline-formula>-score and 59.88% cost savings. We also provide a theoretical analysis proving our worker selection framework achieves sub-linear regret.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 4","pages":"3343-3359"},"PeriodicalIF":6.7000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cost-Optimized Crowdsourcing for NLP via Worker Selection and Data Augmentation\",\"authors\":\"Liner Yang;Yujie Wang;Zhixuan Fang;Yaping Huang;Erhong Yang\",\"doi\":\"10.1109/TNSE.2025.3559342\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents worker selection and data augmentation algorithms aimed at improving annotation quality and reducing costs in crowdsourcing for Natural Language Processing (NLP). Unlike previous studies targeting simpler tasks like binary classification, which require less contextual understanding, this study aims to provide a unified paradigm for a wider spectrum of NLP tasks, with sequence labeling and text generation as application showcases. Utilizing a Combinatorial Multi-Armed Bandit (CMAB) approach and a cost-effective human feedback mechanism, the proposed worker selection algorithm effectively addresses the challenge of label inter-dependency in NLP tasks. Additionally, our algorithm tackles the issues presented by imbalanced and small-scale datasets through data augmentation methods. Experiments on the CoNLL 2003 NER, Chinese OEI, and YACLC datasets demonstrated the algorithm's efficiency, achieving up to 100.04% of the expert-only baseline <inline-formula><tex-math>${\\\\text{F}}$</tex-math></inline-formula>-score and 65.97% cost savings. A dataset-independent experiment yielded 97.56% of the expert baseline <inline-formula><tex-math>${\\\\text{F}}$</tex-math></inline-formula>-score and 59.88% cost savings. 
We also provide a theoretical analysis proving our worker selection framework achieves sub-linear regret.\",\"PeriodicalId\":54229,\"journal\":{\"name\":\"IEEE Transactions on Network Science and Engineering\",\"volume\":\"12 4\",\"pages\":\"3343-3359\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2025-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Network Science and Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10959726/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10959726/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents worker selection and data augmentation algorithms aimed at improving annotation quality and reducing costs in crowdsourcing for Natural Language Processing (NLP). Unlike previous studies targeting simpler tasks such as binary classification, which require less contextual understanding, this study aims to provide a unified paradigm for a wider spectrum of NLP tasks, with sequence labeling and text generation as application showcases. Utilizing a Combinatorial Multi-Armed Bandit (CMAB) approach and a cost-effective human feedback mechanism, the proposed worker selection algorithm effectively addresses the challenge of label inter-dependency in NLP tasks. Additionally, our algorithm tackles the issues presented by imbalanced and small-scale datasets through data augmentation methods. Experiments on the CoNLL 2003 NER, Chinese OEI, and YACLC datasets demonstrated the algorithm's efficiency, achieving up to 100.04% of the expert-only baseline ${\text{F}}$-score and 65.97% cost savings. A dataset-independent experiment yielded 97.56% of the expert baseline ${\text{F}}$-score and 59.88% cost savings. We also provide a theoretical analysis proving our worker selection framework achieves sub-linear regret.
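To make the worker selection idea concrete, here is a minimal, hypothetical sketch of a Combinatorial UCB (CUCB)-style selector in Python. It is not the paper's algorithm: the class name `CUCBWorkerSelector`, the exploration constant `1.5`, and the per-worker quality signal (e.g., agreement with expert feedback, scaled to [0, 1]) are illustrative assumptions, and the paper's actual method further handles label inter-dependency and its specific cost-aware feedback mechanism.

```python
import math
import random

class CUCBWorkerSelector:
    """Illustrative CUCB-style selector: pick k of n crowd workers per round."""

    def __init__(self, n_workers: int, k: int):
        self.n = n_workers               # number of candidate workers (arms)
        self.k = k                       # workers selected per round (super-arm size)
        self.counts = [0] * n_workers    # times each worker has been selected
        self.means = [0.0] * n_workers   # empirical mean quality per worker
        self.t = 0                       # total rounds played

    def select(self) -> list[int]:
        """Return the indices of the k workers with the largest UCB scores."""
        self.t += 1
        # Try every worker at least once before trusting the confidence bounds.
        untried = [i for i in range(self.n) if self.counts[i] == 0]
        if untried:
            tried = [i for i in range(self.n) if self.counts[i] > 0]
            return untried[: self.k] + random.sample(tried, max(0, self.k - len(untried)))
        ucb = [
            self.means[i] + math.sqrt(1.5 * math.log(self.t) / self.counts[i])
            for i in range(self.n)
        ]
        return sorted(range(self.n), key=lambda i: ucb[i], reverse=True)[: self.k]

    def update(self, worker: int, quality: float) -> None:
        """Incorporate an observed quality score in [0, 1] for one selected worker."""
        self.counts[worker] += 1
        self.means[worker] += (quality - self.means[worker]) / self.counts[worker]
```

In a crowdsourcing loop, `select()` would choose the annotator subset for each batch and `update()` would feed back a proxy quality signal such as agreement with expert spot checks; the sqrt(log t / count) exploration bonus is what gives standard CUCB its sub-linear regret under bounded rewards, mirroring the kind of guarantee the paper proves for its framework.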
Source journal
IEEE Transactions on Network Science and Engineering
Subject category: Engineering - Control and Systems Engineering
CiteScore: 12.60
Self-citation rate: 9.10%
Articles published per year: 393
Journal profile: The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles that deal with the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, the journal publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus is the interactions between and co-evolution of different genres of networks.