Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing: Latest Publications

Eliciting and Learning with Soft Labels from Every Annotator
Pub Date: 2022-07-02. DOI: 10.48550/arXiv.2207.00810
K. M. Collins, Umang Bhatt, Adrian Weller
Abstract: The labels used to train machine learning (ML) models are of paramount importance. Typically for ML classification tasks, datasets contain hard labels, yet learning using soft labels has been shown to yield benefits for model generalization, robustness, and calibration. Earlier work found success in forming soft labels from multiple annotators' hard labels; however, this approach may not converge to the best labels and necessitates many annotators, which can be expensive and inefficient. We focus on efficiently eliciting soft labels from individual annotators. We collect and release a dataset of soft labels (which we call CIFAR-10S) over the CIFAR-10 test set via a crowdsourcing study (N=248). We demonstrate that learning with our labels achieves comparable model performance to prior approaches while requiring far fewer annotators -- albeit with significant temporal costs per elicitation. Our elicitation methodology therefore shows nuanced promise in enabling practitioners to enjoy the benefits of improved model performance and reliability with fewer annotators, and serves as a guide for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
Citations: 21
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Pub Date: 2022-06-22. DOI: 10.48550/arXiv.2206.10847
Q. Liao, Yunfeng Zhang, Ronny Luss, F. Doshi-Velez, Amit Dhurandhar (Microsoft Research, Twitter Inc., IBM Research)
Abstract: Recent years have seen a surge of interest in the field of explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies; researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision-support. Algorithmic research of XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners to make technical choices. We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts. Towards this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results urge for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.
Citations: 31
A Human-Centric Perspective on Model Monitoring
Pub Date: 2022-06-06. DOI: 10.1609/hcomp.v10i1.21997
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
Abstract: Predictive models are increasingly used to make various consequential decisions in high-stakes domains such as healthcare, finance, and policy. It becomes critical to ensure that these models make accurate predictions, are robust to shifts in the data, do not rely on spurious features, and do not unduly discriminate against minority groups. To this end, several approaches spanning various areas such as explainability, fairness, and robustness have been proposed in recent literature. Such approaches need to be human-centered, as they are meant to help users understand the models. However, there is little to no research on understanding the needs and challenges in monitoring deployed machine learning (ML) models from a human-centric perspective. To address this gap, we conducted semi-structured interviews with 13 practitioners who are experienced with deploying ML models and engaging with customers spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants. We identified various human-centric challenges and requirements for model monitoring in real-world applications. Specifically, we found that relevant stakeholders would want model monitoring systems to provide clear, unambiguous, and easy-to-understand insights that are readily actionable. Furthermore, our study also revealed that stakeholders desire customization of model monitoring systems to cater to domain-specific use cases.
Citations: 5
Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators' Expertise
Pub Date: 2022-01-25. DOI: 10.1609/hcomp.v10i1.21987
Komal Dhull, Steven Jecmen, Pravesh Kothari, Nihar B. Shah
Abstract: Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of homeworks, grant proposal review, conference peer review of scientific papers, and peer assessment of employees in organizations. Since an individual's own work is in competition with the submissions they are evaluating, they may provide dishonest evaluations to increase the relative standing of their own submission. This issue is typically addressed by partitioning the individuals and assigning them to evaluate the work of only those from different subsets. Although this method ensures strategyproofness, each submission may require a different type of expertise for effective evaluation. In this paper, we focus on finding an assignment of evaluators to submissions that maximizes assigned evaluators' expertise subject to the constraint of strategyproofness. We analyze the price of strategyproofness: that is, the amount of compromise on the assigned evaluators' expertise required in order to get strategyproofness. We establish several polynomial-time algorithms for strategyproof assignment along with assignment-quality guarantees. Finally, we evaluate the methods on a dataset from conference peer review.
Citations: 8
Modeling Simultaneous Preferences for Age, Gender, Race, and Professional Profiles in Government-Expense Spending: A Conjoint Analysis
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18942
Lujain Ibrahim, M. Ghassemi, Tuka Alhanai
Abstract: Bias can have devastating outcomes on everyday life, and may manifest in subtle preferences for particular attributes (age, gender, ethnicity, profession). Understanding bias is complex, but first requires identifying the variety and interplay of individual preferences. In this study, we deployed a sociotechnical, web-based human-subject experiment to quantify individual preferences in the context of selecting an advisor to successfully pitch a government-expense. We utilized conjoint analysis to rank the preferences of 722 U.S.-based subjects, and observed that their ideal advisor was White, middle-aged, and of either a government or STEM-related profession (0.68 AUROC, p < 0.05). The results motivate the simultaneous measurement of preferences as a strategy to offset preferences that may yield negative consequences (e.g. prejudice, disenfranchisement) in contexts where social interests are being represented.
Citations: 0
Rapid Instance-Level Knowledge Acquisition for Google Maps from Class-Level Common Sense
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18947
Christopher A. Welty, Lora Aroyo, Flip Korn, S. M. McCarthy, Shubin Zhao
Abstract: Successful knowledge graphs (KGs) solved the historical knowledge acquisition bottleneck by supplanting an expert focus with a simple, crowd-friendly one: KG nodes represent popular people, places, organizations, etc., and the graph arcs represent common sense relations like affiliations, locations, etc. Techniques for more general, categorical KG curation do not seem to have made the same transition: the KG research community is still largely focused on methods that belie the common-sense characteristics of successful KGs. In this paper, we propose a simple approach to acquiring and reasoning with class-level attributes from the crowd that represent broad common sense associations between categories. We pick a very real industrial-scale data set and problem: how to augment an existing knowledge graph of places and products with associations between them indicating the availability of the products at those places, which would enable a KG to provide answers to questions like, "Where can I buy milk nearby?" This problem has several practical challenges, not least of which is that only 30% of physical stores (i.e. brick & mortar stores) have a website, and fewer list their product inventory, leaving a large acquisition gap to be filled by methods other than information extraction (IE). Based on a KG-inspired intuition that a lot of the class-level pairs are part of people's general common sense, e.g. everyone knows grocery stores sell milk and don't sell asphalt, we acquired a mixture of instance- and class-level pairs (e.g., a specific store paired with a product, and a store category paired with a product, respectively) from a novel 3-tier crowdsourcing method, and demonstrate the scalability advantages of the class-level approach. Our results show that crowdsourced class-level knowledge can provide rapid scaling of knowledge acquisition in this and similar domains, as well as long-term value in the KG.
Citations: 4
A Checklist to Combat Cognitive Biases in Crowdsourcing
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18939
Tim Draws, Alisa Rieger, O. Inel, U. Gadiraju, N. Tintarev
Abstract: Recent research has demonstrated that cognitive biases such as the confirmation bias or the anchoring effect can negatively affect the quality of crowdsourced data. In practice, however, such biases go unnoticed unless specifically assessed or controlled for. Task requesters need to ensure that task workflow and design choices do not trigger workers' cognitive biases. Moreover, to facilitate the reuse of crowdsourced data collections, practitioners can benefit from understanding whether and which cognitive biases may be associated with the data. To this end, we propose a 12-item checklist adapted from business psychology to combat cognitive biases in crowdsourcing. We demonstrate the practical application of this checklist in a case study on viewpoint annotations for search results. Through a retrospective analysis of relevant crowdsourcing research that has been published at HCOMP in 2018, 2019, and 2020, we show that cognitive biases may often affect crowd workers but are typically not considered as potential sources of poor data quality. The checklist we propose is a practical tool that requesters can use to improve their task designs and appropriately describe potential limitations of collected data. It contributes to a body of efforts towards making human-labeled data more reliable and reusable.
Citations: 42
Making Time Fly: Using Fillers to Improve Perceived Latency in Crowd-Powered Conversational Systems
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18935
Tahir Abbas, U. Gadiraju, Vassilis-Javed Khan, P. Markopoulos
Abstract: Crowd-Powered Conversational Systems (CPCS) are gaining traction due to their potential utility in a range of application fields where automated conversational interfaces are still inadequate. Currently, long response times negatively impact CPCSs, limiting their potential application as conversational partners. Related research has focused on developing algorithms for swiftly hiring workers and synchronous crowd coordination techniques to ensure high-quality work. Evaluation studies typically concern system reaction times and performance measurements, but have so far not examined the effects of extended wait times on users. The goal of this study, based on time perception models, is to explore how effective different time fillers are at reducing the negative impacts of waiting in CPCSs. To this end, we conducted a rigorous simulation-based between-subjects (N = 930) study on the Prolific crowdsourcing platform to assess the influence of different filler types across three levels of delay (8, 16 & 32s) for Information Retrieval (IR) and stress management tasks. Our results show that asking users to perform secondary tasks (e.g., microtasks or breathing exercises) while waiting for longer periods of time helped divert their attention away from timekeeping, increased their engagement, and resulted in shorter perceived waiting times. For shorter delays, conversational fillers generated more intense immersion and contributed to shorten the perception of time.
Citations: 4
Enhancing Image Classification Capabilities of Crowdsourcing-Based Methods through Expanded Input Elicitation
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18949
Romena Yasmin, Joshua Grassel, Mahmudulla Hassan, O. Fuentes, Adolfo R. Escobedo
Abstract: This study investigates how different forms of input elicitation obtained from crowdsourcing can be utilized to improve the quality of inferred labels for image classification tasks, where an image must be labeled as either positive or negative depending on the presence/absence of a specified object. Three types of input elicitation methods are tested: binary classification (positive or negative); level of confidence in binary response (on a scale from 0-100%); and what participants believe the majority of the other participants' binary classification is. We design a crowdsourcing experiment to test the performance of the proposed input elicitation methods and use data from over 200 participants. Various existing voting and machine learning (ML) methods are applied and others developed to make the best use of these inputs. In an effort to assess their performance on classification tasks of varying difficulty, a systematic synthetic image generation process is developed. Each generated image combines items from the MPEG-7 Core Experiment CE-Shape-1 Test Set into a single image using multiple parameters (e.g., density, transparency, etc.) and may or may not contain a target object. The difficulty of these images is validated by the performance of an automated image classification method. Experimental results suggest that more accurate classifications can be achieved when using the average of the self-reported confidence values as an additional attribute for ML algorithms relative to what is achieved with more traditional approaches. Additionally, they demonstrate that other performance metrics of interest, namely reduced false-negative rates, can be prioritized through special modifications of the proposed aggregation methods that leverage the variety of elicited inputs.
Citations: 5
Exploring the Music Perception Skills of Crowd Workers
Pub Date: 2021-10-04. DOI: 10.1609/hcomp.v9i1.18944
I. P. Samiotis, S. Qiu, C. Lofi, Jie Yang, U. Gadiraju, A. Bozzon
Abstract: Music content annotation campaigns are common on paid crowdsourcing platforms. Crowd workers are expected to annotate complicated music artefacts, which can demand certain skills and expertise. Traditional methods of participant selection are not designed to capture these kinds of domain-specific skills and expertise, and often domain-specific questions fall under the general demographics category. Despite the popularity of such tasks, there is a general lack of deeper understanding of the distribution of musical properties, especially auditory perception skills, among workers. To address this knowledge gap, we conducted a user study (N=100) on Prolific. We asked workers to indicate their musical sophistication through a questionnaire and assessed their music perception skills through an audio-based skill test. The goal of this work is to better understand the extent to which crowd workers possess higher perception skills, beyond their own musical education level and self-reported abilities. Our study shows that untrained crowd workers can possess high perception skills on the music elements of melody, tuning, accent, and tempo; skills that can be useful in a plethora of annotation tasks in the music domain.
Citations: 5