Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing: Latest Publications

How Context Influences Cross-Device Task Acceptance in Crowd Work
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7463
Danula Hettiachchi, S. Wijenayake, S. Hosio, V. Kostakos, Jorge Gonçalves
{"title":"How Context Influences Cross-Device Task Acceptance in Crowd Work","authors":"Danula Hettiachchi, S. Wijenayake, S. Hosio, V. Kostakos, Jorge Gonçalves","doi":"10.1609/hcomp.v8i1.7463","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7463","url":null,"abstract":"Although crowd work is typically completed through desktop or laptop computers by workers at their home, literature has shown that crowdsourcing is feasible through a wide array of computing devices, including smartphones and digital voice assistants. An integrated crowdsourcing platform that operates across multiple devices could provide greater flexibility to workers, but there is little understanding of crowd workers’ perceptions on uptaking crowd tasks across multiple contexts through such devices. Using a crowdsourcing survey task, we investigate workers’ willingness to accept different types of crowd tasks presented on three device types in different scenarios of varying location, time and social context. Through analysis of over 25,000 responses received from 329 crowd workers on Amazon Mechanical Turk, we show that when tasks are presented on different devices, the task acceptance rate is 80.5% on personal computers, 77.3% on smartphones and 70.7% on digital voice assistants. Our results also show how different contextual factors such as location, social context and time influence workers decision to accept a task on a given device. Our findings provide important insights towards the development of effective task assignment mechanisms for cross-device crowd platforms.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76966550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Enhancing Collective Estimates by Aggregating Cardinal and Ordinal Inputs
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7465
Ryan Kemmer, Yeawon Yoo, Adolfo R. Escobedo, Ross Maciejewski
{"title":"Enhancing Collective Estimates by Aggregating Cardinal and Ordinal Inputs","authors":"Ryan Kemmer, Yeawon Yoo, Adolfo R. Escobedo, Ross Maciejewski","doi":"10.1609/hcomp.v8i1.7465","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7465","url":null,"abstract":"There are many factors that affect the quality of data received from crowdsourcing, including cognitive biases, varying levels of expertise, and varying subjective scales. This work investigates how the elicitation and integration of multiple modalities of input can enhance the quality of collective estimations. We create a crowdsourced experiment where participants are asked to estimate the number of dots within images in two ways: ordinal (ranking) and cardinal (numerical) estimates. We run our study with 300 participants and test how the efficiency of crowdsourced computation is affected when asking participants to provide ordinal and/or cardinal inputs and how the accuracy of the aggregated outcome is affected when using a variety of aggregation methods. First, we find that more accurate ordinal and cardinal estimations can be achieved by prompting participants to provide both cardinal and ordinal information. Second, we present how accurate collective numerical estimates can be achieved with significantly fewer people when aggregating individual preferences using optimization-based consensus aggregation models. Interestingly, we also find that aggregating cardinal information may yield more accurate ordinal estimates.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89321231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
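The abstract above combines two input modalities: numeric (cardinal) estimates and rankings (ordinal). As a rough, minimal sketch of that idea only, the Python snippet below aggregates cardinal estimates with a per-item median and ordinal inputs with a Borda count; the paper itself uses optimization-based consensus aggregation models, which this sketch does not reproduce, and the image names and worker responses here are made-up placeholders.

```python
from statistics import median

# Each worker gives a numeric estimate per image (cardinal)...
cardinal = {
    "img_a": [110, 95, 130],
    "img_b": [40, 55, 48],
    "img_c": [220, 180, 205],
}
# ...and a ranking of the images from fewest to most dots (ordinal).
ordinal = [
    ["img_b", "img_a", "img_c"],
    ["img_b", "img_c", "img_a"],
    ["img_b", "img_a", "img_c"],
]

# Cardinal consensus: the median is less sensitive to outlier guesses than the mean.
cardinal_consensus = {img: median(vals) for img, vals in cardinal.items()}

# Ordinal consensus: Borda-style scoring summed over all submitted rankings.
borda = {img: 0 for img in cardinal}
for ranking in ordinal:
    for position, img in enumerate(ranking):
        borda[img] += position  # lower total = collectively ranked as having fewer dots

order_from_cardinal = sorted(cardinal_consensus, key=cardinal_consensus.get)
order_from_ordinal = sorted(borda, key=borda.get)
print(cardinal_consensus)
print(order_from_cardinal, order_from_ordinal)
```

Comparing the order implied by the medians with the Borda order gives one simple way to check whether the two modalities agree on a given set of items.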
Understanding the Effects of Explanation Types and User Motivations on Recommender System Use
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7466
Qing Li, Sharon Lynn Chu Yew Yee, Nanjie Rao, Mahsan Nourani
{"title":"Understanding the Effects of Explanation Types and User Motivations on Recommender System Use","authors":"Qing Li, Sharon Lynn Chu Yew Yee, Nanjie Rao, Mahsan Nourani","doi":"10.1609/hcomp.v8i1.7466","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7466","url":null,"abstract":"It is becoming increasingly common for intelligent systems, such as recommender systems, to provide explanations for their generated recommendations to the users. However, we still do not have a good understanding of what types of explanations work and what factors affect the effectiveness of different types of explanations. Our work focuses on explanations for movie recommender systems. This paper presents a mixed study where we hypothesize that the type of explanation, as well as user motivation for watching movies, will affect how users respond to recommendation system explanations. Our study compares three types of explanations: i) neighbor-ratings, ii) profile-based, and iii) event-based, as well as three types of user movie-watching motivations: i) hedonic (fun and relaxation), ii) eudaimonic (inspiration and meaningfulness), and iii) educational (learning new content). We discuss the implications of the study results for the design of explanations for movie recommender systems, and future novel research directions that the study results uncover.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83478992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7473
S. Akhtar, Valerio Basile, V. Patti
{"title":"Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection","authors":"S. Akhtar, Valerio Basile, V. Patti","doi":"10.1609/hcomp.v8i1.7473","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7473","url":null,"abstract":"In this paper we propose an approach to exploit the fine-grained knowledge expressed by individual human annotators during a hate speech (HS) detection task, before the aggregation of single judgments in a gold standard dataset eliminates non-majority perspectives. We automatically divide the annotators into groups, aiming at grouping them by similar personal characteristics (ethnicity, social background, culture etc.). To serve a multi-lingual perspective, we performed classification experiments on three different Twitter datasets in English and Italian languages. We created different gold standards, one for each group, and trained a state-of-the-art deep learning model on them, showing that supervised models informed by different perspectives on the target phenomena outperform a baseline represented by models trained on fully aggregated data. Finally, we implemented an ensemble approach that combines the single perspective-aware classifiers into an inclusive model. The results show that this strategy further improves the classification performance, especially with a significant boost in the recall of HS prediction.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82255452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
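As a rough illustration of the perspective-aware setup sketched in the abstract above, the snippet below builds one gold standard per annotator group by majority vote, trains one classifier per group, and combines them with a simple "flag if any group model flags it" ensemble. It assumes scikit-learn is installed and uses a TF-IDF plus logistic-regression pipeline as a stand-in for the paper's deep learning model; the groups, texts, and labels are invented placeholders.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["text one", "text two", "text three", "text four"]
# annotations[group] = one list of 0/1 labels (per text) for each annotator in that group
annotations = {
    "group_a": [[1, 0, 1, 0], [1, 0, 0, 0]],
    "group_b": [[0, 0, 1, 1], [0, 1, 1, 1]],
}

def majority(labels_per_annotator):
    """Aggregate one group's annotations into a per-text gold standard."""
    per_text = zip(*labels_per_annotator)
    return [Counter(col).most_common(1)[0][0] for col in per_text]

# One gold standard and one classifier per annotator group.
group_models = {}
for group, labels in annotations.items():
    gold = majority(labels)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, gold)
    group_models[group] = model

# Inclusive ensemble: flag a text as hate speech if any group model does,
# one simple combination rule that tends to raise recall.
def ensemble_predict(text):
    return int(any(m.predict([text])[0] == 1 for m in group_models.values()))

print([ensemble_predict(t) for t in texts])
```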
Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7479
Xiaotong Tone Xu, Judith E. Fan, Steven W. Dow
{"title":"Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work","authors":"Xiaotong Tone Xu, Judith E. Fan, Steven W. Dow","doi":"10.1609/hcomp.v8i1.7479","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7479","url":null,"abstract":"While most crowd work seeks consistent answers, creative domains often seek more diverse input. The typical crowd mechanisms for controlling quality may stifle creativity, yet removing them altogether could just produce noise. Schemas and metadata provide two mechanisms for embedding existing knowledge into task environments. Schemas are expert-derived patterns designed to structure how people think through a problem. Metadata, on the other hand, illustrate a range of creative input that fits within the structure of a schema. To understand the relative effects of schemas and metadata, we conducted a study where crowd workers are asked to generate creative interpretations for a set of placemaking examples. Crowd workers were guided either by schema plus metadata, schema alone, or neither. We found that showing schema along with crowd-produced metadata helped workers contribute interpretations that are both more on-topic and diverse, compared to using the schema alone or no schema. We discuss the implications on how crowds can creatively build on insights shared by others.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88222514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-10-01 DOI: 10.1609/hcomp.v8i1.7470
Amy Rechkemmer, Ming Yin
{"title":"Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training","authors":"Amy Rechkemmer, Ming Yin","doi":"10.1609/hcomp.v8i1.7470","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7470","url":null,"abstract":"Training workers within a task is one way of enabling novice workers, who may lack domain knowledge or experience, to work on complex crowdsourcing tasks. Based on goal setting theory in psychology, we conduct a randomized experiment to study whether and how setting different goals—including performance goal, learning goal, and behavioral goal—when training workers for a complex crowdsourcing task affects workers’ learning perception, learning gain, and post-training performance. We find that setting different goals during training significantly affects workers’ learning perception, but overall does not have an effect on learning gain or post-training performance. However, higher levels of learning gain can be obtained when setting learning goals for workers who are highly learning-oriented. Additionally, giving workers a challenging behavioral goal can nudge them to adopt desirable behavior meant to improve learning and performance, though the adoption of such behavior does not lead to as much improvement as when the worker decides to take part in the behavior themselves. We conclude by discussing the lessons we’ve learned on how to effectively utilize goals in complex crowdsourcing task training.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86658912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-08-26 DOI: 10.1609/hcomp.v8i1.7477
Hua Shen, Ting-Hao 'Kenneth' Huang
{"title":"How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels","authors":"Hua Shen, Ting-Hao 'Kenneth' Huang","doi":"10.1609/hcomp.v8i1.7477","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7477","url":null,"abstract":"Explaining to users why automated systems make certain mistakes is important and challenging. Researchers have proposed ways to automatically produce interpretations for deep neural network models. However, it is unclear how useful these interpretations are in helping users figure out why they are getting an error. If an interpretation effectively explains to users how the underlying deep neural network model works, people who were presented with the interpretation should be better at predicting the model’s outputs than those who were not. This paper presents an investigation on whether or not showing machine-generated visual interpretations helps users understand the incorrectly predicted labels produced by image classifiers. We showed the images and the correct labels to 150 online crowd workers and asked them to select the incorrectly predicted labels with or without showing them the machine-generated visual interpretations. The results demonstrated that displaying the visual interpretations did not increase, but rather decreased, the average guessing accuracy by roughly 10%.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83323305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-08-20 DOI: 10.1609/hcomp.v8i1.7469
Mahsan Nourani, J. King, E. Ragan
{"title":"The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems","authors":"Mahsan Nourani, J. King, E. Ragan","doi":"10.1609/hcomp.v8i1.7469","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7469","url":null,"abstract":"Domain-specific intelligent systems are meant to help system users in their decision-making process. Many systems aim to simultaneously support different users with varying levels of domain expertise, but prior domain knowledge can affect user trust and confidence in detecting system errors. While it is also known that user trust can be influenced by first impressions with intelligent systems, our research explores the relationship between ordering bias and domain expertise when encountering errors in intelligent systems. In this paper, we present a controlled user study to explore the role of domain knowledge in establishing trust and susceptibility to the influence of first impressions on user trust. Participants reviewed an explainable image classifier with a constant accuracy and two different orders of observing system errors (observing errors in the beginning of usage vs. in the end). Our findings indicate that encountering errors early-on can cause negative first impressions for domain experts, negatively impacting their trust over the course of interactions. However, encountering correct outputs early helps more knowledgable users to dynamically adjust their trust based on their observations of system performance. In contrast, novice users suffer from over-reliance due to their lack of proper knowledge to detect errors.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77737214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59
CrowDEA: Multi-view Idea Prioritization with Crowds
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2020-08-01 DOI: 10.1609/hcomp.v8i1.7460
Yukino Baba, Jiyi Li, H. Kashima
{"title":"CrowDEA: Multi-view Idea Prioritization with Crowds","authors":"Yukino Baba, Jiyi Li, H. Kashima","doi":"10.1609/hcomp.v8i1.7460","DOIUrl":"https://doi.org/10.1609/hcomp.v8i1.7460","url":null,"abstract":"Given a set of ideas collected from crowds with regard to an open-ended question, how can we organize and prioritize them in order to determine the preferred ones based on preference comparisons by crowd evaluators? As there are diverse latent criteria for the value of an idea, multiple ideas can be considered as “the best”. In addition, evaluators can have different preference criteria, and their comparison results often disagree. In this paper, we propose an analysis method for obtaining a subset of ideas, which we call frontier ideas, that are the best in terms of at least one latent evaluation criterion. We propose an approach, called CrowDEA, which estimates the embeddings of the ideas in the multiple-criteria preference space, the best viewpoint for each idea, and preference criterion for each evaluator, to obtain a set of frontier ideas. Experimental results using real datasets containing numerous ideas or designs demonstrate that the proposed approach can effectively prioritize ideas from multiple viewpoints, thereby detecting frontier ideas. The embeddings of ideas learned by the proposed approach provide a visualization that facilitates observation of the frontier ideas. In addition, the proposed approach prioritizes ideas from a wider variety of viewpoints, whereas the baselines tend to use to the same viewpoints; it can also handle various viewpoints and prioritize ideas in situations where only a limited number of evaluators or labels are available.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81222587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
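CrowDEA learns idea embeddings and per-evaluator criteria from pairwise preferences; the snippet below does not attempt that, but it illustrates the underlying notion of "frontier ideas" in the simplest possible way: given already-available scores for each idea on several latent criteria, keep the ideas that no other idea dominates on every criterion (a Pareto frontier). The idea names and scores are made-up placeholders.

```python
# Per-idea scores on two hypothetical latent criteria, e.g. (novelty, feasibility).
scores = {
    "idea_1": (0.9, 0.2),
    "idea_2": (0.4, 0.8),
    "idea_3": (0.3, 0.3),  # dominated by idea_4 on both criteria
    "idea_4": (0.7, 0.6),
}

def dominates(a, b):
    """True if a is at least as good as b on every criterion and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Frontier ideas: those not dominated by any other idea.
frontier = [
    idea for idea, s in scores.items()
    if not any(dominates(other, s) for name, other in scores.items() if name != idea)
]
print(frontier)  # idea_3 is dominated; the other three are frontier ideas
```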
Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance
Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing Pub Date : 2019-10-28 DOI: 10.1609/hcomp.v7i1.5285
Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, E. Horvitz
{"title":"Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance","authors":"Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, E. Horvitz","doi":"10.1609/hcomp.v7i1.5285","DOIUrl":"https://doi.org/10.1609/hcomp.v7i1.5285","url":null,"abstract":"Decisions made by human-AI teams (e.g., AI-advised humans) are increasingly common in high-stakes domains such as healthcare, criminal justice, and finance. Achieving high team performance depends on more than just the accuracy of the AI system: Since the human and the AI may have different expertise, the highest team performance is often reached when they both know how and when to complement one another. We focus on a factor that is crucial to supporting such complementary: the human’s mental model of the AI capabilities, specifically the AI system’s error boundary (i.e. knowing “When does the AI err?”). Awareness of this lets the human decide when to accept or override the AI’s recommendation. We highlight two key properties of an AI’s error boundary, parsimony and stochasticity, and a property of the task, dimensionality. We show experimentally how these properties affect humans’ mental models of AI capabilities and the resulting team performance. We connect our evaluations to related work and propose goals, beyond accuracy, that merit consideration during model selection and optimization to improve overall human-AI team performance.","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91533124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 240