Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing: Latest Publications

Performance of Paid and Volunteer Image Labeling in Citizen Science - A Retrospective Analysis
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21988
Kutub Gandhi, Sofia Eleni Spatharioti, Scott Eustis, S. Wylie, Seth Cooper
Abstract: Citizen science projects that rely on human computation can attempt to solicit volunteers or use paid microwork platforms such as Amazon Mechanical Turk. To better understand these approaches, this paper analyzes crowdsourced image label data from an environmental justice project examining wetland loss off the coast of Louisiana. This retrospective analysis identifies key differences between the two populations: while Mechanical Turk workers are accessible, cost-efficient, and rate more images than volunteers on average, their labels are of lower quality, whereas volunteers can achieve high accuracy with comparably few votes. Volunteer organizations can also interface with the educational or outreach goals of an organization in ways that the limited context of microwork prevents.
Citations: 1
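The accuracy comparison in this entry rests on aggregating redundant per-image votes into a single label. A minimal sketch of majority-vote aggregation, the standard baseline for such analyses (the vote data below are hypothetical, not the paper's actual pipeline):

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label among a list of worker votes."""
    if not votes:
        return None
    (label, _count), = Counter(votes).most_common(1)
    return label

# Hypothetical votes on one wetland image: the finding above is that
# volunteers converge on the correct label with fewer votes than paid
# workers, whose individual labels are noisier.
volunteer_votes = ["water", "water", "land"]
paid_votes = ["water", "land", "land", "water", "water"]

assert majority_label(volunteer_votes) == "water"
assert majority_label(paid_votes) == "water"
```

Under this scheme, noisier labelers simply require more redundant votes to reach the same aggregate accuracy, which is the cost-quality trade-off the analysis quantifies.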
HSI: Human Saliency Imitator for Benchmarking Saliency-Based Model Explanations
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.22002
Yi Yang, Yueyuan Zheng, Didan Deng, Jindi Zhang, Yongxiang Huang, Yumeng Yang, J. Hsiao, Caleb Chen Cao
Abstract: Model explanations are generated by XAI (explainable AI) methods to help people understand and interpret machine learning models. To study XAI methods from the human perspective, we propose a human-based benchmark dataset, the human saliency benchmark (HSB), for evaluating saliency-based XAI methods. Unlike existing human saliency annotations, where class-related features are manually and subjectively labeled, this benchmark collects more objective human attention on visual information with a precise eye-tracking device and a novel crowdsourcing experiment. Taking the labor cost of human experiments into consideration, we further explore the potential of using a prediction model trained on HSB to mimic saliency annotation by humans. Hence, a dense prediction problem is formulated, and we propose an encoder-decoder architecture that combines multi-modal and multi-scale features to produce human saliency maps. Accordingly, a pretraining-finetuning method is designed to address the model training problem. Finally, we arrive at a model trained on HSB named the human saliency imitator (HSI). We show, through an extensive evaluation, that HSI can successfully predict human saliency on our HSB dataset, and the HSI-generated human saliency dataset on ImageNet showcases the ability to benchmark XAI methods both qualitatively and quantitatively.
Citations: 5
CHIME: Causal Human-in-the-Loop Model Explanations
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21985
S. Biswas, L. Corti, Stefan Buijsman, Jie Yang
Abstract: Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable, especially in high-stakes domains. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have demonstrated limits and flaws of existing approaches: explanations requiring further interpretation, non-standardised explanatory formats, and overall fragility. In light of this fragmentation, we turn to the philosophy of science to understand what constitutes a good explanation: a generalisation that covers both the actual outcome and, possibly multiple, counterfactual outcomes. Inspired by this, we propose CHIME: a human-in-the-loop, post-hoc approach focused on creating such explanations by establishing the causal features in the input. We first elicit people's cognitive abilities to understand what parts of the input the model might be attending to. Then, through causal discovery, we uncover the underlying causal graph relating the different concepts. Finally, with such a structure, we compute the causal effects different concepts have on a model's outcome. We evaluate the fidelity, coherence, and accuracy of the explanations obtained with CHIME with respect to two state-of-the-art computer vision models trained on real-world image datasets. We found evidence that the explanations reflect the causal concepts tied to a model's prediction, both in terms of causal strength and accuracy.
Citations: 1
SignUpCrowd: Using Sign-Language as an Input Modality for Microtask Crowdsourcing
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21998
Aayush Singh, Sebastian Wehkamp, U. Gadiraju
Abstract: Different input modalities have been proposed and employed in technological landscapes like microtask crowdsourcing. However, sign language remains an input modality that has received little attention. Despite the fact that thousands of people around the world primarily use sign language, very little has been done to include them in such technological landscapes. We aim to address this gap and take a step towards the inclusion of deaf and mute people in microtask crowdsourcing. We first identify various microtasks that can be adapted to use sign language as input, while elucidating the challenges it introduces. We built a system called 'SignUpCrowd' that supports sign language input for microtask crowdsourcing. We carried out a between-subjects study (N=240) to understand the effectiveness of sign language as an input modality for microtask crowdsourcing in comparison to prevalent textual and click input modalities. We explored this through the lens of visual question answering and sentiment analysis tasks by recruiting workers from the Prolific crowdsourcing platform. Our results indicate that sign language as an input modality in microtask crowdsourcing is comparable to the prevalent standards of text and click input. Although people with no knowledge of sign language found it difficult to use, this input modality has the potential to broaden participation in crowd work. We highlight evidence suggesting the scope for sign language as a viable input type for microtask crowdsourcing. Our findings pave the way for further research to introduce sign language in real-world applications and create an inclusive technological landscape that more people can benefit from.
Citations: 1
TaskLint: Automated Detection of Ambiguities in Task Instructions
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21996
V. K. C. Manam, Joseph Divyan Thomas, Alexander J. Quinn
Abstract: Clear instructions are a necessity for obtaining accurate results from crowd workers. Even small ambiguities can force workers to choose an interpretation arbitrarily, resulting in errors and inconsistency. Crisp instructions require significant time to design, test, and iterate. Recent approaches have engaged workers to detect and correct ambiguities; however, this process increases the time and money required to obtain accurate, consistent results. We present TaskLint, a system to automatically detect problems with task instructions. Leveraging a diverse set of existing NLP tools, TaskLint identifies words and sentences that might foretell worker confusion. This is analogous to static analysis tools for code ("linters"), which detect features in code that might indicate the presence of bugs. Our evaluation of TaskLint using task instructions created by novices confirms the potential for static tools to improve task clarity and the accuracy of results, while also highlighting several challenges.
Citations: 0
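In spirit, a static checker for task instructions works like a code linter: it scans the text for surface patterns that correlate with worker confusion and emits warnings. A toy sketch of the idea (the word list and rules below are illustrative assumptions, not TaskLint's actual checks):

```python
import re

# Illustrative heuristics: vague quantifiers and underspecified terms
# often leave workers guessing at the intended interpretation.
VAGUE_WORDS = {"some", "several", "appropriate", "etc", "relevant"}

def lint_instructions(text):
    """Return a list of (sentence_index, warning) pairs."""
    warnings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sent in enumerate(sentences):
        tokens = {w.lower().strip(".,!?") for w in sent.split()}
        for word in sorted(tokens & VAGUE_WORDS):
            warnings.append((i, f"vague term '{word}'"))
        if len(sent.split()) > 25:
            warnings.append((i, "long sentence; consider splitting"))
    return warnings

report = lint_instructions("Label some relevant objects. Skip blurry images.")
assert report == [(0, "vague term 'relevant'"), (0, "vague term 'some'")]
```

As with code linters, such warnings are advisory: the requester decides whether a flagged phrase is genuinely ambiguous in context.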
Goal-Setting Behavior of Workers on Crowdsourcing Platforms: An Exploratory Study on MTurk and Prolific
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21983
Tahir Abbas, U. Gadiraju
Abstract: A wealth of evidence across several domains indicates that goal setting improves performance and learning by enabling individuals to commit their thoughts and actions to goal achievement. Recently, researchers have begun studying the effects of goal setting in paid crowdsourcing to improve the quality and quantity of contributions, increase learning gains, and hold participants accountable for contributing more effectively. However, there is a lack of research addressing crowd workers' goal-setting practices, how they currently pursue them, and the challenges they face. This information is essential for researchers and developers to create tools that assist crowd workers in pursuing their goals more effectively, thereby improving the quality of their contributions. This paper addresses these gaps through mixed-method research in which we surveyed 205 workers from two crowdsourcing platforms, Amazon Mechanical Turk (MTurk) and Prolific, about their goal-setting practices. Through a 14-item survey, we asked workers about the types of goals they create, their goal achievement strategies, potential barriers that impede goal attainment, and their use of software tools for effective goal management. We discovered that (a) workers actively create intrinsic and extrinsic goals; (b) they use a combination of tools for goal management; (c) medical issues and a busy lifestyle are among the obstacles to their goal achievement; and (d) we gathered suggestions for novel features for future goal management tools. Our findings shed light on the broader implications of developing goal management tools to improve workers' well-being.
Citations: 2
It Is like Finding a Polar Bear in the Savannah! Concept-Level AI Explanations with Analogical Inference from Commonsense Knowledge
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21990
Gaole He, Agathe Balayn, Stefan Buijsman, Jie Yang, U. Gadiraju
Abstract: With recent advances in explainable artificial intelligence (XAI), researchers have started to pay attention to concept-level explanations, which explain model predictions at a high level of abstraction. However, such explanations may be difficult for laypeople to digest due to the potential knowledge gap and the concomitant cognitive load. Inspired by recent work, we argue that analogy-based explanations composed of commonsense knowledge may be a potential solution to this issue. In this paper, we propose analogical inference as a bridge to help end-users leverage their commonsense knowledge to better understand concept-level explanations. Specifically, we design an effective analogy-based explanation generation method and collect 600 analogy-based explanations from 100 crowd workers. Furthermore, we propose a set of structured dimensions for the qualitative assessment of analogy-based explanations and conduct an empirical evaluation of the generated analogies with experts. Our findings reveal significant positive correlations between the qualitative dimensions of analogies and the perceived helpfulness of analogy-based explanations. These insights can inform the design of future methods for generating effective analogy-based explanations. We also find that the understanding of commonsense explanations varies with the experience of the recipient user, which points to the need for further work on personalization when leveraging commonsense explanations.
Citations: 2
Gesticulate for Health's Sake! Understanding the Use of Gestures as an Input Modality for Microtask Crowdsourcing
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21984
Garrett Allen, Andrea Hu, U. Gadiraju
Abstract: Human input is pivotal in building reliable and robust artificial intelligence systems. By providing a means to gather diverse, high-quality, representative, and cost-effective human input on demand, microtask crowdsourcing marketplaces have thrived. Despite the unmistakable benefits of online crowd work, the lack of health provisions and safeguards, along with existing work practices, threatens the sustainability of this paradigm. Prior work has investigated worker engagement and mental health, yet no such investigations into the effects of crowd work on the physical health of workers have been undertaken. Crowd workers complete their work in various sub-optimal work environments, often using the conventional input modality of a mouse and keyboard. The repetitive nature of microtask crowdsourcing can lead to stress-related injuries, such as the well-documented carpal tunnel syndrome. It is known that stretching exercises can help reduce injuries and discomfort in office workers. Gestures, the act of using the body intentionally to affect the behavior of an intelligent system, can serve as both stretches and an alternative form of input for microtasks. To better understand the usefulness of the dual-purpose input modality of ergonomically-informed gestures across different crowdsourced microtasks, we carried out a controlled 2 × 3 between-subjects study (N=294). Considering the potential benefits of gestures as an input modality, our results suggest a real trade-off: worker accuracy in exchange for potential short- to long-term health benefits.
Citations: 2
When More Data Lead Us Astray: Active Data Acquisition in the Presence of Label Bias
Pub Date: 2022-10-14 | DOI: 10.1609/hcomp.v10i1.21994
Yunyi Li, Maria De-Arteaga, M. Saar-Tsechansky
Abstract: Increased awareness of the risks of algorithmic bias has driven a surge of efforts around bias mitigation strategies. The vast majority of the proposed approaches fall under one of two categories: (1) imposing algorithmic fairness constraints on predictive models, and (2) collecting additional training samples. Most recently, at the intersection of these two categories, methods that propose active learning under fairness constraints have been developed. However, proposed bias mitigation strategies typically overlook the bias present in the observed labels. In this work, we study fairness considerations of active data collection strategies in the presence of label bias. We first present an overview of different types of label bias in the context of supervised learning systems. We then empirically show that, when label bias is overlooked, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem. Our results illustrate the unintended consequences of deploying a model that attempts to mitigate a single type of bias while neglecting others, emphasizing the importance of explicitly differentiating between the types of bias that fairness-aware algorithms aim to address, and highlighting the risks of neglecting label bias during data collection.
Citations: 2
Allocation Schemes in Analytic Evaluation: Applicant-Centric Holistic or Attribute-Centric Segmented?
Pub Date: 2022-09-18 | DOI: 10.48550/arXiv.2209.08665
Jingyan Wang, Carmel Baharav, Nihar B. Shah, A. Woolley, R. Ravi
Abstract: Many applications such as hiring and university admissions involve evaluation and selection of applicants. These tasks are fundamentally difficult, and require combining evidence from multiple different aspects (what we term "attributes"). In these applications, the number of applicants is often large, and a common practice is to assign the task to multiple evaluators in a distributed fashion. Specifically, in the often-used holistic allocation, each evaluator is assigned a subset of the applicants, and is asked to assess all relevant information for their assigned applicants. However, such an evaluation process is subject to issues such as miscalibration (evaluators see only a small fraction of the applicants and may not get a good sense of relative quality) and discrimination (evaluators are influenced by irrelevant information about the applicants). We identify that such attribute-based evaluation allows alternative allocation schemes. Specifically, we consider assigning each evaluator more applicants but fewer attributes per applicant, termed segmented allocation. We compare segmented allocation to holistic allocation on several dimensions via theoretical and experimental methods. We establish various tradeoffs between these two approaches, and identify conditions under which one approach results in more accurate evaluation than the other.
Citations: 0
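The two schemes in this entry differ only in how the applicant-by-attribute grid is partitioned among evaluators: holistic splits by applicant, segmented splits by attribute. A minimal sketch of the contrast (the names, sizes, and round-robin assignment are hypothetical, not the paper's formal model):

```python
def holistic_allocation(applicants, attributes, evaluators):
    """Each evaluator assesses all attributes of a subset of applicants."""
    chunks = [applicants[i::len(evaluators)] for i in range(len(evaluators))]
    return {e: [(a, attr) for a in chunk for attr in attributes]
            for e, chunk in zip(evaluators, chunks)}

def segmented_allocation(applicants, attributes, evaluators):
    """Each evaluator assesses one attribute across many applicants."""
    return {e: [(a, attributes[i % len(attributes)]) for a in applicants]
            for i, e in enumerate(evaluators)}

applicants = ["A1", "A2", "A3", "A4"]
attributes = ["essay", "grades"]
evaluators = ["E1", "E2"]

holistic = holistic_allocation(applicants, attributes, evaluators)
segmented = segmented_allocation(applicants, attributes, evaluators)
assert holistic["E1"] == [("A1", "essay"), ("A1", "grades"),
                          ("A3", "essay"), ("A3", "grades")]
assert segmented["E2"] == [(a, "grades") for a in applicants]
```

Under segmented allocation each evaluator sees many more applicants on a single attribute, which is the mechanism behind the calibration benefits the paper analyzes.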