Human computation (Fairfax, Va.): Latest Publications

Public Comment on Draft NOAA Citizen Science Strategy
Human computation (Fairfax, Va.) Pub Date: 2021-03-31 DOI: 10.15346/HC.V8I1.130
L. Shanley, Pietro Michelucci, K. Tsosie, George Wyeth, J. Drapkin, Krystal Azelton, D. Cavalier, J. Holmberg
{"title":"Public Comment on Draft NOAA Citizen Science Strategy","authors":"L. Shanley, Pietro Michelucci, K. Tsosie, George Wyeth, J. Drapkin, Krystal Azelton, D. Cavalier, J. Holmberg","doi":"10.15346/HC.V8I1.130","DOIUrl":"https://doi.org/10.15346/HC.V8I1.130","url":null,"abstract":"This guest editorial briefly describes a history of activities related to engaging the U.S. federal government in citizen science, and presents the recent public comments that we submitted to the American National Oceanic and Atmospheric Association (NOAA) in response to their recently published draft citizen science strategy.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"2 1","pages":"25-42"},"PeriodicalIF":0.0,"publicationDate":"2021-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74118046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Empirical Study on Effects of Self-Correction in Crowdsourced Microtasks
Human computation (Fairfax, Va.) Pub Date: 2021-03-01 DOI: 10.15346/HC.V8I1.1
Masaki Kobayashi, H. Morita, Masaki Matsubara, N. Shimizu, Atsuyuki Morishima
{"title":"Empirical Study on Effects of Self-Correction in Crowdsourced Microtasks","authors":"Masaki Kobayashi, H. Morita, Masaki Matsubara, N. Shimizu, Atsuyuki Morishima","doi":"10.15346/HC.V8I1.1","DOIUrl":"https://doi.org/10.15346/HC.V8I1.1","url":null,"abstract":"Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update their results according to the review.Self-correction was proposed as a complementary approach to statistical algorithms, in which workers independently perform the same task.It can provide higher-quality results with low additional costs. However, thus far, the effects have only been demonstrated in simulations, and empirical evaluations are required.In addition, as self-correction provides feedback to workers, an interesting question arises: whether perceptual learning is observed in self-correction tasks.This paper reports our experimental results on self-corrections with a real-world crowdsourcing service.We found that:(1) Self-correction is effective for making workers reconsider their judgments.(2) Self-correction is effective more if workers are shown the task results of higher-quality workers during the second stage.(3) A perceptual learning effect is observed in some cases. Self-correction can provide feedback that shows workers how to provide high-quality answers in future tasks.(4) A Perceptual learning effect is observed, particularly with workers who moderately change answers in the second stage. This suggests that we can measure the learning potential of workers.These findings imply that requesters/crowdsourcing services can construct a positive loop for improved task results by the self-correction approach.However, (5) no long-term effects of the self-correction task were transferred to other similar tasks in two different settings.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"13 1","pages":"1-24"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85166630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
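The two-stage workflow described in the abstract above is straightforward to simulate. The sketch below is a minimal illustration, not the paper's experimental protocol: the worker accuracy, reference accuracy, and update probability are hypothetical parameters chosen only to show how a review-informed second stage can raise majority-vote accuracy.

```python
# Minimal simulation of a two-stage self-correction workflow.
# All parameters (worker accuracy, reference accuracy, update
# probability) are hypothetical, not values from the paper.
import random

random.seed(0)

def majority(labels):
    """Majority vote over binary labels (ties broken toward 1)."""
    return 1 if sum(labels) * 2 >= len(labels) else 0

def simulate(n_tasks=2000, n_workers=5, p_correct=0.65,
             p_ref=0.9, p_update=0.5):
    acc1 = acc2 = 0
    for _ in range(n_tasks):
        truth = random.randint(0, 1)
        # Stage 1: workers answer independently.
        stage1 = [truth if random.random() < p_correct else 1 - truth
                  for _ in range(n_workers)]
        # Stage 2: each worker reviews the answer of a (simulated)
        # higher-quality worker and may revise toward it.
        reference = truth if random.random() < p_ref else 1 - truth
        stage2 = [reference if random.random() < p_update else a
                  for a in stage1]
        acc1 += majority(stage1) == truth
        acc2 += majority(stage2) == truth
    return acc1 / n_tasks, acc2 / n_tasks

if __name__ == "__main__":
    a1, a2 = simulate()
    print(f"majority-vote accuracy: stage 1 = {a1:.3f}, stage 2 = {a2:.3f}")
```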
Designing for Collective Intelligence and Community Resilience on Social Networks
Human computation (Fairfax, Va.) Pub Date: 2020-05-01 DOI: 10.15346/hc.v8i2.116
Jon Chamberlain, B. Turpin, Maged Ali, Kakia Chatsiou, Kirsty O'Callaghan
{"title":"Designing for Collective Intelligence and Community Resilience on Social Networks","authors":"Jon Chamberlain, B. Turpin, Maged Ali, Kakia Chatsiou, Kirsty O'Callaghan","doi":"10.15346/hc.v8i2.116","DOIUrl":"https://doi.org/10.15346/hc.v8i2.116","url":null,"abstract":"The popularity and ubiquity of social networks has enabled a new form of decentralised online collaboration: groups of users gathering around a central theme and working together to solve problems, complete tasks and develop social connections. Groups that display such `organic collaboration' have been shown to solve tasks quicker and more accurately than other methods of crowdsourcing. They can also enable community action and resilience in response to different events, from casual requests to emergency response and crisis management. However, engaging such groups through formal agencies risks disconnect and disengagement by destabilising motivational structures. This paper explores case studies of this phenomenon, reviews models of motivation that can help design systems to harness these groups and proposes a framework for lightweight engagement using existing platforms and social networks.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"34 1","pages":"15-32"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72769674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Predicting the Working Time of Microtasks Based on Workers' Perception of Prediction Errors
Human computation (Fairfax, Va.) Pub Date: 2019-12-31 DOI: 10.15346/hc.v6i1.110
Susumu Saito, Chun-Wei Chiang, Saiph Savage, Teppei Nakano, Tetsunori Kobayashi, Jeffrey P. Bigham
{"title":"Predicting the Working Time of Microtasks Based on Workers' Perception of Prediction Errors","authors":"Susumu Saito, Chun-Wei Chiang, Saiph Savage, Teppei Nakano, Tetsunori Kobayashi, Jeffrey P. Bigham","doi":"10.15346/hc.v6i1.110","DOIUrl":"https://doi.org/10.15346/hc.v6i1.110","url":null,"abstract":"Crowd workers struggle to earn adequate wages. Given the limited task-related information provided on crowd platforms, workers often fail to estimate how long it would take to complete certain microtasks. Although there exist a few third-party tools and online communities that provide estimates of working times, such information is limited to microtasks that have been previously completed by other workers, and such tasks are usually booked immediately by experienced workers. This paper presents a computational technique for predicting microtask working times (i.e., how much time it takes to complete microtasks) based on past experiences of workers regarding similar tasks. The following two challenges were addressed during development of the proposed predictive model --- (i) collection of sufficient training data labeled with accurate working times, and (ii) evaluation and optimization of the prediction model. The paper first describes how 7,303 microtask submission data records were collected using a web browser extension --- installed by 83 Amazon Mechanical Turk (AMT) workers --- created for characterization of the diversity of worker behavior to facilitate accurate recording of working times. Next, challenges encountered in defining evaluation and/or objective functions have been described based on the tolerance demonstrated by workers with regard to prediction errors. To this end, surveys were conducted in AMT asking workers how they felt regarding prediction errors in working times pertaining to microtasks simulated using an \"imaginary\" AI system. Based on 91,060 survey responses submitted by 875 workers, objective/evaluation functions were derived for use in the prediction model to reflect whether or not the calculated prediction errors would be tolerated by workers. Evaluation results based on worker perceptions of prediction errors revealed that the proposed model was capable of predicting worker-tolerable working times in 73.6% of all tested microtask cases. Further, the derived objective function contributed to realization of accurate predictions across microtasks with more diverse durations.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"800 1","pages":"192-219"},"PeriodicalIF":0.0,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85419106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
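The paper's central idea, scoring predictions by whether workers would tolerate the error, suggests an asymmetric objective rather than plain squared error. The sketch below is a hypothetical stand-in: the paper derives its functions empirically from the 91,060 survey responses, whereas the weights here are invented for illustration.

```python
# Sketch of a tolerance-aware objective for working-time prediction.
# The asymmetric weights are hypothetical; the paper derives its
# objective empirically from worker survey responses.
import numpy as np

def tolerance_loss(predicted, actual, under_weight=2.0, over_weight=1.0):
    """Weighted squared error that penalizes underestimated working
    times (the task runs longer than promised) more heavily than
    overestimates, assuming workers find the former worse."""
    err = np.asarray(predicted, float) - np.asarray(actual, float)
    weights = np.where(err < 0, under_weight, over_weight)
    return float(np.mean(weights * err ** 2))

# Example: comparing two candidate predictors on three tasks (seconds).
actual = [120, 300, 60]
print(tolerance_loss([100, 310, 55], actual))  # underestimates cost double
print(tolerance_loss([140, 290, 65], actual))
```

A model trained against such a loss prefers to err on the side of overestimating, matching the intuition that a task finishing early is more tolerable than one running long.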
POnline: An Online Pupil Annotation Tool Employing Crowd-sourcing and Engagement Mechanisms
Human computation (Fairfax, Va.) Pub Date: 2019-12-02 DOI: 10.15346/hc.v6i1.9
David Gil de Gómez Pérez, R. Bednarik
{"title":"POnline: An Online Pupil Annotation Tool Employing Crowd-sourcing and Engagement Mechanisms","authors":"David Gil de Gómez Pérez, R. Bednarik","doi":"10.15346/hc.v6i1.9","DOIUrl":"https://doi.org/10.15346/hc.v6i1.9","url":null,"abstract":"Pupil center and pupil contour are two of the most important features in the eye-image used for video-based eye-tracking. Well annotated databases are needed in order to allow benchmarking of the available- and new pupil detection and gaze estimation algorithms. Unfortunately, creation of such a data set is costly and requires a lot of efforts, including manual work of the annotators. In addition, reliability of manual annotations is hard to establish with a low number of annotators. In order to facilitate progress of the gaze tracking algorithm research, we created an online pupil annotation tool that engages many users to interact through gamification and allows utilization of the crowd power to create reliable annotations cite{artstein2005bias}. We describe the tool and the mechanisms employed, and report results on the annotation of a publicly available data set. Finally, we demonstrate an example utilization of the new high-quality annotation on a comparison of two state-of-the-art pupil center algorithms.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"105 1","pages":"176-191"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79264678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
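Once crowd annotations are collected, they must be fused into a single label per image. The sketch below shows one common robust choice, a per-coordinate median over annotators' clicks; it is illustrative only, and the paper's actual aggregation may differ.

```python
# Sketch: robust aggregation of crowd pupil-center clicks.
# The per-coordinate median is a standard robust estimator;
# POnline's actual aggregation may differ.
from statistics import median

def aggregate_pupil_center(clicks):
    """clicks: list of (x, y) pixel coordinates from different
    annotators for the same eye image."""
    xs = [x for x, _ in clicks]
    ys = [y for _, y in clicks]
    return median(xs), median(ys)

# Five annotators, one careless outlier that a mean would not survive.
clicks = [(101, 53), (99, 55), (100, 54), (102, 52), (160, 20)]
print(aggregate_pupil_center(clicks))  # -> (101, 53)
```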
Read-Agree-Predict: A Crowdsourced Approach to Discovering Relevant Primary Sources for Historians
Human computation (Fairfax, Va.) Pub Date: 2019-10-08 DOI: 10.15346/hc.v6i1.8
Nai-Ching Wang, D. Hicks, P. Quigley, Kurt Luther
{"title":"Read-Agree-Predict: A Crowdsourced Approach to Discovering Relevant Primary Sources for Historians","authors":"Nai-Ching Wang, D. Hicks, P. Quigley, Kurt Luther","doi":"10.15346/hc.v6i1.8","DOIUrl":"https://doi.org/10.15346/hc.v6i1.8","url":null,"abstract":"Historians spend significant time evaluating the relevance of primary sources that they encounter in digitized archives and through web searches. One reason this task is time-consuming is that historians’ research interests are often highly abstract and specialized. These topics are unlikely to be manually indexed and are difficult to identify with automated text analysis techniques. In this article, we investigate the potential of a new crowdsourcing model in which the historian delegates to a novice crowd the task of evaluating the relevance of primary sources with respect to her unique research interests. The model employs a novel crowd workflow, Read-AgreePredict (RAP), that allows novice crowd workers to perform as well as expert historians. As a useful byproduct, RAP also reveals and prioritizes crowd confusions as targeted learning opportunities. We demonstrate the value of our model with two experiments with paid crowd workers (n=170), with the future goal of extending our work to classroom students and public history interventions. We also discuss broader implications for historical research and education.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"421 1","pages":"147-175"},"PeriodicalIF":0.0,"publicationDate":"2019-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86845242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
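The abstract names the workflow but not its scoring, so the sketch below is one hypothetical way the two signals such a workflow collects could be combined. The Judgment fields and the weighted blend are invented for illustration and are not the paper's actual RAP aggregation.

```python
# Hypothetical aggregation for a Read-Agree-Predict style workflow:
# workers read a source, say whether they agree it is relevant, and
# predict how the group will answer. Field names and weighting are
# illustrative, not the paper's method.
from dataclasses import dataclass

@dataclass
class Judgment:
    agree: bool    # this worker's own relevance judgment
    predict: bool  # the worker's prediction of the group's answer

def rap_score(judgments, predict_weight=0.5):
    """Blend workers' own agreement rate with their predictions of
    the consensus; predictions can stabilize the score when individual
    judgments are noisy but workers can sense the likely consensus."""
    n = len(judgments)
    agree_rate = sum(j.agree for j in judgments) / n
    predict_rate = sum(j.predict for j in judgments) / n
    return (1 - predict_weight) * agree_rate + predict_weight * predict_rate

votes = [Judgment(True, True), Judgment(False, True), Judgment(True, True)]
print(f"relevance score: {rap_score(votes):.2f}")  # -> 0.83
```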
MetaCrowd: Crowdsourcing Biomedical Metadata Quality Assessment
Human computation (Fairfax, Va.) Pub Date: 2019-09-04 DOI: 10.15346/hc.v6i1.6
A. Zaveri, Wei Hu, M. Dumontier
{"title":"MetaCrowd: Crowdsourcing Biomedical Metadata Quality Assessment","authors":"A. Zaveri, Wei Hu, M. Dumontier","doi":"10.15346/hc.v6i1.6","DOIUrl":"https://doi.org/10.15346/hc.v6i1.6","url":null,"abstract":"To reuse the enormous amounts of biomedical data available on the Web, there is an urgent need for good quality metadata. This is extremely important to ensure that data is maximally Findable, Accessible, Interoperable and Reusable. The Gene Expression Omnibus (GEO) allow users to specify metadata in the form of textual key: value pairs (e.g. sex: female). However, since there is no structured vocabulary or format available, the 44,000,000+ key: value pairs suffer from numerous quality issues. Using domain experts for the curation is not only time consuming but also unscalable. Thus, in our approach, MetaCrowd, we apply crowdsourcing as a means for GEO metadata quality assessment. Our results show crowdsourcing is a reliable and feasible way to identify similar as well as erroneous metadata in GEO. This is extremely useful for data consumers and producers for curating and providing good quality metadata.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"49 1","pages":"98-112"},"PeriodicalIF":0.0,"publicationDate":"2019-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75819018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
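At GEO's scale (more than 44 million key: value pairs), each pair can receive only a handful of crowd judgments, so a simple aggregation rule is needed. The sketch below uses plain majority voting to flag erroneous pairs; the threshold and voting rule are illustrative, not MetaCrowd's exact aggregation.

```python
# Sketch: majority-vote quality assessment of key:value metadata.
# Threshold and rule are illustrative, not MetaCrowd's exact method.
from collections import defaultdict

def flag_erroneous(judgments, threshold=0.5):
    """judgments: iterable of (pair, is_erroneous) crowd votes, where
    pair is a 'key: value' string. Returns the pairs that more than
    `threshold` of their voters flagged as erroneous."""
    votes = defaultdict(list)
    for pair, is_erroneous in judgments:
        votes[pair].append(is_erroneous)
    return [pair for pair, v in votes.items()
            if sum(v) / len(v) > threshold]

judgments = [
    ("sex: female", False), ("sex: female", False), ("sex: female", True),
    ("age: -3", True), ("age: -3", True), ("age: -3", False),
]
print(flag_erroneous(judgments))  # -> ['age: -3']
```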
Comparing crowdworkers' and conventional knowledge workers' self-regulated learning strategies in the workplace
Human computation (Fairfax, Va.) Pub Date: 2019-06-18 DOI: 10.15346/HC.V6I1.5
A. Margaryan
{"title":"Comparing crowdworkers' and conventional knowledge workers' self-regulated learning strategies in the workplace","authors":"A. Margaryan","doi":"10.15346/HC.V6I1.5","DOIUrl":"https://doi.org/10.15346/HC.V6I1.5","url":null,"abstract":"This paper compares the strategies used by crowdworkers and conventional knowledge workers to self-regulate their learning in the workplace. Crowdworkers are a self-employed, radically distributed workforce operating outside conventional organisational settings; they have no access to the sorts of training, professional development and incidental learning opportunities that workers in conventional workplaces typically do. The paper explores what differences there are between crowdworkers and conventional knowledge workers in terms of self-regulated learning strategies they undertake. Data were drawn from four datasets using the same survey instrument. Respondents included crowdworkers from CrowdFlower and Upwork platforms and conventional knowledge workers in the finance, education and healthcare sectors. The results show that the majority of crowdworkers and conventional knowledge workers used a wide range of SRL strategies. Among 20 strategies explored, a statistically significant difference was uncovered in the use of only one strategy. Specifically, crowdworkers were significantly less likely than the conventional workers to articulate plans of how to achieve their learning goals. The results suggest that, despite working outside organisational structures, crowdworkers are similar to conventional workers in terms of how they self-regulate their workplace learning. The paper concludes by discussing the implications of these findings and proposing directions for future research. ","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"393 1","pages":"83-97"},"PeriodicalIF":0.0,"publicationDate":"2019-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86823143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Local Crowdsourcing for Annotating Audio: the Elevator Annotator platform
Human computation (Fairfax, Va.) Pub Date: 2019-06-02 DOI: 10.15346/hc.v6i1.1
Themistoklis Karavellas, A. Prameswari, O. Inel, V. D. Boer
{"title":"Local Crowdsourcing for Annotating Audio: the Elevator Annotator platform","authors":"Themistoklis Karavellas, A. Prameswari, O. Inel, V. D. Boer","doi":"10.15346/hc.v6i1.1","DOIUrl":"https://doi.org/10.15346/hc.v6i1.1","url":null,"abstract":"Crowdsourcing and other human computation techniques have proven useful in collecting large numbers of annotations for various datasets. In the majority of cases, online platforms are used when running crowdsourcing campaigns. Local crowdsourcing is a variant where annotation is done on specific physical locations. This paper describes a local crowdsourcing concept, platform and experiment. The case setting concerns eliciting annotations for an audio archive. For the experiment, we developed a hardware platform designed to be deployed in building elevators. To evaluate the effectiveness of the platform and to test the influence of location on the annotation results, an experiment was set up in two different locations. In each location two different user interaction modalities are used. The results show that our simple local crowdsourcing setup is able to achieve acceptable accuracy levels with up to 4 annotations per hour, and that the location has a significant effect on accuracy.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"8 1","pages":"1-11"},"PeriodicalIF":0.0,"publicationDate":"2019-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78169497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
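The claim that location significantly affects accuracy is the kind of result one would check with a contingency-table test. The sketch below runs a chi-square test on made-up correct/incorrect counts for two locations; the numbers and the choice of test are assumptions, not the paper's reported analysis.

```python
# Sketch: testing whether annotation accuracy differs by location.
# The counts are invented; the paper's data and test may differ.
from scipy.stats import chi2_contingency

# Rows are locations, columns are [correct, incorrect] annotations.
table = [[48, 12],   # location A
         [35, 25]]   # location B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```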
A Survey of Crowdsourcing in Medical Image Analysis
Human computation (Fairfax, Va.) Pub Date: 2019-02-25 DOI: 10.15346/HC.V7I1.1
S. Ørting, Andrew Doyle, Matthias Hirth, Arno van Hilten, O. Inel, C. Madan, Panagiotis Mavridis, Helen Spiers, V. Cheplygina
{"title":"A Survey of Crowdsourcing in Medical Image Analysis","authors":"S. Ørting, Andrew Doyle, Matthias Hirth, Arno van Hilten, O. Inel, C. Madan, Panagiotis Mavridis, Helen Spiers, V. Cheplygina","doi":"10.15346/HC.V7I1.1","DOIUrl":"https://doi.org/10.15346/HC.V7I1.1","url":null,"abstract":"Rapid advances in image processing capabilities have been seen across many domains, fostered by the  application of machine learning algorithms to \"big-data\". However, within the realm of medical image analysis, advances have been curtailed, in part, due to the limited availability of large-scale, well-annotated datasets. One of the main reasons for this is the high cost often associated with producing large amounts of high-quality meta-data. Recently, there has been growing interest in the application of crowdsourcing for this purpose; a technique that has proven effective for creating large-scale datasets across a range of disciplines, from computer vision to astrophysics. Despite the growing popularity of this approach, there has not yet been a comprehensive literature review to provide guidance to researchers considering using crowdsourcing methodologies in their own medical imaging analysis. In this survey, we review studies applying crowdsourcing to the analysis of medical images, published prior to July 2018. We identify common approaches, challenges and considerations, providing guidance of utility to researchers adopting this approach. Finally, we discuss future opportunities for development within this emerging domain.","PeriodicalId":92785,"journal":{"name":"Human computation (Fairfax, Va.)","volume":"17 1","pages":"1-26"},"PeriodicalIF":0.0,"publicationDate":"2019-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76783110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 50