Assessing Annotation Consistency in the Wild

Fausto Giunchiglia, M. Zeni, Enrico Bignotti, Wanyi Zhang
DOI: 10.1109/PERCOMW.2018.8480236
Published in: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Publication date: 2018-03-19
Citations: 4

Abstract

The process of human annotation of sensor data is at the base of research areas such as participatory sensing and mobile crowdsensing. While much research has been devoted to assessing the quality of sensor data, the same cannot be said about annotations, which are fundamental to obtaining a clear understanding of users' experience. We present an evaluation of an interdisciplinary annotation methodology that allows users to continuously annotate their everyday life. The evaluation is done on a dataset from a project focused on the behaviour of students and how it impacts their academic performance. We focus on the annotations concerning students' locations and movements, and we evaluate annotation quality by checking its consistency. Results show that students are highly consistent with respect to the random baseline, and that these results can be improved by exploiting the semantics of the annotations.
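The abstract does not spell out how consistency is measured; one plausible reading is that a user's categorical annotations (e.g. location labels) are compared against a reference stream, and the observed agreement rate is contrasted with what uniformly random labelling would achieve. A minimal sketch under that assumption (all names and example labels are hypothetical, not taken from the paper):

```python
import random


def consistency_rate(annotations, reference):
    """Fraction of annotations that agree with a reference stream."""
    matches = sum(a == r for a, r in zip(annotations, reference))
    return matches / len(annotations)


def random_baseline(reference, labels, trials=1000, seed=0):
    """Mean consistency of uniformly random annotations over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        rand = [rng.choice(labels) for _ in reference]
        total += consistency_rate(rand, reference)
    return total / trials


# Hypothetical example: location labels from the user vs. a reference
labels = ["home", "university", "library", "other"]
user = ["home", "university", "university", "library", "home"]
sensor = ["home", "university", "other", "library", "home"]

print(consistency_rate(user, sensor))   # 0.8
print(random_baseline(sensor, labels))  # close to 1/4 for four labels
```

With four equally likely labels, the random baseline sits near 0.25, so an observed rate like 0.8 would count as "highly consistent with respect to the random baseline" in the abstract's sense; exploiting annotation semantics (e.g. treating "university" and "library" as compatible study locations) could only raise the agreement further.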