Efficiently Crowdsourcing Visual Importance with Punch-Hole Annotation

Minsuk Chang, Soohyun Lee, Aeri Cho, Hyeon Jeon, Seokhyeon Park, Cindy Xiong Bearfield, Jinwook Seo
arXiv - CS - Human-Computer Interaction · Published 2024-09-16 · DOI: arxiv-2409.10459 (https://doi.org/arxiv-2409.10459)
Citations: 0

Abstract

We introduce a novel crowdsourcing method for identifying important areas in graphical images through punch-hole labeling. Traditional methods, such as gaze trackers and mouse-based annotations, generate continuous data and can be impractical in crowdsourcing scenarios: they require many participants, and the outcome data can be noisy. In contrast, our method first segments the graphical image with a grid and drops a portion of the patches (punch holes). Then, we iteratively ask the labeler to validate each annotation with holes, narrowing down the annotation to only the most important areas. This approach aims to reduce annotation noise in crowdsourcing by standardizing the annotations while enhancing labeling efficiency and reliability. Preliminary findings on fundamental chart types demonstrate that punch-hole labeling can effectively pinpoint critical regions. This also highlights its potential for broader application in visualization research, particularly in studying large-scale users' graphical perception. Our future work aims to enhance the algorithm to achieve faster labeling speed and prove its utility through large-scale experiments.
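The core loop described in the abstract — segment the image with a grid, punch out patches, and keep only patches whose removal the labeler rejects — can be sketched as follows. This is a simplified illustration, not the paper's actual algorithm: the names (`punch_hole_labeling`, `labeler`) are hypothetical, a single oracle stands in for crowd workers, and holes are punched one patch at a time rather than a portion of patches per round.

```python
def punch_hole_labeling(patches, labeler):
    """Iteratively punch a hole (drop one patch) and ask the labeler
    whether the annotation still covers the important area. A patch
    whose removal is accepted is unimportant and stays dropped."""
    candidates = set(patches)
    for patch in sorted(patches):
        trial = candidates - {patch}   # punch a hole at this patch
        if labeler(trial):             # annotation still valid without it?
            candidates = trial         # then the patch is unimportant
    return candidates

if __name__ == "__main__":
    # A 4x4 grid over the image; an ideal labeler only accepts
    # annotations that still cover the (hypothetical) important area.
    grid = [(row, col) for row in range(4) for col in range(4)]
    important = {(1, 1), (1, 2)}
    oracle = lambda annotation: important <= annotation
    print(punch_hole_labeling(grid, oracle))  # only the important patches remain
```

Under this oracle assumption the loop converges to exactly the important patches; the crowdsourced setting would replace the oracle with aggregated worker validations, trading determinism for scale.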