{"title":"Click-Search: Supporting information search with crowd-powered image-to-keyword query formulation","authors":"Ping-Jing Yang , Hao-Chuan Wang , Yu-Hsuan Liu","doi":"10.1016/j.jvlc.2016.09.002","DOIUrl":null,"url":null,"abstract":"<div><p>Information search is a common yet important task in everyday work and life. It remains challenging how to help users search for information or things they don’t know how to express with words. Also, even when people know how to express, the cognitive cost required to retrieve the concepts and formulate the queries can be excessive. In this paper, we present Click-Search, a search user interface that allows people to indicate their search intents by merely selecting and cropping segments of pictures. The system automatically converts cropped image segments to keywords based on known semantic labels at the level of image pixels that are generated by crowdsourced image tagging. Through a user study, we showed that Click-Search could support object-finding activities effectively with a satisfactory user experience.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"46 ","pages":"Pages 12-19"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2016.09.002","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Languages and Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1045926X16300702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 1
Abstract
Information search is a common yet important task in everyday work and life. It remains challenging to help users search for information or objects they do not know how to express in words. Even when people do know how to express their needs, the cognitive cost of retrieving the relevant concepts and formulating queries can be excessive. In this paper, we present Click-Search, a search user interface that allows people to indicate their search intent simply by selecting and cropping segments of pictures. The system automatically converts the cropped image segments into keywords using pixel-level semantic labels generated by crowdsourced image tagging. Through a user study, we showed that Click-Search supports object-finding activities effectively with a satisfactory user experience.
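As a rough illustration of the image-to-keyword step described in the abstract, the minimal sketch below (not from the paper; function names, parameters, and data are hypothetical) assumes each pixel already carries a crowd-generated semantic label, tallies the labels that fall inside a user-selected crop, and returns the most frequent ones as query keywords.

```python
import numpy as np
from collections import Counter

def crop_to_keywords(label_map, label_names, crop_box, top_k=3):
    """Convert a cropped image region into keyword queries.

    Assumptions (illustrative, not from the paper):
    - label_map: 2-D array (H x W) of per-pixel label indices from
      crowdsourced tagging
    - label_names: list mapping a label index to a keyword string
    - crop_box: (x0, y0, x1, y1) region selected by the user
    """
    x0, y0, x1, y1 = crop_box
    region = label_map[y0:y1, x0:x1]           # pixel labels inside the crop
    counts = Counter(region.ravel().tolist())  # frequency of each label
    # The most common labels in the crop become the search keywords.
    return [label_names[idx] for idx, _ in counts.most_common(top_k)]

# Example: a 4x4 label map where 0="sky", 1="bicycle", 2="tree"
label_map = np.array([
    [0, 0, 2, 2],
    [0, 1, 1, 2],
    [0, 1, 1, 2],
    [0, 0, 2, 2],
])
label_names = ["sky", "bicycle", "tree"]
print(crop_to_keywords(label_map, label_names, crop_box=(1, 1, 3, 3)))
# -> ['bicycle']
```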
Journal Introduction:
The Journal of Visual Languages and Computing is a forum for researchers, practitioners, and developers to exchange ideas and results that advance visual languages and their implications for the art of computing. The journal publishes research papers, state-of-the-art surveys, and review articles covering all aspects of visual languages.