{"title":"MouseHints: easing task switching in parallel browsing","authors":"Luis A. Leiva","doi":"10.1145/1979742.1979861","DOIUrl":"https://doi.org/10.1145/1979742.1979861","url":null,"abstract":"We present a technique to help users regain context either after an interruption or when multitasking while performing web tasks. Using mouse movements as an indicator of attention, a browser plugin records the user's interactions in the background (including clicks, dwell times, and DOM elements). On leaving the page, this information is stored to be rendered as an overlay when the user returns to that page. The results of a short study showed that participants resumed tasks three times faster with MouseHints and completed their tasks in about half the time. Related applications and further research are also envisioned.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131528875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile phones and information capture in the workplace","authors":"Amrita Thakur, M. Gormish, B. Erol","doi":"10.1145/1979742.1979800","DOIUrl":"https://doi.org/10.1145/1979742.1979800","url":null,"abstract":"Smartphones (mobile phones with downloadable applications) are being used for far more than making calls and reading email. We investigated the use of phones for information capture for work purposes through interviews, multiple free response surveys, and two multiple choice surveys. While we expected and found taking pictures to be useful for work, we were surprised at the extent of audio, video, and note taking done on the phone, and the impact on other devices. Our work also suggests future mobile information capture for work will increase more due to cultural changes than technological improvements.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131568120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shepherding the crowd: managing and providing feedback to crowd workers","authors":"Steven W. Dow, A. Kulkarni, Brie Bunge, Truc Nguyen, Scott R. Klemmer, Bjoern Hartmann","doi":"10.1145/1979742.1979826","DOIUrl":"https://doi.org/10.1145/1979742.1979826","url":null,"abstract":"Micro-task platforms provide a marketplace for hiring people to do short-term work for small payments. Requesters often struggle to obtain high-quality results, especially on content-creation tasks, because work cannot be easily verified and workers can move to other tasks without consequence. Such platforms provide little opportunity for workers to reflect and improve their task performance. Timely and task-specific feedback can help crowd workers learn, persist, and produce better results. We analyze the design space for crowd feedback and introduce Shepherd, a prototype system for visualizing crowd work, providing feedback, and promoting workers into shepherding roles. This paper describes our current progress and our plans for system development and evaluation.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116871034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pupillary response based cognitive workload index under luminance and emotional changes","authors":"Jie Xu, Yang Wang, Fang Chen, Ho Choi, Guanzhong Li, Siyuan Chen, M. Hussain","doi":"10.1145/1979742.1979819","DOIUrl":"https://doi.org/10.1145/1979742.1979819","url":null,"abstract":"Pupillary response has been widely accepted as a physiological index of cognitive workload. It can be reliably measured with video-based eye trackers in a non-intrusive way. However, in practice commonly used measures such as pupil size or dilation might fail to evaluate cognitive workload due to various factors unrelated to workload, including luminance condition and emotional arousal. In this work, we investigate machine learning based feature extraction techniques that can both robustly index cognitive workload and adaptively handle changes of pupillary response caused by confounding factors unrelated to workload.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133988816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond pointing and clicking: how do newer interaction modalities affect user engagement?","authors":"S. Shyam Sundar, Qian Xu, Saraswathi Bellur, Jeeyun Oh, Haiyan Jia","doi":"10.1145/1979742.1979794","DOIUrl":"https://doi.org/10.1145/1979742.1979794","url":null,"abstract":"Modern interfaces offer users a wider range of interaction modalities beyond pointing and clicking, such as dragging, sliding, zooming, and flipping through images. But do they offer any distinct psychological advantages? We address this question with an experiment (N = 128) testing the relative contributions made by six interaction modalities (zoom-in/out, drag, slide, mouse-over, cover-flow, and click-to-download) to user engagement with identical content. Data suggest that slide is better at aiding memory than the other modalities, whereas cover-flow and mouse-over generate more user actions. Mouse-over, click-to-download, and zoom-in/out tend to foster more favorable attitudes among power users, whereas cover-flow and slide generate more positive attitudes among non-power users. Design implications are discussed.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134329450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CapWidgets: tangible widgets versus multi-touch controls on mobile devices","authors":"Sven G. Kratz, T. Westermann, M. Rohs, Georg Essl","doi":"10.1145/1979742.1979773","DOIUrl":"https://doi.org/10.1145/1979742.1979773","url":null,"abstract":"We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction against direct multi-touch interaction, and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets; careful design is necessary to validate their properties.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"291 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134429056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gamification: using game-design elements in non-gaming contexts","authors":"Sebastian Deterding, M. Sicart, L. Nacke, Kenton O'hara, Dan Dixon","doi":"10.1145/1979742.1979575","DOIUrl":"https://doi.org/10.1145/1979742.1979575","url":null,"abstract":"\"Gamification\" is an informal umbrella term for the use of video game elements in non-gaming systems to improve user experience (UX) and user engagement. The recent introduction of 'gamified' applications to large audiences promises new additions to the existing rich and diverse research on the heuristics, design patterns and dynamics of games and the positive UX they provide. However, what is lacking for a next step forward is the integration of this diversity of research endeavors. Therefore, this workshop brings together practitioners and researchers to develop a shared understanding of existing approaches and findings around the gamification of information systems, and to identify key synergies, opportunities, and questions for future research.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134510144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TimeCapsule: connecting past","authors":"Yikun Liu, H. Huang","doi":"10.1145/1979742.1979505","DOIUrl":"https://doi.org/10.1145/1979742.1979505","url":null,"abstract":"Our world is changing at an ever-growing rate. The tide of urbanization and globalization has resulted in population migration that consequentially separates people from what is familiar to them. To address this issue, we propose TimeCapsule, a social networking community intended to preserve, organize, share, and utilize personal and collective memories through members of the community contributing location-related digitized materials. Two clients will be designed to meet two kinds of usage: mobile and desktop. The mobile application will provide real-time fusion of old and new street views to help users appreciate how a location has changed. The desktop client will help users organize and share personal and group memories. Special consideration for seniors will be addressed. By utilizing a connection to our past, we hope this initiative will help us better appreciate the disparity between cultures and generations, thus unifying us.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114634870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DARLS: differencing and merging diagrams using dual view, animation, re-layout, layers and a storyboard","authors":"Loutfouz Zaman, Ashish Kalra, Wolfgang Stuerzlinger","doi":"10.1145/1979742.1979824","DOIUrl":"https://doi.org/10.1145/1979742.1979824","url":null,"abstract":"We present a new system for visualizing and merging differences in diagrams. It uses animation, dual views, a storyboard, relative re-layout, and layering to visualize differences. The system is also capable of differencing UML class diagrams. An evaluation produced positive results for animation and dual views with difference layer.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115901423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using gaze patterns to study and predict reading struggles due to distraction","authors":"Vidhya Navalpakkam, Justin M. Rao, M. Slaney","doi":"10.1145/1979742.1979832","DOIUrl":"https://doi.org/10.1145/1979742.1979832","url":null,"abstract":"We analyze gaze patterns to study how users in online reading environments cope with visual distraction, and we report gaze markers that identify reading difficulties due to distraction. The amount of visual distraction is varied from none to medium to high by presenting irrelevant graphics beside the reading content in one of three conditions: no graphic, static graphics, or animated graphics. We find that under highly distracting conditions, a struggling reader puts more effort into the text -- she takes a longer time to comprehend the text, performs more fixations on the text, and frequently revisits previously read content. Furthermore, she reports an unpleasant reading experience. Interestingly, we find that whether the user is distracted and struggles or not can be predicted from gaze patterns alone with up to 80% accuracy, up to 15% better than with non-gaze-based features. This suggests that gaze patterns can be used to detect key events such as user struggle/frustration while reading.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115319290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}