Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services: Latest Publications

Mobile context-aware cognitive testing system
Sean-Ryan Smith
DOI: https://doi.org/10.1145/3098279.3119926 | Published: 2017-09-04
Abstract: Traditional cognitive testing for older adults can be inaccessible, expensive, and time consuming. The development of computerized cognitive tests (CCTs) has made strides to alleviate such issues with traditional cognitive testing. Self-administered CCTs allow individuals to test rapidly and conveniently on various devices. However, such tests may not factor in relevant contextual information pertinent to the testing situation (e.g., is the user in a proper environment or context to test?). This dissertation aims to develop a mobile, context-aware cognitive testing system (CACTS) capable of tracking and analyzing contextual information during CCTs. By utilizing mobile device sensors and user input, the proposed context-aware system will capture ambient and behavioral data during testing to complement user performance results. This research will help provide insight into the contextual factors that are relevant to the user's testing efficacy and performance in CCTs.
Citations: 1
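The abstract above describes capturing ambient and behavioral sensor data alongside test performance. As an illustration only (the paper does not specify a data model, and the field names and thresholds here are hypothetical), a minimal Python sketch of how a test session record might bundle a CCT score with context readings:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextSample:
    """One hypothetical ambient reading taken during a test session."""
    timestamp: datetime
    ambient_light_lux: float   # e.g., from the phone's light sensor
    noise_level_db: float      # e.g., estimated from the microphone
    is_moving: bool            # e.g., derived from accelerometer activity

@dataclass
class TestSession:
    """A cognitive test result annotated with the context it was taken in."""
    test_name: str
    score: float
    samples: list[ContextSample] = field(default_factory=list)

    def flag_poor_context(self, max_noise_db: float = 60.0) -> bool:
        """Flag the session if any reading suggests an unsuitable environment."""
        return any(s.noise_level_db > max_noise_db or s.is_moving
                   for s in self.samples)

# Example: a short session with one noisy reading gets flagged.
session = TestSession("symbol-matching", score=0.82, samples=[
    ContextSample(datetime.now(), ambient_light_lux=300.0,
                  noise_level_db=72.0, is_moving=False),
])
print(session.flag_poor_context())  # True
```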
A design space for conversational in-vehicle information systems
Michael Braun, N. Broy, Bastian Pfleging, Florian Alt
DOI: https://doi.org/10.1145/3098279.3122122 | Published: 2017-09-04
Abstract: In this paper we chart a design space for conversational in-vehicle information systems (IVIS). Our work is motivated by the proliferation of speech interfaces in our everyday life, which have already found their way into consumer electronics and will most likely become pervasive in future cars. Our design space is based on expert interviews as well as a comprehensive literature review. We present five core dimensions (assistant, position, dialog design, system capabilities, and driver state) and show in an initial study how these dimensions affect the design of a prototypical IVIS. Design spaces have paved the way for much of the work done in HCI, including but not limited to areas such as input and pointing devices, smartphones, displays, and automotive UIs. In a similar way, we expect our design space to aid practitioners in designing future IVIS, but also researchers as they explore this young area of research.
Citations: 15
Text entry tap accuracy and exploration of tilt controlled layered interaction on Smartwatches
Mark D. Dunlop, M. Roper, G. Imperatore
DOI: https://doi.org/10.1145/3098279.3098560 | Published: 2017-09-04
Abstract: Design of text entry on small-screen devices, e.g. smartwatches, faces two related challenges: trading off a reasonably sized keyboard area against space to display the entered text, and the concern over "fat fingers". This paper investigates tap accuracy and revisits layered interfaces to explore a novel layered text entry method. A two-part user study identifies preferred typing and reading tilt angles and then investigates variants of a tilting layered keyboard against a standard layout. We show good typing speed (29 wpm) and very high accuracy on the standard layout, contradicting fears of fat fingers limiting watch text entry. User feedback is positive towards tilting interaction and we identify ∼14° tilt as a comfortable typing angle. However, layering resulted in slightly slower and more erroneous entry. The paper contributes new data on tilt angles and key offsets for smartwatch text entry and supporting evidence for the suitability of QWERTY on smartwatches.
Citations: 9
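The entry above reports ∼14° as a comfortable typing tilt and studies tilt-controlled layers. As a rough illustration (the layer scheme, band width, and layer names below are assumptions, not the paper's implementation), a Python sketch of selecting a keyboard layer from the wrist pitch reported by a smartwatch's motion sensors:

```python
def select_keyboard_layer(pitch_deg: float,
                          typing_angle_deg: float = 14.0,
                          band_deg: float = 10.0) -> str:
    """Pick a keyboard layer from wrist pitch.

    Near the comfortable typing angle (assumed ~14 degrees, following the
    preference reported in the study) the base letter layer is shown;
    tilting further in either direction switches to hypothetical
    number/symbol layers.
    """
    if pitch_deg < typing_angle_deg - band_deg:
        return "numbers"   # tilted well below the comfortable angle
    if pitch_deg > typing_angle_deg + band_deg:
        return "symbols"   # tilted well above the comfortable angle
    return "letters"       # near the comfortable typing angle

# Example: pitch readings swept from flat to steep.
for pitch in (0.0, 14.0, 30.0):
    print(pitch, "->", select_keyboard_layer(pitch))
```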
Customizable automatic detection of bad usability smells in mobile accessed web applications
F. Paternò, Antonio Giovanni Schiavone, A. Conte
DOI: https://doi.org/10.1145/3098279.3098558 | Published: 2017-09-04
Abstract: Remote usability evaluation makes it possible to analyse users' behaviour in their daily settings. We present a method and an associated tool able to identify potential usability issues through the analysis of client-side logs of mobile Web interactions. Such log analysis is based on the identification of specific usability smells. We describe an example set of bad usability smells and how they are detected. The tool also allows evaluators to add new usability smells not included in the original set. We also report on the tool's use in analysing the usability of a real, widely used application accessed by forty people through their smartphones whenever and wherever they wanted.
Citations: 29
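The entry above detects usability smells from client-side logs of mobile web interactions. The concrete smell and log format below are illustrative assumptions rather than the paper's own smell set; the sketch flags repeated rapid taps on the same element, a common symptom of an unresponsive or too-small target:

```python
from collections import defaultdict

def detect_rage_taps(events, window_ms=1000, min_taps=3):
    """Return element ids that received `min_taps` or more taps within `window_ms`.

    `events` is a list of (timestamp_ms, event_type, element_id) tuples,
    a hypothetical simplification of a client-side interaction log.
    """
    taps_by_element = defaultdict(list)
    for ts, etype, elem in events:
        if etype == "tap":
            taps_by_element[elem].append(ts)

    smelly = set()
    for elem, stamps in taps_by_element.items():
        stamps.sort()
        # Slide a window of `min_taps` consecutive taps over the timestamps.
        for i in range(len(stamps) - min_taps + 1):
            if stamps[i + min_taps - 1] - stamps[i] <= window_ms:
                smelly.add(elem)
                break
    return smelly

log = [(100, "tap", "#submit"), (350, "tap", "#submit"),
       (600, "tap", "#submit"), (5000, "tap", "#menu")]
print(detect_rage_taps(log))  # {'#submit'}
```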
Leveraging user-made predictions to help understand personal behavior patterns
Miriam Greis, Tilman Dingler, A. Schmidt, C. Schmandt
DOI: https://doi.org/10.1145/3098279.3122147 | Published: 2017-09-04
Abstract: People use more and more applications and devices that quantify daily behavior, such as step count or phone usage. Purely presenting the collected data does not necessarily support users in understanding their behavior. Recent research has proposed concepts such as learning by reflection to foster behavior change based on personal data. In this paper, we introduce user-made predictions to help users understand personal behavior patterns. To this end, we developed an Android application that tracks users' screen-on and unlock patterns on their phone. The application asks users to predict their daily behavior based on their former usage data. In a user study with 12 participants, we showed the feasibility of leveraging user-made predictions in a quantified-self approach. By trying to improve their predictions over the course of the study, participants automatically discovered new insights into personal behavior patterns.
Citations: 3
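The entry above has users predict their daily phone usage and then confronts them with the logged values. As a minimal sketch (the scoring formula and feedback wording are assumptions, not the study's app), comparing a predicted unlock count with the logged count and producing simple feedback:

```python
def prediction_feedback(predicted_unlocks: int, actual_unlocks: int) -> str:
    """Compare a user-made prediction against logged behaviour and
    return a short feedback string (illustrative wording only)."""
    error = actual_unlocks - predicted_unlocks
    if actual_unlocks == 0:
        accuracy = 1.0 if predicted_unlocks == 0 else 0.0
    else:
        # Relative accuracy, clamped at zero for wildly wrong predictions.
        accuracy = max(0.0, 1.0 - abs(error) / actual_unlocks)
    if error == 0:
        return "Spot on: you unlocked your phone exactly as often as predicted."
    direction = "over" if error < 0 else "under"
    return (f"You {direction}estimated by {abs(error)} unlocks "
            f"(prediction accuracy {accuracy:.0%}).")

# Example: the user guessed 40 unlocks but the log shows 55.
print(prediction_feedback(predicted_unlocks=40, actual_unlocks=55))
```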
Exploring the feasibility of subliminal priming on smartphones
C. Pinder, Jo Vermeulen, Benjamin R. Cowan, R. Beale, R. Hendley
DOI: https://doi.org/10.1145/3098279.3098531 | Published: 2017-09-04
Abstract: Subliminal priming has the potential to influence people's attitudes and behaviour, making them prefer certain choices over others. Yet little research has explored its feasibility on smartphones, even though the global popularity and increasing use of smartphones has spurred interest in mobile behaviour change interventions. This paper addresses technical, ethical and design issues in delivering mobile subliminal priming. We present three explorations of the technique: a technical feasibility study and two participant studies. A pilot study (n=34) explored subliminal goal priming in the wild over one week, while a semi-controlled study (n=101) explored the immediate effect of subliminal priming on three different types of stimuli. We found that although subliminal priming is technically possible on smartphones, there is limited evidence that it changes how much users prefer the primed stimuli, with inconsistent effects across stimulus types. We discuss the implications of our results and directions for future research.
Citations: 8
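Part of the technical feasibility question raised in the entry above is whether a phone display can present a stimulus briefly enough to stay below conscious perception. The refresh rates below are generic assumptions, not figures from the paper; the sketch simply computes the shortest presentation a display can physically achieve:

```python
def shortest_stimulus_ms(refresh_rate_hz: float, frames: int = 1) -> float:
    """Shortest presentation time achievable on a display: a whole number
    of frames, each lasting 1000 / refresh_rate_hz milliseconds."""
    return frames * 1000.0 / refresh_rate_hz

# A single frame on a 60 Hz phone display lasts ~16.7 ms; on a 120 Hz
# display ~8.3 ms. Whether either is short enough to remain subliminal
# depends on the stimulus and masking, which is what the paper studies.
for hz in (60, 120):
    print(f"{hz} Hz: {shortest_stimulus_ms(hz):.1f} ms per frame")
```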
Feasibility analysis of detecting the finger orientation with depth cameras
Sven Mayer, Michael Mayer, N. Henze
DOI: https://doi.org/10.1145/3098279.3122125 | Published: 2017-09-04
Abstract: Over the last decade, a body of research has investigated enriching touch actions by using finger orientation as an additional input. Beyond new interaction techniques, we envision new user interface elements that make use of the additional input information. We define the finger's orientation by its pitch, roll, and yaw on the touch surface. Determining the finger orientation is not possible with current state-of-the-art devices. As a first step, we built a system that can determine the finger orientation. We developed a working prototype with a depth camera mounted on a tablet. We conducted a study with 12 participants to record ground-truth data for the index, middle, ring and little finger to evaluate the accuracy of our prototype, using the PointPose [3] algorithm to estimate the pitch and yaw of the finger. By applying 2D linear correction models, we further show a reduction of RMSE by 45.4% for pitch and 21.83% for yaw.
Citations: 13
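The entry above reports RMSE reductions from 2D linear correction models applied to PointPose pitch and yaw estimates. The sketch below shows one plausible form of such a correction, a least-squares fit of corrected pitch as a linear function of estimated pitch and yaw, run on synthetic stand-in data; the exact model and data in the paper may differ.

```python
import numpy as np

def fit_linear_correction(est_pitch, est_yaw, true_pitch):
    """Fit true_pitch ≈ a*est_pitch + b*est_yaw + c by least squares.

    One plausible form of a 2D linear correction model; inputs are 1-D arrays.
    """
    X = np.column_stack([est_pitch, est_yaw, np.ones_like(est_pitch)])
    coeffs, *_ = np.linalg.lstsq(X, true_pitch, rcond=None)
    return coeffs  # (a, b, c)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Synthetic data standing in for PointPose estimates vs. ground truth.
rng = np.random.default_rng(0)
true_pitch = rng.uniform(10, 60, 200)
true_yaw = rng.uniform(-45, 45, 200)
est_pitch = 0.8 * true_pitch + 0.1 * true_yaw + 5 + rng.normal(0, 2, 200)
est_yaw = true_yaw + rng.normal(0, 3, 200)

a, b, c = fit_linear_correction(est_pitch, est_yaw, true_pitch)
corrected = a * est_pitch + b * est_yaw + c

print("Pitch RMSE before correction:", round(rmse(est_pitch, true_pitch), 2))
print("Pitch RMSE after correction: ", round(rmse(corrected, true_pitch), 2))
```

The same fit can be repeated with yaw as the target to correct the yaw estimate.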
EXHI-bit: a mechanical structure for prototyping EXpandable handheld interfaces
Michaël Ortega, Jérôme Maisonnasse, L. Nigay
DOI: https://doi.org/10.1145/3098279.3098533 | Published: 2017-09-04
Abstract: We present EXHI-bit, a mechanical structure for prototyping unique shape-changing interfaces that can easily be built in a fabrication laboratory. EXHI-bit surfaces consist of interweaving units that slide in two dimensions. This assembly enables the creation of unique expandable handheld surfaces with continuous transitions while keeping the surface flat, rigid, and non-porous. EXHI-bit surfaces can be combined to create 2D and 3D multi-surface objects. In this paper, we demonstrate the versatility and generality of EXHI-bit with user-deformed and self-actuated 1D, 2D, and 3D prototypes employed in an architectural urban planning scenario. We also present visions of the use of expandable tablets in everyday life, gathered from 10 users after they interacted with an EXHI-bit tablet.
Citations: 9
Speech and Hands-free interaction: myths, challenges, and opportunities
Cosmin Munteanu, Gerald Penn
DOI: https://doi.org/10.1145/3098279.3119919 | Published: 2017-09-04
Abstract: HCI research has long been dedicated to facilitating information transfer between humans and machines in better and more natural ways. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand, despite, or perhaps because of, being the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics and the cognitive sciences, have been spent on improving machines' ability to understand speech, the MobileHCI community (and the HCI field at large) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics. Such developments have often been devoid of user-centered design principles or consideration for usability or usefulness. The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
Citations: 2
PeriMR: a prototyping tool for head-mounted peripheral light displays in mixed reality
Uwe Gruenefeld, Tim Claudius Stratmann, Wilko Heuten, Susanne CJ Boll
DOI: https://doi.org/10.1145/3098279.3125439 | Published: 2017-09-04
Abstract: Today's Mixed and Virtual Reality devices suffer from a field of view that is small compared to human visual perception. Although a larger field of view is useful (e.g., for conveying peripheral information or improving situation awareness), technical limitations prevent extending the field of view. A way to overcome these limitations is to extend the field of view with peripheral light displays. However, there are no tools to support the design of peripheral light displays for Mixed or Virtual Reality devices. Therefore, we present our prototyping tool PeriMR, which allows researchers to develop new peripheral head-mounted light displays for Mixed and Virtual Reality.
Citations: 9