Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services: Latest Publications

Supporting out of office software development using personal devices
Maria Husmann, Alfonso Murolo, Nicolas Kick, L. Geronimo, M. Norrie
DOI: 10.1145/3229434.3229454
Abstract: Software developers typically use multiple large screens in their office setup. However, they often work away from the office, where such a setup is not available, instead working only with a laptop computer and drastically reduced screen real estate. We explore how developers can be better supported in ad-hoc scenarios, for example when they work in a cafe, an airport, or at a client's site. We present insights into current work practices and challenges when working away from the usual office desk, sourced from a survey of professional software developers. Based on these insights, we introduce an IDE that makes use of additional personal devices, such as a phone or a tablet. Parts of the IDE can be offloaded to these mobile devices, for example the application that is being developed, a debugging console, or navigational elements. A qualitative evaluation with professional software developers showed that they appreciate the increased screen real estate.
Citations: 5
Good vibrations: can a digital nudge reduce digital overload?
F. Okeke, Michael Sobolev, Nicola Dell, D. Estrin
DOI: 10.1145/3229434.3229463
Abstract: Digital overuse on mobile devices is a growing problem in everyday life. This paper describes a generalizable mobile intervention that combines nudge theory and negative reinforcement to create a subtle, repeating phone vibration that nudges a user to reduce their digital consumption. For example, if a user has a daily Facebook limit of 30 minutes but opens Facebook past this limit, the user's phone will issue gentle vibrations every five seconds, but the vibration stops once the user navigates away from Facebook. We evaluated the intervention through a three-week controlled experiment with 50 participants on Amazon's Mechanical Turk platform, with findings that show daily digital consumption was successfully reduced by over 20%. Although the reduction did not persist after the intervention was removed, insights from qualitative feedback suggest that the intervention made participants more aware of their app usage habits. We discuss design implications of episodically applying our intervention in specific everyday contexts such as education, sleep, and work. Taken together, our findings advance the HCI community's understanding of how to curb digital overload.
Citations: 87
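The intervention's core mechanic, as described in the abstract, is simple enough to sketch: once the daily limit for an app is exceeded, vibrate every five seconds until the user leaves the app. A minimal Python model of that schedule (names, types, and the simulation-style API are hypothetical; the paper's actual Android implementation is not shown in the abstract):

```python
from dataclasses import dataclass

# Per the abstract: gentle vibrations repeat every five seconds past the limit.
VIBRATION_INTERVAL_S = 5

@dataclass
class AppUsage:
    app: str
    daily_limit_s: int   # e.g. 30 minutes (1800 s) for Facebook
    used_today_s: int    # seconds already accumulated today

def nudge_ticks(usage: AppUsage, session_s: int) -> list[int]:
    """Return the in-session timestamps (seconds) at which the phone would
    vibrate during a session of length session_s. Vibrations begin once the
    daily limit is exceeded, repeat every VIBRATION_INTERVAL_S seconds, and
    stop when the session ends (the user navigates away)."""
    remaining = max(0, usage.daily_limit_s - usage.used_today_s)
    if remaining >= session_s:
        return []  # limit never reached this session: no nudges
    first = remaining + VIBRATION_INTERVAL_S
    return list(range(first, session_s + 1, VIBRATION_INTERVAL_S))
```

For example, with 1795 s of a 1800 s limit already used, a 20-second session crosses the limit after 5 s and triggers nudges at 10 s, 15 s, and 20 s.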
$Q
Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock
DOI: 10.1145/3229434.3229465
Abstract: We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142X faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P's computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49X speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P's computations (a 142X speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the "$-family" of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
Citations: 36
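The $-family recognizers that $Q builds on represent a stroke gesture as an unordered point cloud and classify by a greedy cloud-matching distance to stored templates. A rough Python sketch of that shared core, the resampling step and the $P-style greedy matching (this illustrates the predecessor's idea only, not $Q's lower-bounding and early-rejection optimizations, and omits the scale/translation normalization a real recognizer needs):

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points ($-family preprocessing)."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    interval = total / (n - 1)
    out, acc, pts, i = [points[0]], 0.0, list(points), 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and acc + seg >= interval:
            t = (interval - acc) / seg  # interpolate a new point on this segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += seg
        i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def cloud_distance(a, b):
    """Greedy point-cloud matching distance (one matching direction, as in $P):
    each point in a claims its nearest unmatched point in b, with earlier
    matches weighted more heavily."""
    matched = [False] * len(b)
    total = 0.0
    for i, p in enumerate(a):
        best, best_j = float("inf"), -1
        for j, q in enumerate(b):
            if not matched[j]:
                d = math.dist(p, q)
                if d < best:
                    best, best_j = d, j
        matched[best_j] = True
        total += (1 - i / len(a)) * best
    return total
```

Classification then picks the template with the smallest distance; $Q's speed-ups come from reorganizing exactly this matching loop so that most templates are rejected early.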
YouTube needs: understanding user's motivations to watch videos on mobile devices
Rodrigo de Oliveira, Christopher Pentoney, Mika Pritchard-Berman
DOI: 10.1145/3229434.3229448
Abstract: Watching online videos on mobile devices has become a pervasive part of people's daily activities. However, different users can watch the same video for different purposes, and hence develop different expectations for their experience. Understanding people's motivations for watching videos on mobile can help address this problem by giving designers the information needed to craft a watching journey that better adapts to users' expectations. To obtain this understanding, a comprehensive framework of viewer motivations is necessary. We present research that provides several contributions to understanding mobile video watchers: a thorough framework of user motivations to watch videos on mobile devices, a detailed procedure for collecting and categorizing these motivations, a set of challenges that viewers experience in addressing each motivation, insights on usage of mobile and non-mobile devices, and design recommendations for video sharing systems.
Citations: 5
SpokeIt
Jared Duval, Zachary Rubin, Elena Márquez Segura, Natalie Friedman, M. Zlatanov, Louise Yang, Sri Kurniawan
DOI: 10.1145/3229434.3229484
Abstract: SpokeIt is a mobile serious game for health designed to support speech articulation therapy. Here, we present SpokeIt as well as two preceding speech therapy prototypes we built, all of which use a novel offline critical speech recognition system capable of providing feedback in real-time. We detail key design motivations behind each of them and report on their potential to help adults with speech impairment co-occurring with developmental disabilities. We conducted a qualitative within-subject comparative study with five adults within this target group, who played all three prototypes. This study yielded refined functional requirements based on user feedback, relevant reward systems to implement based on user interest, and insights on the preferred hybrid game structure, which can be useful to others designing mobile games for speech articulation therapy for a similar target group.
Citations: 3
Brassau: automatic generation of graphical user interfaces for virtual assistants
Michael H. Fischer, Giovanni Campagna, Silei Xu, M. Lam
DOI: 10.1145/3229434.3229481
Abstract: This paper presents Brassau, a graphical virtual assistant that converts natural language commands into GUIs. A virtual assistant with a GUI has the following benefits compared to text- or speech-based virtual assistants: users can monitor multiple queries simultaneously, it is easy to re-run complex commands, and users can adjust settings using multiple modes of interaction. Brassau introduces a novel template-based approach that leverages a large corpus of images to make GUIs visually diverse and interesting. Brassau matches a command from the user to an image to create a GUI. This approach decouples the commands from GUIs and allows for reuse of GUIs across multiple commands. In our evaluation, users prefer the widgets produced by Brassau over plain GUIs.
Citations: 9
Designing finger orientation input for mobile touchscreens
Sven Mayer, Huy Viet Le, N. Henze
DOI: 10.1145/3229434.3229444
Abstract: A large number of today's systems use interactive touch surfaces as the main input channel. Current devices reduce the richness of touch input to two-dimensional positions on the screen. A growing body of work develops methods that enrich touch input to provide additional degrees of freedom for touch interaction. In particular, previous work proposed to use the finger's orientation as additional input. To efficiently implement new input techniques which make use of the new input dimensions, we need to understand the limitations of the input. Therefore, we conducted a study to derive the ergonomic constraints for using finger orientation as additional input in a two-handed smartphone scenario. We show that for both hands, the comfort and the non-comfort zone depend on how the user interacts with a touch surface. For two-handed smartphone scenarios, the range is 33.3% larger than for tabletop scenarios. We further show that the phone orientation correlates with the finger orientation. Finger orientations which are harder to perform result in phone orientations where the screen does not directly face the user.
Citations: 12
Mobile location-based games to support orientation & mobility training for visually impaired students
Georg Regal, Elke E. Mattheiss, D. Sellitsch, M. Tscheligi
DOI: 10.1145/3229434.3229472
Abstract: Orientation and Mobility (O&M) training is an important aspect of the education of visually impaired students. In this work we present a scavenger hunt-like location-based game to support O&M training. In two comparative studies with blind and partially sighted students, and in interviews with teachers, we investigate whether a mobile game played in the real world is a suitable approach to support O&M training and whether a mobile location-based O&M training game is preferred over a game played in a virtual world. Our results show that a mobile location-based game is a fruitful approach to support O&M training for visually impaired students, and that it is preferred over playing in a virtual world. Based on the gathered insights, we discuss implications for using mobile location-based games in O&M training.
Citations: 11
PinchMove
Yun Suen Pai, Zikun Chen, Liwei Chan, M. Isogai, Hideaki Kimata, Kai Kunze
DOI: 10.1145/3229434.3229470
Abstract: Navigation and mobility mechanics for virtual environments aim to be realistic or fun, but rarely prioritize the accuracy of movement. We propose PinchMove, a highly accurate navigation mechanic utilizing pinch gestures and manipulation of the viewport for confined environments where accurate movement is preferred. We ran a pilot study to first determine the degree of simulator sickness caused by this mechanic, and a comprehensive user study to evaluate its accuracy in a virtual environment. We found that utilizing an 80° tunneling effect at a maximum speed of 15.18° per second was suitable for PinchMove in reducing motion sickness. We also found our system to be, on average, more accurate in enclosed virtual environments than conventional methods. This paper makes the following three contributions: 1) we propose a navigation solution for accurate movement in near-field virtual environments, 2) we determine the appropriate tunneling effect for our method to minimize motion sickness, and 3) we validate our proposed solution by comparing it with conventional navigation solutions in terms of accuracy of movement. We also propose several use-case scenarios where accuracy in movement is desirable and further discuss the effectiveness of PinchMove.
Citations: 5
Typical phone use habits: intense use does not predict negative well-being
Kleomenis Katevas, I. Arapakis, M. Pielot
DOI: 10.1145/3229434.3229441
Abstract: Not all smartphone owners use their device in the same way. In this work, we uncover broad, latent patterns of mobile phone use behavior. We conducted a study where, via a dedicated logging app, we collected daily mobile phone activity data from a sample of 340 participants for a period of four weeks. Through an unsupervised learning approach and a methodologically rigorous analysis, we reveal five generic phone use profiles which each describe at least 10% of the participants: limited use, business use, power use, and personality- & externally induced problematic use. We provide evidence that intense mobile phone use alone does not predict negative well-being. Instead, our approach automatically revealed two groups with tendencies for lower well-being, which are characterized by nightly phone use sessions.
Citations: 29
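The profile discovery described in this abstract is an unsupervised clustering of per-user usage features. A minimal sketch of the general idea using plain k-means on two made-up features per user (daily screen time in hours, share of sessions at night); the paper's actual feature set, algorithm, and validation are considerably richer:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, recompute
    centroids as cluster means, and repeat for a fixed number of iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        centroids = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters
```

Each resulting cluster would correspond to one candidate "profile" (e.g. a cluster with high night-time share suggesting the nightly-use groups the authors associate with lower well-being), to be interpreted and validated by the analyst.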