Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems: Latest Publications

Children in 2077: Designing Children's Technologies in the Age of Transhumanism
O. Buruk, Oğuzhan Özcan, Gökçe Elif Baykal, T. Göksun, Selcuk Acar, Guler Akduman, Mehmet Baytaş, Ceylan Beşevli, Joe Best, A. Coşkun, Hüseyin Uğur Genç, A. B. Kocaballi, Samuli Laato, Cassia Mota, Konstantinos Papangelis, Marigo Raftopoulos, Richard Ramchurn, Juan Sádaba, Mattia Thibault, A. Wolff, Mert Yildiz
DOI: 10.1145/3334480.3381821 (published 2020-04-25)
Abstract: What will we design children's technologies for in the age of transhumanism, how will we design them, and what stance will we take as designers? This paper aims to answer these questions with 13 fictional abstracts from sixteen authors from different countries, institutions, and disciplines. Transhumanist thinking envisions enhancing the human body and mind by blending human biology with technological augmentations. Fundamentally, it seeks to improve the human species, yet the impacts of such a movement are unknown, and its implications for children's lives and technologies have not been explored in depth. In an age where technologies such as under-skin chips or brain-machine interfaces can clearly be defined as transhumanist, our aim is to reveal the probable pitfalls and benefits of those technologies for children's lives by using the power of design fiction. Thus, the main contribution of this paper is a diverse presentation of provocative research ideas that will foster discussion of how transhumanist technologies may affect the lives of children in the future.
Citations: 15

Multimodal Interaction Design for Public Window Displays: A Case Study
X. Liu, Min-Zhi Shao
DOI: 10.1145/3334480.3382838 (published 2020-04-25)
Abstract: We present a case study of multimodal interaction design for public window displays. Using a classic fairytale as the theme story, we implemented a prototype system that integrates mobile, gesture, tangible, touchscreen, and puppet interfaces. Preliminary field deployment results show that our interactive window is well received, with significantly extended user interaction time. We conclude with a discussion of lessons learned and potential new research problems for interactive public window design. We believe our findings are useful for the future design of interactive shop windows, theater showcases, and exhibition displays.
Citations: 2

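The abstract above mentions integrating mobile, gesture, tangible, touchscreen, and puppet interfaces into one prototype, but does not describe the software architecture. A common way to integrate such heterogeneous inputs is to normalize every modality into a single event stream consumed by the story logic; the minimal Python sketch below illustrates that general pattern with hypothetical names, not the authors' implementation.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class InputEvent:
    """Hypothetical normalized event shared by all input modalities."""
    modality: str   # e.g. "mobile", "gesture", "tangible", "touch", "puppet"
    action: str     # e.g. "tap", "wave", "rotate"
    payload: dict   # modality-specific details

events: "Queue[InputEvent]" = Queue()

def on_touch(x: float, y: float) -> None:
    """Adapter for the touchscreen: wraps raw coordinates as a normalized event."""
    events.put(InputEvent("touch", "tap", {"x": x, "y": y}))

def on_gesture(name: str) -> None:
    """Adapter for a gesture recognizer."""
    events.put(InputEvent("gesture", name, {}))

def story_loop() -> None:
    """Single consumer: the fairytale logic reacts to whichever modality fired."""
    while not events.empty():
        event = events.get()
        if event.action in ("tap", "wave"):
            print(f"advance scene (triggered via {event.modality})")

on_touch(120.0, 80.0)
on_gesture("wave")
story_loop()
```
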
Empirical Investigation of Users' Preferred Timing Parameters for American Sign Language Animations
Sedeeq Al-khazraji, Becca Dingman, Matt Huenerfauth
DOI: 10.1145/3334480.3382989 (published 2020-04-25)
Abstract: To make it easier to add American Sign Language (ASL) to websites, which would increase information accessibility for many Deaf users, we investigate software that semi-automatically produces ASL animation from an easy-to-update script of the message, which requires automatically selecting the speed and timing of the animation. While we can model the speed and timing of human signers from video recordings, prior work has suggested that users prefer animations to be slower than videos of human signers. However, no prior study had systematically examined the multiple parameters of ASL timing, which include sign duration, transition time, pausing frequency, pausing duration, and differential signing rate. In an experimental study, 16 native ASL signers provided subjective preference judgements during a side-by-side comparison of ASL animations in which each of these five parameters was varied. We empirically identified and report users' preferences for each of these individual timing parameters of ASL animation.
Citations: 4

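The five timing parameters examined in the study above (sign duration, transition time, pausing frequency, pausing duration, and differential signing rate) map naturally onto a configuration object that an animation pipeline could expose for tuning. The sketch below is only illustrative: the field names, defaults, and the simple duration model are assumptions, not values or formulas reported by the paper.

```python
from dataclasses import dataclass

@dataclass
class ASLTimingConfig:
    """Hypothetical knobs corresponding to the five timing parameters varied in the study."""
    sign_duration_scale: float = 1.0    # stretch/compress each sign relative to human-signer timing
    transition_time_scale: float = 1.0  # scale of the time spent moving between consecutive signs
    pause_frequency: float = 0.1        # probability of inserting a pause after a sign
    pause_duration_s: float = 0.3       # length of each inserted pause, in seconds
    differential_rate: float = 1.0      # global rate multiplier standing in for "differential signing rate"

def scheduled_duration(base_sign_s: list[float], cfg: ASLTimingConfig) -> float:
    """Rough total playback time for a sequence of signs under a timing configuration."""
    total = 0.0
    for sign_s in base_sign_s:
        total += sign_s * cfg.sign_duration_scale
        total += 0.2 * cfg.transition_time_scale            # assume a 0.2 s base transition
        total += cfg.pause_frequency * cfg.pause_duration_s  # expected pause contribution
    return total / cfg.differential_rate

# Example: an overall rate of 0.8 lengthens total playback relative to the defaults.
print(scheduled_duration([0.5, 0.7, 0.6], ASLTimingConfig(differential_rate=0.8)))
```
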
Designing Automatic Speech Recognition Technologies to Improve Accessibility for Deaf and Hard-of-Hearing People in Small Group Meetings
Matthew Seita
DOI: 10.1145/3334480.3375039 (published 2020-04-25)
Abstract: Deaf and hard of hearing (DHH) individuals face several barriers to communication in the workplace, particularly in small-group meetings with their hearing peers. The impromptu nature of these meetings makes scheduling sign-language interpreting or professional captioning services difficult. Recent advances in automatic speech recognition (ASR) technology could help remove some of the barriers that prevent DHH people from becoming involved in group meetings. However, ASR is still imperfect, and its output text contains errors in many real-world conversation settings. My research proposes to investigate whether there are benefits to using ASR technology to aid understanding and communication among DHH and hearing individuals. My dissertation research will evaluate the effectiveness of using ASR in small group meetings (through empirical studies with DHH and hearing participants) and develop guidelines for system design that encourage hearing participants to communicate and speak more clearly.
Citations: 2

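The proposal above rests on the observation that ASR output still contains errors in real-world conversations. The abstract does not say how those errors will be quantified; word error rate (WER) is the standard measure, and a minimal implementation looks like this.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance (substitutions + insertions + deletions) over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substituted word out of five reference words -> WER = 0.2
print(word_error_rate("please join the meeting now", "please join a meeting now"))
```
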
Attention-Aware Brain Computer Interface to Avoid Distractions in Augmented Reality
Lisa-Marie Vortmann, F. Putze
DOI: 10.1145/3334480.3382889 (published 2020-04-25)
Abstract: Recently, the idea of using BCIs in augmented reality settings to operate systems has emerged. One problem with such head-mounted displays is the distraction caused by control elements that remain on display even when the user is focused on internal thoughts. In this project, we reduced this distraction by including information about the user's current attentional state. A multimodal smart-home environment was altered to adapt to the user's state of attention: the system responded only if the attentional orientation was classified as "external". The classification was based on multimodal EEG and eye-tracking data. Seven users tested the attention-aware system in comparison to the unaware system. We show that this adaptation of the interface improved the usability of the system. We conclude that more systems would benefit from awareness of the user's ongoing attentional state.
Citations: 26

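The system described above reacts only when the user's attentional orientation is classified as "external" from combined EEG and eye-tracking data. The paper's actual features and classifier are not reproduced here; the sketch below only illustrates the gating idea with a hypothetical stand-in classifier and made-up thresholds.

```python
from dataclasses import dataclass

@dataclass
class AttentionSample:
    eeg_features: list[float]   # e.g. band-power features from the EEG headset (assumed)
    gaze_dispersion: float      # e.g. spread of recent gaze points from the eye tracker (assumed)

def classify_attention(sample: AttentionSample) -> str:
    """Stand-in for the paper's multimodal classifier: returns 'external' or 'internal'."""
    # Toy rule: focused gaze plus high EEG activity is treated as externally oriented attention.
    if sample.gaze_dispersion < 0.5 and sum(sample.eeg_features) > 1.0:
        return "external"
    return "internal"

def handle_command(sample: AttentionSample, command: str) -> str:
    """Attention-aware gate: the smart-home interface reacts only to externally oriented attention."""
    if classify_attention(sample) == "external":
        return f"executing: {command}"
    return "suppressed (user attending internally; keep AR controls out of the way)"

print(handle_command(AttentionSample([0.7, 0.6], 0.2), "turn on the lights"))
```
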
Combining Participatory and ESM: A Hybrid Approach to Collecting Annotated Mobility Data
Hsiu-Chi Chang, Yung-Ju Chang, Mark W. Newman, Chih-Hsin Lin
DOI: 10.1145/3334480.3383066 (published 2020-04-25)
Abstract: Collecting continually labeled activity data demands considerable effort from users, who must label a series of activities. We propose Checkpoint-and-Remind (CAR), a hybrid approach that combines participatory labeling (PART) and context-triggered ESM labeling (ESM). Checkpoint-and-Remind preserves the advantage of user control while reducing users' burden in recording activities, and it adds a context-triggered ESM mechanism as a backup that reminds users to label. In a preliminary evaluation of CAR with nine participants, who collected and labeled their mobility activity data over 15 weekdays, participants collected a larger amount of annotated mobility data using CAR than with PART or ESM. In addition, participants had a higher annotation rate when using CAR than when using ESM. Our results show that a hybrid approach combining manual and automated recording is promising. Our future work is to validate these results and measure more compliance-related metrics with more participants.
Citations: 0

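Checkpoint-and-Remind pairs user-initiated labels with a context-triggered ESM reminder as a backup. The abstract does not specify the trigger logic, so the sketch below shows one plausible reading in which a reminder fires only when a detected activity change stays unlabeled past a grace period; the class, field names, and threshold are assumptions.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CARLabeler:
    """Hybrid mobility-data labeler: participatory checkpoints plus an ESM reminder fallback."""
    grace_period_s: float = 600.0                         # how long to wait before reminding (assumed)
    labels: list[tuple[float, str]] = field(default_factory=list)
    pending_since: float | None = None                    # time of the last unlabeled activity change

    def checkpoint(self, label: str) -> None:
        """Participatory path: the user proactively records what they are doing."""
        self.labels.append((time.time(), label))
        self.pending_since = None                         # nothing left to remind about

    def on_activity_change_detected(self) -> None:
        """Context trigger: sensors suggest the user's mobility state changed."""
        if self.pending_since is None:
            self.pending_since = time.time()

    def maybe_remind(self) -> bool:
        """ESM fallback: prompt only if a change has gone unlabeled past the grace period."""
        if self.pending_since is not None and time.time() - self.pending_since > self.grace_period_s:
            return True   # a real app would show an ESM prompt here
        return False
```
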
Knowledge Maps: Building Conceptual CSS Knowledge Through Comparison
Daniel Zhu, Salome Wairimu Kariuki
DOI: 10.1145/3334480.3381444 (published 2020-04-25)
Abstract: Many CSS tutorials exist online, yet novice web developers struggle to learn and apply professional CSS techniques. In this paper, we introduce Knowledge Maps (KM), a platform that guides novice developers to understand and compare professional web examples in order to learn and apply each example's professional techniques. By comparing professional techniques, learners are able to identify the tradeoffs and use cases associated with each technique. KM introduces three process-management mechanisms to help learners understand examples: highlighted CSS properties, interactive CSS properties, and guided reflection prompts. In a user study in which 9 users interacted with the KM platform on two examples using CSS grid layout, learners were able to understand the pros and cons of the CSS grid layout technique used in each example. Learners also demonstrated that they could apply their understanding to new use cases.
Citations: 0

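Knowledge Maps highlights the CSS properties that differ between two professional examples so that learners can compare techniques side by side. The platform's own implementation is not described in the abstract; the sketch below shows only that comparison step, applied to two hypothetical grid-layout rule sets.

```python
def highlight_differences(example_a: dict[str, str],
                          example_b: dict[str, str]) -> dict[str, tuple[str | None, str | None]]:
    """Return the CSS properties whose values differ, or that exist in only one example."""
    diffs = {}
    for prop in sorted(set(example_a) | set(example_b)):
        a, b = example_a.get(prop), example_b.get(prop)
        if a != b:
            diffs[prop] = (a, b)
    return diffs

# Two hypothetical grid-layout rule sets for the same page region.
gallery_a = {"display": "grid", "grid-template-columns": "repeat(3, 1fr)", "gap": "1rem"}
gallery_b = {"display": "grid", "grid-template-columns": "repeat(auto-fill, minmax(200px, 1fr))"}

# Differing properties are what a learner would be prompted to compare and reflect on.
print(highlight_differences(gallery_a, gallery_b))
```
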
Demonstration of G-ID: Identifying 3D Prints Using Slicing Parameters
Mustafa Doga Dogan, Faraz Faruqi, Andrew Day Churchill, Kenneth Friedman, Leon Cheng, S. Subramanian, Stefanie Mueller
DOI: 10.1145/3334480.3383141 (published 2020-04-25)
Abstract: We demonstrate G-ID, a method that utilizes the subtle patterns left by the 3D printing process to distinguish and identify objects that otherwise look similar to the human eye. The key idea is to mark different instances of a 3D model by varying slicing parameters that do not change the model geometry but can be detected as machine-readable differences in the print. As a result, G-ID does not add anything to the object but exploits the patterns that appear as a byproduct of slicing, an essential step of the 3D printing pipeline. We introduce the G-ID slicing & labeling interface, which varies the settings for each instance, and the G-ID mobile app, which uses image processing techniques to retrieve the parameters and their associated labels from a photo of the 3D printed object.
Citations: 0

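G-ID distinguishes instances of the same model by giving each one a unique combination of slicing settings that leave the geometry untouched, then reads those settings back from a photo. The abstract does not list the exact parameters or value ranges used, so the sketch below only shows the bookkeeping step of assigning distinct parameter combinations to instances, with assumed parameter names and values.

```python
from itertools import product

# Assumed slicing parameters that alter surface texture but not model geometry.
SLICING_OPTIONS = {
    "bottom_line_angle_deg": [0, 30, 60, 90],
    "bottom_line_width_mm": [0.35, 0.40, 0.45],
    "infill_angle_deg": [0, 45, 90],
}

def assign_labels(instance_names: list[str]) -> dict[str, dict[str, float]]:
    """Give each instance of the same model a distinct slicing-parameter combination."""
    combos = [dict(zip(SLICING_OPTIONS, values)) for values in product(*SLICING_OPTIONS.values())]
    if len(instance_names) > len(combos):
        raise ValueError("not enough distinguishable parameter combinations for these instances")
    return {name: combo for name, combo in zip(instance_names, combos)}

labels = assign_labels(["key_alice", "key_bob", "key_carol"])
print(labels["key_bob"])   # the slicing settings that uniquely identify this print
```
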
Combining Eye Tracking and Verbal Response to Understand the Impact of a Global Filter
Franceli Linney Cibrian, Jazette Johnson, Viseth Sean, Hollis E. Pass, Louanne E. Boyd
DOI: 10.1145/3334480.3382897 (published 2020-04-25)
Abstract: Visual attention guides the integration of two streams: the global stream, which rapidly processes the scene, and the local stream, which processes details. For people with autism, the integration of these two streams can be disrupted by a tendency to privilege details (local processing) instead of seeing the big picture (global processing). Consequently, people with autism may struggle with typical visual attention, as evidenced by their verbal descriptions of local features when asked to describe overall scenes. This paper explores how one adult with autism sees and understands a global filter applied to natural scenes.
Citations: 0

LandSAGE: Collaborative Large Display Systems for Landslide Response in Southeast Asia
Dylan Kobayashi, Billy Troy Wooton, Nurit Kirshenbaum, J. Leigh, J. Haga
DOI: 10.1145/3334480.3382873 (published 2020-04-25)
Abstract: LandSAGE is a program that advocates for and trains scientists and policy makers in Southeast Asian countries, such as Thailand, Vietnam, Cambodia, and Laos, to use collaborative large display systems (CyberCANOEs) to monitor and mitigate landslides. In this late-breaking work, we provide an overview of the first of the program's three phases, spread over five workshops conducted in Southeast Asia. We detail a design workshop meant to understand the needs of local scientists and adapt them to large displays, describe an initial prototype we developed on the SAGE2 platform, and conclude with some of the challenges we encountered while bringing our workshops to Southeast Asia.
Citations: 1