Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems: Latest Publications

RayCursor: A 3D Pointing Facilitation Technique based on Raycasting
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300331
Marc Baloup, Thomas Pietrzak, Géry Casiez
{"title":"RayCursor: A 3D Pointing Facilitation Technique based on Raycasting","authors":"Marc Baloup, Thomas Pietrzak, Géry Casiez","doi":"10.1145/3290605.3300331","DOIUrl":"https://doi.org/10.1145/3290605.3300331","url":null,"abstract":"Raycasting is the most common target pointing technique in virtual reality environments. However, performance on small and distant targets is impacted by the accuracy of the pointing device and the user's motor skills. Current pointing facilitation techniques are currently only applied in the context of the virtual hand, i.e. for targets within reach. We propose enhancements to Raycasting: filtering the ray, and adding a controllable cursor on the ray to select the nearest target. We describe a series of studies for the design of the visual feedforward, filtering technique, as well as a comparative study between different 3D pointing techniques. Our results show that highlighting the nearest target is one of the most efficient visual feedforward technique. We also show that filtering the ray reduces error rate in a drastic way. Finally we show the benefits of RayCursor compared to Raycasting and another technique from the literature.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82826351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 78
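The abstract above gives no implementation details, so the following is only a minimal illustrative sketch of the core idea: a cursor constrained to the pointing ray selects (for highlighting) the nearest target, and exponential smoothing stands in for the paper's ray filtering. The function names, the smoothing constant, and the data layout are assumptions, not the published RayCursor design.

```python
# Illustrative sketch only: a cursor that slides along a pointing ray and
# selects the nearest target, with exponential smoothing of the ray direction.
# Parameter values and structure are assumptions, not the RayCursor paper's design.
import numpy as np

def smooth_direction(prev_dir, raw_dir, alpha=0.2):
    """Low-pass filter the ray direction to reduce hand jitter."""
    d = (1 - alpha) * prev_dir + alpha * raw_dir
    return d / np.linalg.norm(d)

def cursor_position(origin, direction, cursor_distance):
    """The cursor is a point constrained to the ray at a user-controlled distance."""
    return origin + cursor_distance * direction

def nearest_target(cursor, targets):
    """Return the index of the target closest to the cursor (the one to highlight)."""
    dists = [np.linalg.norm(t - cursor) for t in targets]
    return int(np.argmin(dists))

# Tiny usage example with hypothetical values.
origin = np.array([0.0, 1.5, 0.0])                       # controller position
direction = np.array([0.0, 0.0, 1.0])                    # current (already smoothed) ray direction
targets = [np.array([0.1, 1.4, 2.0]), np.array([0.5, 1.6, 3.0])]
cursor = cursor_position(origin, direction, cursor_distance=2.0)
print("nearest target index:", nearest_target(cursor, targets))
```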
Crowdsourcing Multi-label Audio Annotation Tasks with Citizen Scientists
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300522
M. Cartwright, G. Dove, Ana Elisa Méndez Méndez, J. Bello, O. Nov
{"title":"Crowdsourcing Multi-label Audio Annotation Tasks with Citizen Scientists","authors":"M. Cartwright, G. Dove, Ana Elisa Méndez Méndez, J. Bello, O. Nov","doi":"10.1145/3290605.3300522","DOIUrl":"https://doi.org/10.1145/3290605.3300522","url":null,"abstract":"Annotating rich audio data is an essential aspect of training and evaluating machine listening systems. We approach this task in the context of temporally-complex urban soundscapes, which require multiple labels to identify overlapping sound sources. Typically this work is crowdsourced, and previous studies have shown that workers can quickly label audio with binary annotation for single classes. However, this approach can be difficult to scale when multiple passes with different focus classes are required to annotate data with multiple labels. In citizen science, where tasks are often image-based, annotation efforts typically label multiple classes simultaneously in a single pass. This paper describes our data collection on the Zooniverse citizen science platform, comparing the efficiencies of different audio annotation strategies. We compared multiple-pass binary annotation, single-pass multi-label annotation, and a hybrid approach: hierarchical multi-pass multi-label annotation. We discuss our findings, which support using multi-label annotation, with reference to volunteer citizen scientists' motivations.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82828775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 41
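As a rough illustration of why multi-pass binary annotation scales poorly with the number of classes, the sketch below counts annotator judgements per clip under the three strategies named in the abstract. The toy taxonomy and the decision-count model are assumptions made for illustration, not the paper's task design or analysis.

```python
# Illustrative only: judgements required for one audio clip under each strategy.
TAXONOMY = {                      # hypothetical coarse -> fine sound classes
    "vehicle": ["car horn", "siren", "engine"],
    "human": ["talking", "shouting"],
    "machinery": ["jackhammer", "drilling"],
}
ALL_FINE = [f for fines in TAXONOMY.values() for f in fines]

def multi_pass_binary(present_fine):
    """One yes/no pass per fine class, regardless of what is actually present."""
    return len(ALL_FINE)

def single_pass_multilabel(present_fine):
    """One pass in which the annotator reviews the full class list once."""
    return 1

def hierarchical_multi_pass(present_fine):
    """First mark coarse categories, then label fine classes only within those present."""
    present_coarse = {c for c, fines in TAXONOMY.items()
                      if any(f in present_fine for f in fines)}
    return 1 + len(present_coarse)

clip = {"car horn", "talking"}    # hypothetical ground truth for one clip
for strategy in (multi_pass_binary, single_pass_multilabel, hierarchical_multi_pass):
    print(strategy.__name__, strategy(clip))
```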
Methodological Gaps in Predicting Mental Health States from Social Media: Triangulating Diagnostic Signals
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300364
S. Ernala, M. Birnbaum, Kristin A. Candan, Asra F. Rizvi, W. A. Sterling, J. Kane, M. Choudhury
{"title":"Methodological Gaps in Predicting Mental Health States from Social Media: Triangulating Diagnostic Signals","authors":"S. Ernala, M. Birnbaum, Kristin A. Candan, Asra F. Rizvi, W. A. Sterling, J. Kane, M. Choudhury","doi":"10.1145/3290605.3300364","DOIUrl":"https://doi.org/10.1145/3290605.3300364","url":null,"abstract":"A growing body of research is combining social media data with machine learning to predict mental health states of individuals. An implication of this research lies in informing evidence-based diagnosis and treatment. However, obtaining clinically valid diagnostic information from sensitive patient populations is challenging. Consequently, researchers have operationalized characteristic online behaviors as \"proxy diagnostic signals\" for building these models. This paper posits a challenge in using these diagnostic signals, purported to support clinical decision-making. Focusing on three commonly used proxy diagnostic signals derived from social media, we find that predictive models built on these data, although offer strong internal validity, suffer from poor external validity when tested on mental health patients. A deeper dive reveals issues of population and sampling bias, as well as of uncertainty in construct validity inherent in these proxies. We discuss the methodological and clinical implications of these gaps and provide remedial guidelines for future research.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90253842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 93
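The internal-versus-external validity gap described above can be illustrated with a generic evaluation pattern: cross-validate within proxy-labeled data, then test the same model on a separately labeled clinical cohort. The sketch below uses synthetic stand-in data and scikit-learn; it is not the paper's pipeline, features, or models.

```python
# Illustrative sketch: a classifier that looks strong under internal
# cross-validation on proxy-labeled data can perform much worse on an
# external, clinically verified cohort. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Proxy-labeled data: labels correlate strongly with the features (easy task).
X_proxy = rng.normal(size=(500, 20))
y_proxy = (X_proxy[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Clinical data: the same features relate only weakly to true diagnoses,
# simulating a construct-validity gap between proxy and clinical labels.
X_clinical = rng.normal(size=(200, 20))
y_clinical = (0.3 * X_clinical[:, 0] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression(max_iter=1000)

# Internal validity: cross-validation within the proxy-labeled dataset.
internal_auc = cross_val_score(model, X_proxy, y_proxy, cv=5, scoring="roc_auc").mean()

# External validity: train on proxy data, test on the clinical cohort.
model.fit(X_proxy, y_proxy)
external_auc = roc_auc_score(y_clinical, model.predict_proba(X_clinical)[:, 1])

print(f"internal CV AUC: {internal_auc:.2f}  external AUC: {external_auc:.2f}")
```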
Evaluating Sustainable Interaction Design of Digital Services: The Case of YouTube
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300627
C. Preist, D. Schien, P. Shabajee
{"title":"Evaluating Sustainable Interaction Design of Digital Services: The Case of YouTube","authors":"C. Preist, D. Schien, P. Shabajee","doi":"10.1145/3290605.3300627","DOIUrl":"https://doi.org/10.1145/3290605.3300627","url":null,"abstract":"Recent research has advocated for a broader conception of evaluation for Sustainable HCI (SHCI), using interdisciplinary insights and methods. In this paper, we put this into practice to conduct an evaluation of Sustainable Interaction Design (SID) of digital services. We explore how SID can contribute to corporate greenhouse gas (GHG) reduction strategies. We show how a Digital Service Provider (DSP) might incorporate SID into their design process and quantitatively evaluate a specific SID intervention by combining user analytics data with environmental life cycle assessment. We illustrate this by considering YouTube. Replacing user analytics data with aggregate estimates from publicly available sources, we estimate emissions associated with the deployment of YouTube to be approximately 10MtCO2e p.a. We estimate emissions reductions enabled through the use of an SID intervention from prior literature to be approximately 300KtCO2e p.a., and demonstrate that this is significant when considered alongside other emissions reduction interventions used by DSPs.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90494066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
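The abstract reports headline figures but not the underlying model; the sketch below only illustrates the general shape of such an estimate, multiplying aggregate usage by per-gigabyte energy intensity and grid carbon intensity. Every constant is a hypothetical placeholder, not a value from the paper or its life cycle assessment.

```python
# Illustrative arithmetic only: aggregate usage x energy intensity x carbon
# intensity. All numbers below are hypothetical placeholders.
HOURS_STREAMED_PER_YEAR = 1.0e12      # hypothetical total viewing hours p.a.
GB_PER_HOUR = 1.0                     # hypothetical average data volume per hour
KWH_PER_GB = 0.05                     # hypothetical network + device energy intensity
KG_CO2E_PER_KWH = 0.5                 # hypothetical grid carbon intensity

total_kwh = HOURS_STREAMED_PER_YEAR * GB_PER_HOUR * KWH_PER_GB
total_tco2e = total_kwh * KG_CO2E_PER_KWH / 1000.0   # kg -> tonnes
print(f"estimated footprint: {total_tco2e / 1e6:.1f} MtCO2e per year")

# An SID intervention (e.g. a hypothetical option that avoids sending video
# when only audio is wanted) would be assessed by re-running the same
# calculation with the reduced data volume.
AUDIO_ONLY_SHARE = 0.03               # hypothetical share of listening-only sessions
saved_tco2e = total_tco2e * AUDIO_ONLY_SHARE
print(f"illustrative saving: {saved_tco2e / 1e3:.0f} ktCO2e per year")
```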
Voice User Interfaces in Schools: Co-designing for Inclusion with Visually-Impaired and Sighted Pupils
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300608
Oussama Metatla, A. Oldfield, Taimur Ahmed, Antonis Vafeas, Sunny Miglani
{"title":"Voice User Interfaces in Schools: Co-designing for Inclusion with Visually-Impaired and Sighted Pupils","authors":"Oussama Metatla, A. Oldfield, Taimur Ahmed, Antonis Vafeas, Sunny Miglani","doi":"10.1145/3290605.3300608","DOIUrl":"https://doi.org/10.1145/3290605.3300608","url":null,"abstract":"Voice user interfaces (VUIs) are increasingly popular, particularly in homes. However, little research has investigated their potential in other settings, such as schools. We investigated how VUIs could support inclusive education, particularly for pupils with visual impairments (VIs). We organised focused discussions with educators at a school, with support staff from local authorities and, through bodystorming, with a class of 27 pupils. We then ran a series of co-design workshops with participants with mixed-visual abilities to design an educational VUI application. This provided insights into challenges faced by pupils with VIs in mainstream schools, and opened a space for educators, sighted and visually impaired pupils to reflect on and design for their shared learning experiences through VUIs. We present scenarios, a design space and an example application that show novel ways of using VUIs for inclusive education. We also reflect on co-designing with mixed-visual-ability groups in this space.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83400979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300376
N. Kimura, Michinari Kono, J. Rekimoto
{"title":"SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks","authors":"N. Kimura, Michinari Kono, J. Rekimoto","doi":"10.1145/3290605.3300376","DOIUrl":"https://doi.org/10.1145/3290605.3300376","url":null,"abstract":"The availability of digital devices operated by voice is expanding rapidly. However, the applications of voice interfaces are still restricted. For example, speaking in public places becomes an annoyance to the surrounding people, and secret information should not be uttered. Environmental noise may reduce the accuracy of speech recognition. To address these limitations, a system to detect a user's unvoiced utterance is proposed. From internal information observed by an ultrasonic imaging sensor attached to the underside of the jaw, our proposed system recognizes the utterance contents without the user's uttering voice. Our proposed deep neural network model is used to obtain acoustic features from a sequence of ultrasound images. We confirmed that audio signals generated by our system can control the existing smart speakers. We also observed that a user can adjust their oral movement to learn and improve the accuracy of their voice recognition.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89334336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 87
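The abstract does not specify the network architecture; the sketch below is only a generic example of the kind of model implied: a per-frame convolutional encoder followed by a recurrent layer that maps a sequence of ultrasound images to acoustic feature frames. Layer sizes, the 64x64 input resolution, and the 80-dimensional output are assumptions, not the authors' design.

```python
# Generic illustrative sketch (not the SottoVoce architecture): per-frame CNN
# encoder + GRU maps ultrasound image sequences to acoustic feature frames,
# which a separate synthesizer/vocoder could turn into audio.
import torch
import torch.nn as nn

class UltrasoundToAcoustic(nn.Module):
    def __init__(self, n_acoustic_features=80):
        super().__init__()
        self.encoder = nn.Sequential(               # encodes one 1x64x64 ultrasound frame
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        self.rnn = nn.GRU(input_size=512, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, n_acoustic_features)

    def forward(self, frames):                      # frames: (batch, time, 1, 64, 64)
        b, t = frames.shape[:2]
        z = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
        z, _ = self.rnn(z.reshape(b, t, -1))        # model temporal context across frames
        return self.head(z)                         # (batch, time, n_acoustic_features)

# Usage with random stand-in data.
model = UltrasoundToAcoustic()
dummy = torch.randn(2, 30, 1, 64, 64)               # 2 clips of 30 ultrasound frames each
print(model(dummy).shape)                           # torch.Size([2, 30, 80])
```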
People Who Can Take It: How Women Wikipedians Negotiate and Navigate Safety
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300702
Amanda Menking, Ingrid Erickson, W. Pratt
{"title":"People Who Can Take It: How Women Wikipedians Negotiate and Navigate Safety","authors":"Amanda Menking, Ingrid Erickson, W. Pratt","doi":"10.1145/3290605.3300702","DOIUrl":"https://doi.org/10.1145/3290605.3300702","url":null,"abstract":"Wikipedia is one of the most successful online communities in history, yet it struggles to attract and retain women editors-a phenomenon known as the gender gap. We investigate this gap by focusing on the voices of experienced women Wikipedians. In this interview-based study (N=25), we identify a core theme among these voices: safety. We reveal how our participants perceive safety within their community, how they manage their safety both conceptually and physically, and how they act on this understanding to create safe spaces on and off Wikipedia. Our analysis shows Wikipedia functions as both a multidimensional and porous space encompassing a spectrum of safety. Navigating this space requires these women to employ sophisticated tactics related to identity management, boundary management, and emotion work. We conclude with a set of provocations to spur the design of future online environments that encourage equity, inclusivity, and safety for historically marginalized users.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89345475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Direct Finger Manipulation of 3D Object Image with Ultrasound Haptic Feedback
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300317
Atsushi Matsubayashi, Yasutoshi Makino, H. Shinoda
{"title":"Direct Finger Manipulation of 3D Object Image with Ultrasound Haptic Feedback","authors":"Atsushi Matsubayashi, Yasutoshi Makino, H. Shinoda","doi":"10.1145/3290605.3300317","DOIUrl":"https://doi.org/10.1145/3290605.3300317","url":null,"abstract":"In this study, we prototype and examine a system that allows a user to manipulate a 3D virtual object with multiple fingers without wearing any device. An autostereoscopic display produces a 3D image and a depth sensor measures the movement of the fingers. When a user touches a virtual object, haptic feedback is provided by ultrasound phased arrays. By estimating the cross section of the finger in contact with the virtual object and by creating a force pattern around it, it is possible for the user to recognize the position of the surface relative to the finger. To evaluate our system, we conducted two experiments to show that the proposed feedback method is effective in recognizing the object surface and thereby enables the user to grasp the object quickly without seeing it.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87304244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
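To make the contact-estimation idea concrete, the sketch below models the fingertip as a sphere and the virtual surface as the plane z = 0, computes the circular cross-section where they intersect, and places ultrasound focal points around that circle. This geometry and all parameter values are illustrative assumptions, not the authors' rendering algorithm.

```python
# Illustrative geometry only: ring of focal points around the finger/surface
# intersection, used to render a force pattern at the contact cross-section.
import math

def contact_focal_points(finger_center, finger_radius=0.008, n_points=8):
    """Return focal points on the finger/surface intersection circle (or [] if no contact)."""
    x, y, z = finger_center
    if abs(z) >= finger_radius:          # finger sphere does not reach the plane z = 0
        return []
    circle_radius = math.sqrt(finger_radius**2 - z**2)
    return [
        (x + circle_radius * math.cos(2 * math.pi * k / n_points),
         y + circle_radius * math.sin(2 * math.pi * k / n_points),
         0.0)                            # focal points lie on the virtual surface
        for k in range(n_points)
    ]

# Usage: a fingertip whose center is 5 mm above the surface (8 mm radius,
# i.e. 3 mm of penetration) yields a ring of focal points.
for p in contact_focal_points((0.0, 0.0, 0.005)):
    print(tuple(round(c, 4) for c in p))
```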
Does Who Matter?: Studying the Impact of Relationship Characteristics on Receptivity to Mobile IM Messages
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300756
Hao-Ping Lee, Kuan-yin Chen, Chih-Heng Lin, Chia-Yu Chen, Yu-Lin Chung, Yung-Ju Chang, Chien-Ru Sun
{"title":"Does Who Matter?: Studying the Impact of Relationship Characteristics on Receptivity to Mobile IM Messages","authors":"Hao-Ping Lee, Kuan-yin Chen, Chih-Heng Lin, Chia-Yu Chen, Yu-Lin Chung, Yung-Ju Chang, Chien-Ru Sun","doi":"10.1145/3290605.3300756","DOIUrl":"https://doi.org/10.1145/3290605.3300756","url":null,"abstract":"This study examines the characteristics of mobile instant-messaging users' relationships with their social contacts and the effects of both relationship and interruption context on four measures of receptivity: Attentiveness, Responsiveness, Interruptibility, and Opportuneness. Overall, interruption context overshadows relationship characteristics as predictors of all four of these facets of receptivity; this overshadowing was most acute for Interruptibility and Opportuneness, but existed for all factors. In addition, while Mobile Maintenance Expectation and Activity Engagement were negatively correlated with all receptivity measures, each such measure had its own set of predictors, highlighting the conceptual differences among the measures. Finally, delving more deeply into potential relationship effects, we found that a single, simple closeness question was as effective at predicting receptivity as the 12-item Unidimensional Relationship Closeness Scale.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84699674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems Pub Date : 2019-05-02 DOI: 10.1145/3290605.3300673
Ayman Alzayat, Mark S. Hancock, Miguel A. Nacenta
{"title":"Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives","authors":"Ayman Alzayat, Mark S. Hancock, Miguel A. Nacenta","doi":"10.1145/3290605.3300673","DOIUrl":"https://doi.org/10.1145/3290605.3300673","url":null,"abstract":"Virtual reality (VR) strives to replicate the sensation of the physical environment by mimicking people's perceptions and experience of being elsewhere. These experiences are of-ten mediated by the objects and tools we interact with in the virtual world (e.g., a controller). Evidence from psychology posits that when using the tool proficiently, it becomes em-bodied (i.e., an extension of one's body). There is little work,however, on how to measure this phenomenon in VR, andon how different types of tools and controllers can affect the experience of interaction. In this work, we leverage cognitive psychology and philosophy literature to construct the Locus-of-Attention Index (LAI), a measure of tool embodiment. We designed and conducted a study that measures readiness-to-hand and unreadiness-to-hand for three VR interaction techniques: hands, a physical tool, and a VR controller. The study shows that LAI can measure differences in embodiment with working and broken tools and that using the hand directly results in more embodiment than using controllers.","PeriodicalId":20454,"journal":{"name":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84822186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20