Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems: Latest Publications

Learning and Practicing Logic Circuits: Development of a Mobile-based Learning Prototype
M. Seraj
{"title":"Learning and Practicing Logic Circuits: Development of a Mobile-based Learning Prototype","authors":"M. Seraj","doi":"10.1145/3411763.3451720","DOIUrl":"https://doi.org/10.1145/3411763.3451720","url":null,"abstract":"Nowadays, with the advent of electronic devices in everyday life, mobile devices can be utilized for learning purposes. When designing a mobile-based learning application, a large number of aspects should be taken into account. For the present paper, the following aspects are of special importance: first, it should be considered how to represent information; second, possible interactions between learner and system should be defined; third – and depending on the second aspect – it should be considered how real-time responses can be provided by the system. Moreover, psychological theories as for example the 4C/ID model and findings with respect to blended learning environments should be taken into account. In this paper, a mobile-based learning prototype concerning the learning topic ”logic circuit design” is presented which considers the mentioned aspects to support independent practice. The prototype includes four different representations: (i) code-based (Verilog hardware description language), (ii) graphical-based (gate-level view), (iii) Boolean function, and (iv) truth table for each gate. The proposed learning system divides the learning content into different sections to support independent practice in meaningful steps. Multiple representations are included in order to foster understanding and transfer. The resulting implications for future work are discussed.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121008605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
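The four linked representations can be illustrated with a small, self-contained example. The Python sketch below is not taken from the prototype; it shows how one gate-level circuit (a half adder, chosen here purely for illustration) maps between a Boolean-function representation and a generated truth table, with the equivalent code-based (Verilog) representation shown in a comment.

```python
from itertools import product

# Equivalent Verilog (the code-based representation), for comparison:
#   module half_adder(input a, input b, output s, output c);
#     assign s = a ^ b;   // Boolean function: s = a XOR b
#     assign c = a & b;   // Boolean function: c = a AND b
#   endmodule

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Boolean-function representation of a half adder."""
    s = a ^ b  # sum bit
    c = a & b  # carry bit
    return s, c

# Truth-table representation, generated from the Boolean function.
print("a b | s c")
for a, b in product((0, 1), repeat=2):
    s, c = half_adder(a, b)
    print(f"{a} {b} | {s} {c}")
```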
The Future of Human-Food Interaction
Jialin Deng, Yan Wang, Carlos Velasco, Ferran Altarriba Bertran, R. Comber, Marianna Obrist, K. Isbister, C. Spence, F. Mueller
{"title":"The Future of Human-Food Interaction","authors":"Jialin Deng, Yan Wang, Carlos Velasco, Ferran Altarriba Bertran, R. Comber, Marianna Obrist, K. Isbister, C. Spence, F. Mueller","doi":"10.1145/3411763.3441312","DOIUrl":"https://doi.org/10.1145/3411763.3441312","url":null,"abstract":"There is an increasing interest in food within the HCI discipline, with many interactive prototypes emerging that augment, extend and challenge the various ways people engage with food, ranging from growing plants, cooking ingredients, serving dishes and eating together. Grounding theory is also emerging that in particular draws from embodied interactions, highlighting the need to consider not only instrumental, but also experiential factors specific to human-food interactions. Considering this, we are provided with an opportunity to extend human-food interactions through knowledge gained from designing novel systems emerging through technical advances. This workshop aims to explore the possibility of bringing practitioners, researchers and theorists together to discuss the future of human-food interaction with a particular highlight on the design of experiential aspects of human-food interactions beyond the instrumental. This workshop extends prior community building efforts in this area and hence explicitly invites submissions concerning the empirically-informed knowledge of how technologies can enrich eating experiences. In doing so, people will benefit not only from new technologies around food, but also incorporate the many rich benefits that are associated with eating, especially when eating with others.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121697656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Instrumeteor: Authoring tool for Guitar Performance Video
Yuichi Atarashi
{"title":"Instrumeteor: Authoring tool for Guitar Performance Video","authors":"Yuichi Atarashi","doi":"10.1145/3411763.3451521","DOIUrl":"https://doi.org/10.1145/3411763.3451521","url":null,"abstract":"To show off their playing, musicians publish musical performance videos on streaming services. In order to find out typical characteristics of guitar performance videos, we carried out a quantitative survey of guitar performance videos. Then, we discuss key problems of creating effects informed by the survey. According to the discussion, authoring videos with typical effects takes a long time even for experienced users because they typically need to combine multiple video tracks (e.g., lyrics and videos shot from multiple angles) into a single track. They need to synchronize all tracks with the musical piece and set transitions between them at the right timing, aware of the musical structure. This paper presents Instrumeteor, an authoring tool for musical performance videos. First, it automatically analyzes the musical structure in the tracks to align them on a single timeline. Second, it implements typical video effects informed by the survey. In this way, our tool reduces manual work and unleashes the musicians’ creativity.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121355529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
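The abstract does not specify Instrumeteor's alignment algorithm. A common baseline for synchronizing two recordings of the same piece is to cross-correlate their onset-strength envelopes; the Python sketch below illustrates that generic idea under assumed placeholder filenames, not the tool's actual implementation.

```python
import numpy as np
import librosa

# Load the audio of two takes/angles of the same performance.
# Filenames are placeholders for illustration.
y_a, sr = librosa.load("camera_a.wav", sr=22050)
y_b, _ = librosa.load("camera_b.wav", sr=22050)

# Onset-strength envelopes capture the rhythmic structure of each track.
env_a = librosa.onset.onset_strength(y=y_a, sr=sr)
env_b = librosa.onset.onset_strength(y=y_b, sr=sr)

# Cross-correlate the (mean-centered) envelopes to find the lag, in onset
# frames, at which the two tracks line up best.
corr = np.correlate(env_a - env_a.mean(), env_b - env_b.mean(), mode="full")
lag_frames = corr.argmax() - (len(env_b) - 1)

# Convert the frame lag to seconds (librosa's default hop is 512 samples).
lag_seconds = lag_frames * 512 / sr
print(f"Shift track B by {lag_seconds:.3f} s to align it with track A")
```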
TactiHelm: Tactile Feedback in a Cycling Helmet for Collision Avoidance
Dong-Bach Vo, Julia Saari, S. Brewster
{"title":"TactiHelm: Tactile Feedback in a Cycling Helmet for Collision Avoidance","authors":"Dong-Bach Vo, Julia Saari, S. Brewster","doi":"10.1145/3411763.3451580","DOIUrl":"https://doi.org/10.1145/3411763.3451580","url":null,"abstract":"This paper introduces TactiHelm, a helmet that can inform cyclists about potential collisions. To inform the design of TactiHelm, a survey on cycling safety was conducted. The results highlighted the need for a support system to inform on location and proximity of surrounding vehicles. A set of tactile cues for TactiHelm conveying proximity and directions of the collisions were designed and evaluated. The results show that participants could correctly identify proximity up to 91% and directions up to 85% when tactile cues were delivered on the head, making TactiHelm a suitable device for notifications when cycling.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114015502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
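The paper does not publish TactiHelm's cue mapping. As a hedged sketch, one plausible mapping from a vehicle's bearing and distance to a helmet actuator and vibration intensity could look like the following; the four-motor layout, the 20 m cutoff, and all names are illustrative assumptions, not the actual design.

```python
from dataclasses import dataclass

# Assumed layout: four vibration motors around the helmet rim.
ACTUATORS = ("front", "right", "back", "left")

@dataclass
class TactileCue:
    actuator: str     # which motor to drive (encodes direction)
    intensity: float  # 0.0 (off) .. 1.0 (max), encodes proximity

def cue_for_vehicle(bearing_deg: float, distance_m: float) -> TactileCue | None:
    """Map a vehicle's bearing (0 = ahead, clockwise) and distance to a cue."""
    if distance_m > 20.0:  # assumed: ignore vehicles farther than 20 m
        return None
    # Quantize the bearing into one of four 90-degree actuator sectors.
    sector = int(((bearing_deg % 360) + 45) // 90) % 4
    # Closer vehicles produce stronger vibration.
    intensity = 1.0 - distance_m / 20.0
    return TactileCue(ACTUATORS[sector], round(intensity, 2))

# A car approaching from behind, 5 m away:
print(cue_for_vehicle(170.0, 5.0))  # TactileCue(actuator='back', intensity=0.75)
```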
LightPaintAR: Assist Light Painting Photography with Augmented Reality
Tianyi Wang, Xun Qian, F. He, K. Ramani
{"title":"LightPaintAR: Assist Light Painting Photography with Augmented Reality","authors":"Tianyi Wang, Xun Qian, F. He, K. Ramani","doi":"10.1145/3411763.3451672","DOIUrl":"https://doi.org/10.1145/3411763.3451672","url":null,"abstract":"Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a novel interface that leverages augmented reality (AR) traces as a spatial reference to enable precise movement of the light sources. LightPaintAR allows users to draft, edit, and adjust virtual light traces in AR, and move light sources along the AR traces to generate accurate light traces on photos. With LightPaintAR, users can light paint complex patterns with multiple strokes and colors. We evaluate the effectiveness and the usability of our system with a user study and showcase multiple light paintings created by the users. Further, we discuss future improvements of LightPaintAR.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122442769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
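For background on the underlying photographic effect (not part of LightPaintAR itself): a long exposure can be simulated from a video by keeping, per pixel, the brightest value seen across all frames, which is how a moving light source leaves a trace. A minimal OpenCV sketch, assuming a hypothetical input file light_painting.mp4:

```python
import cv2
import numpy as np

# Simulate a long-exposure light painting from a video: each output pixel
# keeps the brightest value it ever saw, so a moving light leaves a trace.
cap = cv2.VideoCapture("light_painting.mp4")  # placeholder filename

exposure = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if exposure is None:
        exposure = frame.copy()
    else:
        # Per-pixel, per-channel maximum acts like film accumulating light.
        exposure = np.maximum(exposure, frame)

cap.release()
if exposure is not None:
    cv2.imwrite("light_trace.png", exposure)
```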
Virtual Global Landmark: An Augmented Reality Technique to Improve Spatial Navigation Learning
Avinash Kumar Singh, Jia Liu, C. A. T. Cortes, Chin-Teng Lin
{"title":"Virtual Global Landmark: An Augmented Reality Technique to Improve Spatial Navigation Learning","authors":"Avinash Kumar Singh, Jia Liu, C. A. T. Cortes, Chin-Teng Lin","doi":"10.1145/3411763.3451634","DOIUrl":"https://doi.org/10.1145/3411763.3451634","url":null,"abstract":"Navigation is a multifaceted human ability involving complex cognitive functions. It allows the active exploration of unknown environments without becoming lost while enabling us to move efficiently across well-known spaces. However, the increasing reliance on navigation assistance systems reduces surrounding environment processing and decreases spatial knowledge acquisition and thus orienting ability. To prevent such a skill loss induced by current navigation support systems like Google Maps, we propose a novel landmark technique in augmented reality (AR): the virtual global landmark (VGL). This technique seeks to help navigation and promote spatial learning. We conducted a pilot study with five participants to compare the directional arrows with VGL. Our result suggests that the participants learned more about the environment while navigation using VGL than directional arrows without any significant mental workload increase. The results have a substantial impact on the future of our navigation system.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122869818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
JINSense: Repurposing Electrooculography Sensors on Smart Glass for Midair Gesture and Context Sensing
H. Yeo, Juyoung Lee, Woontack Woo, H. Koike, A. Quigley, K. Kunze
{"title":"JINSense: Repurposing Electrooculography Sensors on Smart Glass for Midair Gesture and Context Sensing","authors":"H. Yeo, Juyoung Lee, Woontack Woo, H. Koike, A. Quigley, K. Kunze","doi":"10.1145/3411763.3451741","DOIUrl":"https://doi.org/10.1145/3411763.3451741","url":null,"abstract":"In this work, we explore a new sensing technique for smart eyewear equipped with Electrooculography (EOG) sensors. We repurpose the EOG sensors embedded in a JINS MEME smart eyewear, originally designed to detect eye movement, to detect midair hand gestures. We also explore the potential of sensing human proximity, rubbing action and to differentiate materials and objects using this sensor. This new found sensing capabilities enable a various types of novel input and interaction scenarios for such wearable eyewear device, whether it is worn on body or resting on a desk.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131362663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
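The abstract does not detail the recognition pipeline. A typical baseline for such repurposed-sensor work is to window the raw EOG channels, extract simple per-channel statistics, and train an off-the-shelf classifier; the sketch below shows only that generic shape on synthetic data and is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel mean, std, and peak-to-peak over one EOG window."""
    return np.concatenate(
        [window.mean(axis=0), window.std(axis=0), np.ptp(window, axis=0)]
    )

# Synthetic stand-in for labeled EOG windows: 200 windows of 50 samples
# across 3 assumed channels; real data would come from the eyewear.
X = np.stack([features(rng.normal(size=(50, 3))) for _ in range(200)])
y = rng.integers(0, 3, size=200)  # 3 hypothetical gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# On random synthetic data this hovers around chance (~0.33); the point
# is the pipeline shape, not the score.
print(f"accuracy on synthetic data: {clf.score(X_te, y_te):.2f}")
```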
BlahBlahBot: Facilitating Conversation between Strangers using a Chatbot with ML-infused Personalized Topic Suggestion
Donghoon Shin, Sang-Taek Yoon, Soomin Kim, Joonhwan Lee
{"title":"BlahBlahBot: Facilitating Conversation between Strangers using a Chatbot with ML-infused Personalized Topic Suggestion","authors":"Donghoon Shin, Sang-Taek Yoon, Soomin Kim, Joonhwan Lee","doi":"10.1145/3411763.3451771","DOIUrl":"https://doi.org/10.1145/3411763.3451771","url":null,"abstract":"It is a prevalent behavior of having a chat with strangers in online settings where people can easily gather. Yet, people often find it difficult to initiate and maintain conversation due to the lack of information about strangers. Hence, we aimed to facilitate conversation between the strangers with the use of machine learning (ML) algorithms and present BlahBlahBot, an ML-infused chatbot that moderates conversation between strangers with personalized topics. Based on social media posts, BlahBlahBot supports the conversation by suggesting topics that are likely to be of mutual interest between users. A user study with three groups (control, random topic chatbot, and BlahBlahBot; N=18) found the feasibility of BlahBlahBot in increasing both conversation quality and closeness to the partner, along with the factors that led to such increases from the user interview. Overall, our preliminary results imply that an ML-infused conversational agent can be effective for augmenting a dyadic conversation.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130197396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
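The paper does not disclose its topic-selection model. One simple way to surface topics of likely mutual interest from two users' posts is to score shared terms by the smaller of their TF-IDF weights; the sketch below illustrates this idea with invented example posts, not BlahBlahBot's actual ML model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented example posts for two strangers.
user_a_posts = "went bouldering again, the new climbing gym downtown is great"
user_b_posts = "sore from the climbing gym, but bouldering season is here"

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([user_a_posts, user_b_posts]).toarray()
terms = vec.get_feature_names_out()

# Score each term by the smaller of its two weights, so only terms
# that matter to BOTH users rank highly.
mutual = tfidf.min(axis=0)
top = sorted(zip(mutual, terms), reverse=True)[:3]
print("suggested topics:", [term for score, term in top if score > 0])
```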
Fake Moods: Can Users Trick an Emotion-Aware VoiceBot?
Yong Ma, Heiko Drewes, A. Butz
{"title":"Fake Moods: Can Users Trick an Emotion-Aware VoiceBot?","authors":"Yong Ma, Heiko Drewes, A. Butz","doi":"10.1145/3411763.3451744","DOIUrl":"https://doi.org/10.1145/3411763.3451744","url":null,"abstract":"The ability to deal properly with emotion could be a critical feature of future VoiceBots. Humans might even choose to use fake emotions, e.g., sound angry to emphasize what they are saying or sound nice to get what they want. However, it is unclear whether current emotion detection methods detect such acted emotions properly, or rather the true emotion of the speaker. We asked a small number of participants (26) to mimic five basic emotions and used an open source emotion-in-voice detector to provide feedback on whether their acted emotion was recognized as intended. We found that it was difficult for participants to mimic all five emotions and that certain emotions were easier to mimic than others. However, it remains unclear whether this is due to the fact that emotion was only acted or due to the insufficiency of the detection software. As an intended side effect, we collected a small corpus of labeled data for acted emotion in speech, which we plan to extend and eventually use as training data for our own emotion detection. We present the study setup and discuss some insights on our results.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126977117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
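For context, emotion-in-voice detectors of the kind used in the study typically extract spectral features such as MFCCs from the recording and feed them to a trained classifier. The sketch below shows that generic feature-extraction step with librosa; the filename is a placeholder and the downstream classifier is assumed, not the specific open-source detector the authors used.

```python
import numpy as np
import librosa

# Load a speech clip (placeholder filename for an acted-emotion recording).
y, sr = librosa.load("acted_angry.wav", sr=16000)

# MFCCs summarize the spectral envelope, a standard feature for
# speech-emotion recognition.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Collapse the time axis into a fixed-length utterance descriptor.
feature_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(feature_vector.shape)  # (26,) -> input to a trained emotion classifier
```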
The Peril and Potential of XR-based Interactions with Wildlife
Daniel Pimentel
{"title":"The Peril and Potential of XR-based Interactions with Wildlife","authors":"Daniel Pimentel","doi":"10.1145/3411763.3450378","DOIUrl":"https://doi.org/10.1145/3411763.3450378","url":null,"abstract":"In “Being a Beast”, Charles Foster recounts living with, and as, wildlife (e.g., otters, foxes). These encounters, he contends, forge human-nature connections which have waned, negatively impacting biodiversity conservation. Yet, we need not live amidst beasts to bridge the human-nature gap. Cross-reality (XR) platforms (i.e., virtual and augmented reality) have the unique capacity to facilitate pseudo interactions with, and as, wildlife, connecting audiences to the plight of endangered species. However, XR-based wildlife interaction, I argue, is a double-edged sword whose implementation warrants as much attention in HCI as in environmental science. In this paper I highlight the promise of XR-based wildlife encounters, and discuss dilemmas facing developers tasked with fabricating mediated interactions with wildlife. I critique this approach by outlining how such experiences may negatively affect humans and the survivability of the very species seeking to benefit from them.","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127011698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8