Proceedings of the Augmented Humans International Conference: Latest Publications

ExemPoser
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384788
Katsuhito Sasaki, Keisuke Shiro, J. Rekimoto
Abstract: It is important for beginners to imitate the poses of experts in various sports; in sport climbing especially, performance depends greatly on the pose taken for a given set of holds. However, it is difficult for beginners to learn the proper poses for all patterns from experts, since the holds are completely different for each course. We therefore propose a system that predicts an expert's pose from the positions of the climber's hands and feet (the positions of the holds the climber is using) with a neural network. In other words, our system simulates what pose experts would take for the holds the climber is now using. The positions of the hands and feet are calculated from an image of the climber captured from behind. To let users check the ideal pose in real time during practice, we adopted a simple, lightweight network structure with little computational delay. We asked experts to compare the poses predicted by our system with the poses of beginners, and confirmed that the predicted poses were in most cases better than or as good as those of beginners.
Citations: 7
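The abstract describes mapping four 2D limb positions to a full-body pose with a network kept small for real-time use. As an illustration only, here is a minimal PyTorch sketch of such a mapping; the layer sizes, joint count, and coordinate convention are assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    """Maps the 2D positions of both hands and both feet (8 values) to a
    full-body pose. All sizes here are illustrative assumptions."""

    def __init__(self, n_joints: int = 14):
        super().__init__()
        self.n_joints = n_joints
        # A small MLP keeps inference latency low for real-time feedback.
        self.net = nn.Sequential(
            nn.Linear(8, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_joints * 2),  # (x, y) per predicted joint
        )

    def forward(self, limb_positions: torch.Tensor) -> torch.Tensor:
        # limb_positions: (batch, 8) -> pose: (batch, n_joints, 2)
        return self.net(limb_positions).view(-1, self.n_joints, 2)

model = PosePredictor()
holds = torch.rand(1, 8)   # normalized image coordinates of hands and feet
pose = model(holds)        # (1, 14, 2): a predicted expert-like pose
```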
Accelerating Skill Acquisition of Two-Handed Drumming using Pneumatic Artificial Muscles
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384780
Takashi Goto, Swagata Das, Katrin Wolf, Pedro Lopes, Y. Kurita, K. Kunze
Abstract: While computers excel at augmenting users' cognitive abilities, only recently have we started utilizing their full potential to enhance our physical abilities. More and more wearable force-feedback devices have been developed, based on exoskeletons, electrical muscle stimulation (EMS), or pneumatic actuators. The latter, pneumatic artificial muscles, are of particular interest since they strike an interesting balance: lighter than exoskeletons and more precise than EMS. However, the promise of using artificial muscles to support skill acquisition and train users still lacks empirical validation. In this paper, we unveil how pneumatic artificial muscles impact skill acquisition, using two-handed drumming as an example use case. We conducted a user study comparing participants' drumming performance after training with audio alone or with our artificial-muscle setup. Our haptic system comprises four pneumatic muscles and can actuate the user's forearm to drum accurately at up to 80 bpm. We show that pneumatic muscles significantly improve participants' correct recall of drumming patterns compared to auditory training.
Citations: 8
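The tempo figure is concrete enough to sketch: at 80 bpm, each beat lasts 60/80 = 0.75 s. A toy Python timing loop of that kind follows; `actuate` is a hypothetical placeholder for a real valve-control call, not part of the authors' system.

```python
import time

def actuate(side: str) -> None:
    # Placeholder: a real driver would open the valve on that arm's muscle.
    print(f"hit: {side}")

def play_pattern(pattern: list[str], bpm: int = 80) -> None:
    beat = 60.0 / bpm  # seconds per beat; 80 bpm -> 0.75 s
    for side in pattern:
        actuate(side)
        time.sleep(beat)

# Alternating left/right strokes, as in two-handed drumming training.
play_pattern(["L", "R", "L", "L", "R", "R"], bpm=80)
```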
WristLens
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384797
Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, H. Koike, Woontack Woo, Aaron Quigley
Abstract: WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces, using an optical motion sensor embedded in a wrist strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. An experimental study was conducted to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are recognized accurately, whereas shape gestures are recognized less reliably. Finally, we explore the interaction design space enabled by WristLens, and demonstrate novel use cases and applications such as on-body interaction, bimanual interaction, cursor control, and 3D measurement.
Citations: 2
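An optical motion sensor of this kind reports per-frame (dx, dy) displacements, and a stroke's direction can be recovered from their sum. The sketch below shows one plausible four-direction classifier; the travel threshold, axis convention, and direction set are assumptions, since the paper's recognizer is not reproduced here.

```python
import math

def classify_direction(samples: list[tuple[float, float]],
                       min_travel: float = 50.0) -> str:
    """Classify a stroke from per-frame (dx, dy) optical-flow samples."""
    dx = sum(s[0] for s in samples)
    dy = sum(s[1] for s in samples)
    if math.hypot(dx, dy) < min_travel:
        return "none"  # too little travel to count as a gesture
    # Angle of the overall displacement, snapped to the nearest cardinal.
    angle = math.degrees(math.atan2(dy, dx)) % 360
    names = ["right", "up", "left", "down"]  # assumes y grows upward
    return names[int(((angle + 45) % 360) // 90)]

print(classify_direction([(12.0, 1.0)] * 10))  # -> "right"
```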
Wearable Reasoner: Towards Enhanced Human Rationality Through A Wearable Device With An Explainable AI Assistant
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384799
Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes
Abstract: Human judgments and decisions are prone to errors in reasoning caused by factors such as personal biases and external misinformation. We explore the possibility of enhanced reasoning by implementing a wearable AI system as a symbiotic counterpart to the human. We present "Wearable Reasoner", a proof-of-concept wearable system capable of analyzing whether an argument is stated with supporting evidence or not. We explore the impact of argumentation mining and of the explainability of the AI feedback on the user through an experimental study of verbal statement evaluation tasks. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements that are supported by evidence and those that are not. When assisted by an AI system with explainable feedback, users consider claims supported by evidence significantly more reasonable, and agree with them significantly more, than claims lacking evidence. Qualitative interviews reveal users' internal processes of reflecting on and integrating the new information in their judgment and decision making, emphasizing improved evaluation of presented arguments.
Citations: 14
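The core classification step, deciding whether a statement carries supporting evidence, can be illustrated with a generic text classifier. The sketch below uses TF-IDF features and logistic regression over invented placeholder sentences; it is a stand-in for the idea, not the authors' argument-mining model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = claim backed by cited evidence, 0 = bare assertion.
train_texts = [
    "Crime fell 12 percent according to the 2019 FBI report.",
    "A peer-reviewed study of 5,000 patients found the drug effective.",
    "Everyone knows this policy is a disaster.",
    "I just feel that this cannot be true.",
]
train_labels = [1, 1, 0, 0]

# Bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["Sales doubled, as shown in the quarterly audit."]))
```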
VersaTouch
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384778
Yilei Shi, Haimo Zhang, Jiashuo Cao, Suranga Nanayakkara
Abstract: We present VersaTouch, a portable, plug-and-play system that uses active acoustic sensing to track fine-grained touch locations, as well as the touch force of multiple fingers, on everyday surfaces without permanently instrumenting them or requiring extensive calibration. Our system is versatile in multiple aspects. First, with simple calibration, VersaTouch can be arranged in arbitrary layouts to fit crowded surfaces while retaining its accuracy. Second, various modalities of touch input, such as distance and position, can be supported depending on the number of sensors used, to suit the interaction scenario. Third, VersaTouch can sense multi-finger touch and touch force, as well as identify the touch source. Last, VersaTouch can provide vibrotactile feedback to fingertips through the same actuators used for touch sensing. We conducted a series of studies and demonstrated that VersaTouch was able to track finger touch in various layouts, with average error from 9.62 mm to 14.25 mm on different surfaces within a circular area of 400 mm diameter centred on the sensors, as well as detect touch force. Finally, we discuss the interaction design space and interaction techniques enabled by VersaTouch.
Citations: 7
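With several acoustic sensors, a touch point can in principle be localized from per-sensor distances (time-of-flight multiplied by the wave speed in the surface) by multilateration. The following sketch shows the standard least-squares version of that geometry; it is offered for illustration and is not VersaTouch's published pipeline.

```python
import numpy as np

def locate(sensors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """sensors: (n, 2) positions in mm; dists: (n,) distances in mm.
    Returns the least-squares touch position."""
    # Subtracting the first sensor's circle equation from the others
    # cancels the x^2 + y^2 terms, leaving the linear system A @ p = b.
    x0, y0, d0 = sensors[0, 0], sensors[0, 1], dists[0]
    A = 2 * (sensors[1:] - sensors[0])
    b = (d0**2 - dists[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - x0**2 - y0**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Three sensors at the corners of a 400 mm region; simulate a touch.
sensors = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0]])
touch = np.array([120.0, 250.0])
dists = np.linalg.norm(sensors - touch, axis=1)
print(locate(sensors, dists))  # ~[120. 250.]
```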
Understanding Face Gestures with a User-Centered Approach Using Personal Computer Applications as an Example
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3385333
Yenchin Lai, Benjamin Tag, K. Kunze, R. Malaka
Abstract: While face gesture input has been proposed by researchers, the question of which gestures are practical remains unsolved. We present the first comprehensive investigation of user-defined face gestures as an augmented input modality. Based on a focus group discussion, we developed three sets of tasks and asked participants to spontaneously produce face gestures to complete them. We report the findings of a user study and discuss users' preferences for face gestures. The results inform the development of future interaction systems utilizing face gestures.
Citations: 3
Facilitating Experiential Knowledge Sharing through Situated Conversations
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384798
R. Fujikura, Y. Sumi
Abstract: This paper proposes a system that facilitates knowledge sharing among people in similar situations by providing audio of past conversations. Our system records all conversations among users in specific settings, such as tourist spots, museums, and digital fabrication studios, and then provides users in a similar situation with fragments of the accumulated conversations at the right moment. To segment and retrieve past conversations from the vast amounts of captured data, we focus on non-verbal contextual information: the location, attention targets, and hand operations of the conversation participants. All conversations are recorded without any selection or classification. The delivery of audio to a user is determined not by the content of the conversation but by the similarity of situations between the conversation participants and the user, as sketched below. To demonstrate the concept, we performed a series of experiments observing changes in user behavior due to past conversations related to the situation in a digital fabrication workshop. Since we have not yet achieved a satisfactory implementation for sensing the user's situation, we used the Wizard of Oz (WOZ) method: the experimenter visually judges changes in the user's situation and inputs them to the system, and the system automatically provides the user with past conversations corresponding to that situation. Experimental results show that most of the conversations presented when the situation matched perfectly were related to the user's situation, and some of them effectively prompted users to change their behavior. Interestingly, we observed that conversations that took place in the same area but were unrelated to the current task also had the effect of expanding the user's knowledge. We also observed a case in which a conversation highly relevant to the user's situation was presented at the right time, but the user could not use the knowledge to solve the problem in the current task. This shows a limitation of our system: even if a knowledgeable conversation is provided at the right moment, it is useless unless it fits the user's knowledge level.
Citations: 0
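The retrieval rule stated in the abstract, matching on situation rather than on conversation content, can be made concrete with a toy similarity score over the three contextual cues. The field names and scoring scheme below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Non-verbal situation cues, per the abstract's three categories."""
    location: str
    attention: str
    operation: str

def similarity(a: Context, b: Context) -> int:
    # Count matching contextual cues (0..3); content is never inspected.
    return sum([a.location == b.location,
                a.attention == b.attention,
                a.operation == b.operation])

# Hypothetical archive of past conversation clips with recorded contexts.
clips = [
    (Context("laser-cutter", "control panel", "setting power"), "clip_017.wav"),
    (Context("3d-printer", "print bed", "leveling"), "clip_042.wav"),
]

now = Context("laser-cutter", "control panel", "setting power")
best = max(clips, key=lambda c: similarity(c[0], now))
print(best[1])  # the clip recorded in the most similar situation
```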
The Lateral Line: Augmenting Spatiotemporal Perception with a Tactile Interface
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3384775
Matti Krüger, Christiane B. Wiebel-Herboth, H. Wersing
Abstract: In this paper we describe a concept for artificially supplementing people's spatiotemporal perception. Our target is to improve performance in tasks that rely on a fast and accurate understanding of movement dynamics in the environment. To provide an exemplary research and application scenario, we implemented a prototype of the concept in a driving simulation environment and used an interface capable of providing vibrotactile stimuli around the waist to communicate spatiotemporal information. The tactile stimuli dynamically encode the directions and temporal proximities of approaching objects. Temporal proximity is defined as inversely proportional to the time-to-contact and can be interpreted as a measure of imminent collision risk and temporal urgency. Results of a user study demonstrate performance benefits in terms of enhanced driving safety. This indicates a potential for improving people's capabilities in assessing relevant properties of dynamic environments in order to purposefully adapt their actions.
Citations: 6
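The abstract defines one relation precisely: temporal proximity is inversely proportional to time-to-contact (TTC = distance / closing speed). The sketch below works through that encoding; the intensity clamp, gain, and eight-tactor waist layout are assumptions, not the authors' parameters.

```python
import math

def time_to_contact(distance_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:
        return math.inf  # object not approaching: no urgency
    return distance_m / closing_speed_mps

def vibration_intensity(ttc_s: float, gain: float = 1.0) -> float:
    # Temporal proximity ~ 1/TTC, clamped to the actuator's 0..1 range.
    return min(1.0, gain / ttc_s) if ttc_s > 0 else 1.0

def tactor_index(bearing_deg: float, n_tactors: int = 8) -> int:
    # Map the object's direction to one of n tactors around the waist.
    return int(((bearing_deg % 360) / 360) * n_tactors) % n_tactors

# A car 20 m away closing at 10 m/s: TTC = 2 s -> intensity 0.5,
# delivered on the tactor facing the object's bearing.
ttc = time_to_contact(20.0, 10.0)
print(tactor_index(45.0), vibration_intensity(ttc))
```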
High-speed Projection Method of Swing Plane for Golf Training
Proceedings of the Augmented Humans International Conference Pub Date: 2020-03-16 DOI: 10.1145/3384657.3385330
Tomohiro Sueishi, Chikara Miyaji, Masataka Narumiya, Y. Yamakawa, M. Ishikawa
Abstract: Display technologies that show dynamic information such as club swing motion are useful for golf training, but conventional methods have a large latency from sensing the motion to displaying it to users. In this study, we propose an immediate, high-speed method for projecting swing-plane geometric information onto the ground during the swing. The method utilizes marker-based clubhead posture estimation and a mirror-based high-speed tracking system. The intersection line with the ground, which is the geometric information of the swing plane, is cast immediately by a high-speed projector. We experimentally confirmed that the projection latency is sufficiently low for swing motions, and demonstrated the temporal convergence and predictive display of the projected swing-plane line around the bottom of the swing motion.
Citations: 5
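The geometric step named in the abstract, intersecting the swing plane with the ground, is standard: fit a plane to clubhead positions and intersect it with z = 0. The sketch below shows that computation with NumPy; the SVD plane fit and coordinate conventions are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def swing_plane_ground_line(points: np.ndarray):
    """points: (n, 3) clubhead positions. Returns (p0, d): a point on the
    swing-plane/ground intersection line and its unit direction, or None
    if the fitted plane never meets the ground."""
    centroid = points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if abs(n[0]) < 1e-9 and abs(n[1]) < 1e-9:
        return None  # plane is horizontal, i.e. parallel to the ground
    # Line direction lies in both planes: normal x ground normal (0, 0, 1).
    d = np.cross(n, [0.0, 0.0, 1.0])
    d = d / np.linalg.norm(d)
    # A point on the line: satisfies n . p = n . centroid with z = 0.
    k = n @ centroid
    p0 = np.array([n[0], n[1], 0.0]) * (k / (n[0]**2 + n[1]**2))
    return p0, d

# Example: synthetic clubhead samples on a vertical plane y = 0.5.
t = np.linspace(0.0, np.pi, 10)
pts = np.stack([np.cos(t), np.full_like(t, 0.5), np.sin(t) + 0.2], axis=1)
print(swing_plane_ground_line(pts))  # line y = 0.5, z = 0, direction +-x
```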