Proceedings of the 10th Augmented Human International Conference 2019: Latest Publications

Let Your World Open: CAVE-based Visualization Methods of Public Virtual Reality towards a Shareable VR Experience
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311860
Akira Ishii, M. Tsuruta, Ippei Suzuki, Shuta Nakamae, Junichi Suzuki, Yoichi Ochiai
{"title":"Let Your World Open: CAVE-based Visualization Methods of Public Virtual Reality towards a Shareable VR Experience","authors":"Akira Ishii, M. Tsuruta, Ippei Suzuki, Shuta Nakamae, Junichi Suzuki, Yoichi Ochiai","doi":"10.1145/3311823.3311860","DOIUrl":"https://doi.org/10.1145/3311823.3311860","url":null,"abstract":"Virtual reality (VR) games are currently becoming part of the public-space entertainment (e.g., VR amusement parks). Therefore, VR games should be attractive for players, as well as for bystanders. Current VR systems are still mostly focused on enhancing the experience of the head-mounted display (HMD) users; thus, bystanders without an HMD cannot enjoy the experience together with the HMD users. We propose the \"ReverseCAVE\": a proof-of-concept prototype for public VR visualization using CAVE-based projection with translucent screens for bystanders toward a shareable VR experience. The screens surround the HMD user and the VR environment is projected onto the screens. This enables the bystanders to see the HMD user and the VR environment simultaneously. We designed and implemented the ReverseCAVE, and evaluated it in terms of the degree of attention, attractiveness, enjoyment, and shareability, assuming that it is used in a public space. Thus, we can make the VR world more accessible and enhance the public VR experience of the bystanders via the ReverseCAVE.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114530293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
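The abstract describes projecting the VR environment onto screens surrounding the HMD user but gives no rendering details. A standard way to drive CAVE-style screens is a generalized off-axis perspective projection (Kooima's method), which builds a frustum from the viewer's position and a screen's corner positions. The sketch below is a minimal numpy version; the function names and the 2 m screen geometry are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def frustum(l, r, b, t, n, f):
    """OpenGL-style off-axis projection matrix."""
    return np.array([
        [2*n/(r-l), 0,          (r+l)/(r-l),  0],
        [0,         2*n/(t-b),  (t+b)/(t-b),  0],
        [0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)],
        [0,         0,         -1,            0],
    ])

def screen_projection(pa, pb, pc, pe, near=0.1, far=100.0):
    """Off-axis projection for one planar screen (after Kooima 2008).

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world).
    pe: tracked viewer eye position (world).
    """
    vr = normalize(pb - pa)            # screen right axis
    vu = normalize(pc - pa)            # screen up axis
    vn = normalize(np.cross(vr, vu))   # screen normal, toward the viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = frustum(l, r, b, t, near, far)
    M = np.eye(4); M[:3, :3] = np.stack([vr, vu, vn])  # world -> screen basis
    T = np.eye(4); T[:3, 3] = -pe                      # move eye to origin
    return P @ M @ T

# Assumed example: one of four translucent 2 m x 2 m screens, 1 m in front
# of a viewer standing at eye height 1 m.
pa, pb, pc = np.array([-1., 0., -1.]), np.array([1., 0., -1.]), np.array([-1., 2., -1.])
print(screen_projection(pa, pb, pc, pe=np.array([0., 1., 0.])))
```

One such matrix per screen, recomputed as the tracked position changes, yields a perspective-correct image for each of the surrounding projection surfaces.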
BitoBody
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311855
Erwin Wu, Mitski Piekenbrock, Hideki Koike
{"title":"BitoBody","authors":"Erwin Wu, Mistki Piekenbrock, Hideki Koike","doi":"10.1145/3311823.3311855","DOIUrl":"https://doi.org/10.1145/3311823.3311855","url":null,"abstract":"In this research, we propose a novel human body contact detection and projection system with dynamic mesh collider. We use motion capture camera and generated human 3D models to detect the contact between user's bodies. Since it is difficult to update human mesh collider every frame, a special algorithm that divides body meshes into small pieces of polygons to do collision detection is developed and detected hit information will be dynamically projected according to its magnitude of damage. The maximum deviation of damage projection is about 7.9cm under a 240-fps optitrack motion capture system and 12.0cm under a 30-fps Kinect camera. The proposed system can be used in various sports where bodies come in contact and it allows the audience and players to understand the context easier.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117279563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
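The abstract outlines, but does not spell out, the group-wise collision test: rather than rebuilding a whole-body mesh collider each frame, the mesh is split into small groups of polygons that can be tested cheaply. Below is a minimal sketch of that idea using one axis-aligned bounding box per triangle group; the group size and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def group_bounds(vertices, triangles, group_size=32):
    """Split a triangle mesh into small groups and compute an AABB per group.

    vertices: (V, 3) float array, updated from motion capture each frame.
    triangles: (T, 3) int array of vertex indices.
    """
    boxes = []
    for start in range(0, len(triangles), group_size):
        tris = triangles[start:start + group_size]
        pts = vertices[tris.ravel()]
        boxes.append((pts.min(axis=0), pts.max(axis=0)))
    return boxes

def aabb_overlap(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amax >= bmin) and np.all(bmax >= amin))

def detect_contacts(body_a, body_b, group_size=32):
    """Return group-index pairs whose bounding boxes intersect.

    body_a / body_b: (vertices, triangles) tuples for the two tracked players.
    A real system would refine each candidate pair with triangle-level tests
    and estimate hit magnitude, e.g. from relative velocity at the contact.
    """
    boxes_a = group_bounds(*body_a, group_size)
    boxes_b = group_bounds(*body_b, group_size)
    return [(i, j) for i, ba in enumerate(boxes_a)
                   for j, bb in enumerate(boxes_b)
                   if aabb_overlap(ba, bb)]
```

Testing a few hundred boxes per body is far cheaper than per-frame collider reconstruction, which is presumably why the paper's subdivision approach works at motion-capture frame rates.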
Investigating Universal Appliance Control through Wearable Augmented Reality
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311853
Vincent Becker, Felix Rauchenstein, Gábor Sörös
{"title":"Investigating Universal Appliance Control through Wearable Augmented Reality","authors":"Vincent Becker, Felix Rauchenstein, Gábor Sörös","doi":"10.1145/3311823.3311853","DOIUrl":"https://doi.org/10.1145/3311823.3311853","url":null,"abstract":"The number of interconnected devices around us is constantly growing. However, it may become challenging to control all these devices when control interfaces are distributed over mechanical elements, apps, and configuration webpages. We investigate interaction methods for smart devices in augmented reality. The physical objects are augmented with interaction widgets, which are generated on demand and represent the connected devices along with their adjustable parameters. For example, a loudspeaker can be overlaid with a controller widget for its volume. We explore three ways of manipulating the virtual widgets: (a) in-air finger pinching and sliding, (b) whole arm gestures rotating and waving, (c) incorporating physical objects in the surrounding and mapping their movements to the interaction primitives. We compare these methods in a user study with 25 participants and find significant differences in the preference of the users, the speed of executing commands, and the granularity of the type of control.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"515 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116210188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
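One concrete reading of "interaction widgets ... represent the connected devices along with their adjustable parameters" is a widget as a bounded parameter plus a handler that maps a gesture delta onto it. The sketch below is hypothetical: the device names, ranges, and the pinch-slide mapping are illustrative assumptions, not the paper's system.

```python
from dataclasses import dataclass

@dataclass
class ParameterWidget:
    """A virtual controller overlaid on a recognized physical device."""
    device: str
    parameter: str
    value: float
    lo: float
    hi: float

    def apply_slide(self, delta: float) -> float:
        """Map a normalized gesture delta (-1..1) onto the parameter range."""
        span = self.hi - self.lo
        self.value = min(self.hi, max(self.lo, self.value + delta * span))
        return self.value

# Widgets generated on demand when a device is recognized in the AR view.
widgets = {
    "loudspeaker/volume": ParameterWidget("loudspeaker", "volume", 0.4, 0.0, 1.0),
    "lamp/brightness":    ParameterWidget("lamp", "brightness", 70, 0, 100),
}

# e.g. an in-air pinch-and-slide covering +10% of the range:
print(widgets["loudspeaker/volume"].apply_slide(0.10))
```

The same widget state could be driven by any of the paper's three input methods (pinch-slide, arm gestures, or tracked physical proxies), which only differ in how the delta is produced.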
Design of Enhanced Flashcards for Second Language Vocabulary Learning with Emotional Binaural Narration
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311867
S. Fukushima
{"title":"Design of Enhanced Flashcards for Second Language Vocabulary Learning with Emotional Binaural Narration","authors":"S. Fukushima","doi":"10.1145/3311823.3311867","DOIUrl":"https://doi.org/10.1145/3311823.3311867","url":null,"abstract":"In this paper, we report on the design of a flashcard application with which learners experience the meaning of written words with emotional binaural voice narrations to enhance second language vocabulary learning. Typically, voice used in English vocabulary learning is recorded by a native speaker with no accent, and it aims for accurate pronunciation and clarity. However, the voice can also be flat and monotonous, and it can be difficult for learners to retain the new vocabulary in the semantic memory. Enhancing textual flashcards with emotional narration in the learner's native language helps the retention of new second language vocabulary items in the episodic memory instead of the semantic memory. Further, greater emotionality in the narration reinforces the retention of episodic memory.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122021619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MusiArm
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311873
Kaito Hatakeyama, M. Y. Saraiji, K. Minamizawa
{"title":"MusiArm","authors":"Kaito Hatakeyama, M. Y. Saraiji, K. Minamizawa","doi":"10.1145/3311823.3311873","DOIUrl":"https://doi.org/10.1145/3311823.3311873","url":null,"abstract":"The emergence of prosthetic limbs where solely focused on substituting the missing limb with an artificial one, in order for the handicap people to manage their daily life independently. Past research on prosthetic hands has mainly focused on prosthesis' function and performance. Few proposals focused on the entertainment aspect of prosthetic hands. In this research, we considered the defective part as a potential margin for freely designing our bodies, and coming up with new use cases beyond the original function of the limb. Thus, we are not aiming to create anthropomorphic designs or functions of the limbs. By fusing the prosthetic hands and musical instruments, we propose a new prosthetic hand called \"MusiArm\" that extends the body part's function to become an instrument. MusiArm concept was developed through the dialogue between the handicapped people, engineers and prosthetists using the physical characteristics of the handicapped people as a \"new value\" that only the handicapped person can possess. We asked handicapped people who cannot play musical instruments, as well as people who do not usually play instruments, to use prototypes we made. As a result of the usability tests, using MusiArm, we made a part of the body function as a musical instrument, drawing out the unique expression methods of individuals, and enjoying the performance and clarify the possibility of showing interests.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122839215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Brain Computer Interface for Neuro-rehabilitation With Deep Learning Classification and Virtual Reality Feedback
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311864
Tamás Karácsony, J. P. Hansen, H. Iversen, S. Puthusserypady
{"title":"Brain Computer Interface for Neuro-rehabilitation With Deep Learning Classification and Virtual Reality Feedback","authors":"Tamás Karácsony, J. P. Hansen, H. Iversen, S. Puthusserypady","doi":"10.1145/3311823.3311864","DOIUrl":"https://doi.org/10.1145/3311823.3311864","url":null,"abstract":"Though Motor Imagery (MI) stroke rehabilitation effectively promotes neural reorganization, current therapeutic methods are immeasurable and their repetitiveness can be demotivating. In this work, a real-time electroencephalogram (EEG) based MI-BCI (Brain Computer Interface) system with a virtual reality (VR) game as a motivational feedback has been developed for stroke rehabilitation. If the subject successfully hits one of the targets, it explodes and thus providing feedback on a successfully imagined and virtually executed movement of hands or feet. Novel classification algorithms with deep learning (DL) and convolutional neural network (CNN) architecture with a unique trial onset detection technique was used. Our classifiers performed better than the previous architectures on datasets from PhysioNet offline database. It provided fine classification in the real-time game setting using a 0.5 second 16 channel input for the CNN architectures. Ten participants reported the training to be interesting, fun and immersive. \"It is a bit weird, because it feels like it would be my hands\", was one of the comments from a test person. The VR system induced a slight discomfort and a moderate effort for MI activations was reported. We conclude that MI-BCI-VR systems with classifiers based on DL for real-time game applications should be considered for motivating MI stroke rehabilitation.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123957810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
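The abstract fixes the real-time input shape (a 0.5 s window of 16-channel EEG) but not the network itself. Below is a minimal PyTorch sketch of a CNN over such windows, in the spirit of compact EEG classifiers with temporal then spatial convolutions; the layer sizes, the assumed 160 Hz PhysioNet sampling rate (giving 80 samples per window), and the four MI classes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MICnn(nn.Module):
    """Toy CNN for motor-imagery windows shaped (batch, 1, channels, samples)."""
    def __init__(self, n_channels=16, n_samples=80, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 11), padding=(0, 5)),  # temporal filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),         # spatial filters
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),                                   # temporal pooling
            nn.Dropout(0.5),
        )
        self.classify = nn.Linear(32 * (n_samples // 4), n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classify(x.flatten(start_dim=1))

# 0.5 s of 16-channel EEG at an assumed 160 Hz -> 80 samples per window.
logits = MICnn()(torch.randn(8, 1, 16, 80))
print(logits.shape)  # torch.Size([8, 4])
```

A window this short is what makes the game feedback feel immediate: each classification decision consumes only half a second of signal, so target hits can be triggered with low latency.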
Augmenting Human With a Tail
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311847
Haoran Xie, Kento Mitsuhashi, T. Torii
{"title":"Augmenting Human With a Tail","authors":"Haoran Xie, Kento Mitsuhashi, T. Torii","doi":"10.1145/3311823.3311847","DOIUrl":"https://doi.org/10.1145/3311823.3311847","url":null,"abstract":"Human-augmentation devices have been extensively proposed and developed recently and are useful in improving our work efficiency and our quality of life. Inspired by animal tails, this study aims to propose a wearable and functional tail device that combines physical and emotional-augmentation modes. In the physical-augmentation mode, the proposed device can be transformed into a consolidated state to support a user's weight, similar to a kangaroo's tail. In the emotional-augmentation mode, the proposed device can help users express their emotions, which are realized by different tail-motion patterns. For our initial prototype, we developed technical features that can support the weight of an adult, and we performed a perceptional investigation of the relations between the tail movements and the corresponding perceptual impressions. Using the animal-tail analog, the proposed device may be able to help the human user in both physical and emotional ways.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125500927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Guided Walking to Direct Pedestrians toward the Same Destination
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311835
Nobuhito Sakamoto, M. Furukawa, M. Kurokawa, T. Maeda
{"title":"Guided Walking to Direct Pedestrians toward the Same Destination","authors":"Nobuhito Sakamoto, M. Furukawa, M. Kurokawa, T. Maeda","doi":"10.1145/3311823.3311835","DOIUrl":"https://doi.org/10.1145/3311823.3311835","url":null,"abstract":"In this paper, we propose a floor covering-type walking guidance sheet to direct pedestrians without requiring attachment/detachment. Polarity is reversed with respect to the direction of walking in the guidance sheet such that a pedestrian travelling in any direction can be guided toward a given point. In experiments, our system successfully guided a pedestrian along the same direction regardless of the direction of travel using the walking guidance sheet. The induction effect of the proposed method was also evaluated.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133208655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
CapMat
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311874
Denys J. C. Matthies, Don Samitha Elvitigala, Sachith Muthukumarana, Jochen Huber, Suranga Nanayakkara
{"title":"CapMat","authors":"Denys J. C. Matthies, Don Samitha Elvitigala, Sachith Muthukumarana, Jochen Huber, Suranga Nanayakkara","doi":"10.1145/3311823.3311874","DOIUrl":"https://doi.org/10.1145/3311823.3311874","url":null,"abstract":"We present CapMat, a smart foot mat that enables user identification, supporting applications such as multi-layer authentication. CapMat leverages a large form factor capacitive sensor to capture shoe sole images. These images vary based on shoe form factors, the individual wear, and the user's weight. In a preliminary evaluation, we distinguished 15 users with an accuracy of up to 100%.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114184398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
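The abstract does not name the classifier used to tell the 15 users apart. A natural baseline for low-resolution capacitive sole images is to flatten each frame and train a standard classifier; the scikit-learn sketch below uses synthetic placeholder data, and the 16x32 sensor resolution, frame counts, and SVM choice are all assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: 40 capacitive frames per user for 15 users, each frame an
# assumed 16x32 grid of capacitance values. A per-user offset stands in for
# the shoe-sole "signature" (sole pattern, wear, weight) the paper describes.
n_users, frames, h, w = 15, 40, 16, 32
X = rng.normal(size=(n_users * frames, h * w))
X += np.repeat(rng.normal(scale=3.0, size=(n_users, h * w)), frames, axis=0)
y = np.repeat(np.arange(n_users), frames)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"identification accuracy: {clf.score(X_test, y_test):.2f}")
```

For a mat used in multi-layer authentication, the interesting evaluation is exactly the one the paper reports: how separable the per-user sole images remain across sessions.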
SubMe
Proceedings of the 10th Augmented Human International Conference 2019 Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311865
Katsuya Fujii, Junichi Rekimoto
{"title":"SubMe","authors":"Katsuya Fujii, Junichi Rekimoto","doi":"10.1145/3311823.3311865","DOIUrl":"https://doi.org/10.1145/3311823.3311865","url":null,"abstract":"Owing to the improvement in accuracy of eye tracking devices, eye gaze movements occurring while conducting tasks are now a part of physical activities that can be monitored just like other life-logging data. Analyzing eye gaze movement data to predict reading comprehension has been widely explored and researchers have proven the potential of utilizing computers to estimate the skills and expertise level of users in various categories, including language skills. However, though many researchers have worked specifically on written texts to improve the reading skills of users, little research has been conducted to analyze eye gaze movements in correlation to watching movies, a medium which is known to be a popular and successful method of studying English as it includes reading, listening, and even speaking, the later of which is attributed to language shadowing. In this research, we focus on movies with subtitles due to the fact that they are very useful in order to grasp what is occurring on screen, and therefore, overall understanding of the content. We realized that the viewers' eye gaze movements are distinct depending on their English level. After retrieving the viewers' eye gaze movement data, we implemented a machine learning algorithm to detect their English levels and created a smart subtitle system called SubMe. The goal of this research is to estimate English levels through tracking eye movement. This was conducted by allowing the users to view a movie with subtitles. Our aim is create a system that can give the user certain feedback that can help improve their English studying methods.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114876316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
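The abstract says English level is detected from gaze while watching subtitled movies but does not list the features. A plausible baseline is to summarize, per viewing session, how gaze divides between the subtitle band and the rest of the scene; lower-proficiency viewers would presumably dwell longer on subtitles. The feature set and the screen split below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gaze_features(gaze_xy, timestamps, subtitle_top=0.8):
    """Summarize one viewing session of normalized gaze points (0..1).

    Returns the share of time spent in the subtitle band, the number of
    switches between subtitles and scene, and the mean saccade length --
    quantities one could feed to any standard classifier of English level.
    """
    in_subs = gaze_xy[:, 1] >= subtitle_top          # bottom band = subtitles
    dt = np.diff(timestamps, append=timestamps[-1])  # duration of each sample
    dwell_share = float(dt[in_subs].sum() / dt.sum())
    switches = int(np.count_nonzero(np.diff(in_subs.astype(int))))
    saccades = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)
    return np.array([dwell_share, switches, saccades.mean()])

# e.g. 30 s of gaze sampled at 60 Hz (random placeholder data):
rng = np.random.default_rng(1)
xy = rng.uniform(size=(1800, 2))
t = np.arange(1800) / 60.0
print(gaze_features(xy, t))
```

Session-level feature vectors like this, labeled with each viewer's known proficiency, would be the training input for the kind of classifier the abstract describes.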