Proceedings of the 10th Augmented Human International Conference 2019: Latest Publications

Build your Own!: Open-Source VR Shoes for Unity3D
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311852
J. Reinhardt, E. Lewandowski, Katrin Wolf
{"title":"Build your Own!: Open-Source VR Shoes for Unity3D","authors":"J. Reinhardt, E. Lewandowski, Katrin Wolf","doi":"10.1145/3311823.3311852","DOIUrl":"https://doi.org/10.1145/3311823.3311852","url":null,"abstract":"Hand-held controllers enable all kinds of interaction in Virtual Reality (VR), such as object manipulation as well as for locomotion. VR shoes allow using the hand exclusively for naturally manual tasks, such as object manipulation, while locomotion could be realized through feet input -- just like in the physical world. While hand-held VR controllers became standard input devices for consumer VR products, VR shoes are only barely available, and also research on that input modality remains open questions. We contribute here with open-source VR shoes and describe how to build and implement them as Unity3D input device. We hope to support researchers in VR research and practitioners in VR product design to increase usability and natural interaction in VR.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121263918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
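The paper's device is implemented as a Unity3D (C#) input component; as a language-neutral illustration only, the Python sketch below shows the kind of signal mapping such a device performs, from a stream of sole-pressure samples to a forward locomotion speed. The sample rate, threshold, and stride length are assumptions for this example, not values from the paper.

```python
# Illustrative sketch (not the authors' code): map sole-pressure samples from a
# VR shoe to a forward locomotion speed via step cadence. All constants are
# hypothetical; a real device would stream these samples from a microcontroller.
SAMPLE_RATE_HZ = 100
STEP_THRESHOLD = 0.6          # normalized pressure that counts as a step
STRIDE_M = 0.7                # assumed stride length per step

def count_steps(pressure: list[float]) -> int:
    """Count rising edges through the threshold (one edge per step)."""
    steps, above = 0, False
    for p in pressure:
        if p >= STEP_THRESHOLD and not above:
            steps += 1
        above = p >= STEP_THRESHOLD
    return steps

def forward_speed(pressure: list[float]) -> float:
    """Convert step cadence over the sample buffer into meters per second."""
    duration_s = len(pressure) / SAMPLE_RATE_HZ
    return count_steps(pressure) / duration_s * STRIDE_M

# Two simulated steps in one second of data -> 2 Hz cadence -> 1.4 m/s.
buf = [0.1] * 20 + [0.9] * 20 + [0.1] * 20 + [0.9] * 20 + [0.1] * 20
print(forward_speed(buf))
```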
Evaluation of a device reproducing the pseudo-force sensation caused by a clothespin
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311837
Masahiro Miyakami, Takuto Nakamura, H. Kajimoto
{"title":"Evaluation of a device reproducing the pseudo-force sensation caused by a clothespin","authors":"Masahiro Miyakami, Takuto Nakamura, H. Kajimoto","doi":"10.1145/3311823.3311837","DOIUrl":"https://doi.org/10.1145/3311823.3311837","url":null,"abstract":"A pseudo-force sensation can be elicited by pinching a finger with a clothespin. When the clothespin is used to pinch the finger from the palm side, a pseudo-force is felt in the direction towards the palm side, and when it is used to pinch the finger from the back side of the hand, the pseudo-force is felt in the extension direction. Here, as a first step to utilizing this phenomenon in human-machine interfaces, we developed a device that reproduces the clothespin phenomenon and confirmed the occurrence rate of the pseudo-force sensation.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"89 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131957734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Estimation of Fingertip Contact Force by Measuring Skin Deformation and Posture with Photo-reflective Sensors
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311824
Ayane Saito, W. Kuno, Wataru Kawai, N. Miyata, Yuta Sugiura
{"title":"Estimation of Fingertip Contact Force by Measuring Skin Deformation and Posture with Photo-reflective Sensors","authors":"Ayane Saito, W. Kuno, Wataru Kawai, N. Miyata, Yuta Sugiura","doi":"10.1145/3311823.3311824","DOIUrl":"https://doi.org/10.1145/3311823.3311824","url":null,"abstract":"A wearable device for measuring skin deformation of the fingertip---to obtain contact force when the finger touches an object---was prototyped and experimentally evaluated. The device is attached to the fingertip and uses multiple photo-reflective sensors (PRSs) to measures the distance from the PRSs to the side surface of the fingertip. The sensors do not touch the contact surface between the fingertip and the object; as a result, the contact force is obtained without changing the user's tactile sensation. In addition, the accuracy of estimated contact force was improved by determining the posture of the fingertip by measuring the distance between the fingertip and the contact surface. Based on the prototyped device, a system for estimating three-dimensional contact force on the fingertip was implemented.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133641673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
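The abstract describes estimating a three-dimensional contact force from PRS distance readings. Below is a minimal sketch of that estimation idea, assuming a calibration-style regression; the sensor count, the model choice, and the synthetic placeholder data are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: regress a 3D contact force vector (Fx, Fy, Fz) from normalized
# photo-reflective sensor (PRS) distance readings. In practice the training
# pairs would come from a calibration session against a reference force sensor.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_SENSORS = 8  # hypothetical number of PRSs around the fingertip

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(500, N_SENSORS))   # placeholder distances
y_train = rng.uniform(-1.0, 1.0, size=(500, 3))          # placeholder forces, N

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# At run time, one frame of sensor readings yields a force estimate.
frame = rng.uniform(0.0, 1.0, size=(1, N_SENSORS))
fx, fy, fz = model.predict(frame)[0]
print(f"estimated contact force: ({fx:.2f}, {fy:.2f}, {fz:.2f}) N")
```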
TongueBoard: An Oral Interface for Subtle Input
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311831
Richard Li, Jason Wu, Thad Starner
{"title":"TongueBoard: An Oral Interface for Subtle Input","authors":"Richard Li, Jason Wu, Thad Starner","doi":"10.1145/3311823.3311831","DOIUrl":"https://doi.org/10.1145/3311823.3311831","url":null,"abstract":"We present TongueBoard, a retainer form-factor device for recognizing non-vocalized speech. TongueBoard enables absolute position tracking of the tongue by placing capacitive touch sensors on the roof of the mouth. We collect a dataset of 21 common words from four user study participants (two native American English speakers and two non-native speakers with severe hearing loss). We train a classifier that is able to recognize the words with 91.01% accuracy for the native speakers and 77.76% accuracy for the non-native speakers in a user dependent, offline setting. The native English speakers then participate in a user study involving operating a calculator application with 15 non-vocalized words and two tongue gestures at a desktop and with a mobile phone while walking. TongueBoard consistently maintains an information transfer rate of 3.78 bits per decision (number of choices = 17, accuracy = 97.1%) and 2.18 bits per second across stationary and mobile contexts, which is comparable to our control conditions of mouse (desktop) and touchpad (mobile) input.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126164361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53
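The reported 3.78 bits per decision can be reproduced from the stated figures (17 choices, 97.1% accuracy), assuming the standard Wolpaw information-transfer-rate formula; the abstract does not name which formulation the authors used.

```python
# Wolpaw ITR: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits per decision.
from math import log2

def bits_per_decision(n_choices: int, accuracy: float) -> float:
    p, n = accuracy, n_choices
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

print(round(bits_per_decision(17, 0.971), 2))  # -> 3.78, matching the abstract
```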
Detection Threshold of the Height Difference between a Visual and Physical Step
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311857
Masatora Kobayashi, Yuki Kon, H. Kajimoto
{"title":"Detection Threshold of the Height Difference between a Visual and Physical Step","authors":"Masatora Kobayashi, Yuki Kon, H. Kajimoto","doi":"10.1145/3311823.3311857","DOIUrl":"https://doi.org/10.1145/3311823.3311857","url":null,"abstract":"In recent years, virtual reality (VR) applications that accompany real-space walking have become popular. In these applications, the expression of steps, such as a stairway, is a technical challenge. Preparing a real step with the same scale as that of the step in the VR space is one alternative; however, it is costly and impractical. We propose using a real step, but one physical step for the expression of various steps, by manipulating the viewpoint and foot position when ascending and descending real steps. The hypothesis is that the height of a step can be complemented to some extent visually, even if the heights of the real step and that in the VR space are different. In this paper, we first propose a viewpoint and foot position manipulation algorithm. T hen we measure the detection threshold of the height difference between the visual and physical step when ascending and descending the physical step using our manipulation algorithm. As a result, we found that the difference can be detected if there is a difference of approximately 1.0 cm between the VR space and the real space, irrespective of the height of the physical step.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124720421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
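The abstract proposes a viewpoint and foot-position manipulation algorithm without giving its details. As a hedged illustration, one simple way to realize the idea is to blend an extra vertical camera offset in over the ascent, so that climbing the physical step reads visually as climbing a taller (or shorter) virtual one. The linear blend and parameter values below are assumptions; per the paper's own result, the visual-physical difference would need to stay under roughly 1.0 cm to go unnoticed.

```python
# Illustrative redirection sketch: extra vertical camera offset during a step.
def viewpoint_offset(ascent_progress: float,
                     physical_step_m: float = 0.10,
                     virtual_step_m: float = 0.15) -> float:
    """Extra camera height, blended in as the foot ascends (progress 0..1),
    so a physical_step_m ascent is displayed as a virtual_step_m ascent."""
    return (virtual_step_m - physical_step_m) * ascent_progress

# Halfway up the step the camera is lifted an extra 2.5 cm (a 5 cm total
# difference, which the paper's threshold result says users would detect).
print(viewpoint_offset(0.5))  # 0.025
```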
Orochi: Investigating Requirements and Expectations for Multipurpose Daily Used Supernumerary Robotic Limbs
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311850
Mohammed Al Sada, Thomas Höglund, M. Khamis, Jaryd Urbani, T. Nakajima
{"title":"Orochi: Investigating Requirements and Expectations for Multipurpose Daily Used Supernumerary Robotic Limbs","authors":"Mohammed Al Sada, Thomas Höglund, M. Khamis, Jaryd Urbani, T. Nakajima","doi":"10.1145/3311823.3311850","DOIUrl":"https://doi.org/10.1145/3311823.3311850","url":null,"abstract":"Supernumerary robotic limbs (SRLs) present many opportunities for daily use. However, their obtrusiveness and limitations in interaction genericity hinder their daily use. To address challenges of daily use, we extracted three design considerations from previous literature and embodied them in a wearable we call Orochi. The considerations include the following: 1) multipurpose use, 2) wearability by context, and 3) unobtrusiveness in public. We implemented Orochi as a snake-shaped robot with 25 DoFs and two end effectors, and demonstrated several novel interactions enabled by its limber design. Using Orochi, we conducted hands-on focus groups to explore how multipurpose SRLs are used daily and we conducted a survey to explore how they are perceived when used in public. Participants approved Orochi's design and proposed different use cases and postures in which it could be worn. Orochi's unobtrusive design was generally well received, yet novel interactions raise several challenges for social acceptance. We discuss the significance of our results by highlighting future research opportunities based on the design, implementation, and evaluation of Orochi.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130795905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Automatic Smile and Frown Recognition with Kinetic Earables
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311869
Seungchul Lee, Chulhong Min, A. Montanari, Akhil Mathur, Youngjae Chang, Junehwa Song, F. Kawsar
{"title":"Automatic Smile and Frown Recognition with Kinetic Earables","authors":"Seungchul Lee, Chulhong Min, A. Montanari, Akhil Mathur, Youngjae Chang, Junehwa Song, F. Kawsar","doi":"10.1145/3311823.3311869","DOIUrl":"https://doi.org/10.1145/3311823.3311869","url":null,"abstract":"In this paper, we introduce inertial signals obtained from an earable placed in the ear canal as a new compelling sensing modality for recognising two key facial expressions: smile and frown. Borrowing principles from Facial Action Coding Systems, we first demonstrate that an inertial measurement unit of an earable can capture facial muscle deformation activated by a set of temporal micro-expressions. Building on these observations, we then present three different learning schemes - shallow models with statistical features, hidden Markov model, and deep neural networks to automatically recognise smile and frown expressions from inertial signals. The experimental results show that in controlled non-conversational settings, we can identify smile and frown with high accuracy (F1 score: 0.85).","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121817882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
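Of the three learning schemes the abstract lists, the first (a shallow model with statistical features) is easy to sketch. The windowing, feature set, and random placeholder data below are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: shallow classifier over statistical features of 6-axis IMU windows
# (3-axis accelerometer + 3-axis gyroscope), for neutral/smile/frown labels.
import numpy as np
from sklearn.svm import SVC

def window_features(imu_window: np.ndarray) -> np.ndarray:
    """Per-axis mean, std, min, max over one window of shape (samples, 6)."""
    return np.concatenate([imu_window.mean(0), imu_window.std(0),
                           imu_window.min(0), imu_window.max(0)])

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 50, 6))   # 200 placeholder windows, 50 samples
labels = rng.integers(0, 3, size=200)     # 0 = neutral, 1 = smile, 2 = frown

X = np.stack([window_features(w) for w in windows])
clf = SVC().fit(X, labels)
print(clf.predict(X[:5]))
```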
Hearing Is Believing: Synthesizing Spatial Audio from Everyday Objects to Users
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311872
J. Yang, Yves Frank, Gábor Sörös
{"title":"Hearing Is Believing: Synthesizing Spatial Audio from Everyday Objects to Users","authors":"J. Yang, Yves Frank, Gábor Sörös","doi":"10.1145/3311823.3311872","DOIUrl":"https://doi.org/10.1145/3311823.3311872","url":null,"abstract":"The ubiquity of wearable audio devices and the importance of the auditory sense imply great potential for audio augmented reality. In this work, we propose a concept and a prototype of synthesizing spatial sounds from arbitrary real objects to users in everyday interactions, whereby all sounds are rendered directly by the user's own ear pods instead of loudspeakers on the objects. The proposed system tracks the user and the objects in real time, creates a simplified model of the environment, and generates realistic 3D audio effects. We thoroughly evaluate the usability and the usefulness of such a system based on a user study with 21 participants. We also investigate how an acoustic environment model improves the sense of engagement of the rendered 3D sounds.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122899821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
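The abstract mentions generating realistic 3D audio effects from tracked object and user positions. As a toy illustration only, the sketch below renders a mono signal with the two simplest spatial cues, inverse-distance attenuation and a Woodworth-style interaural time difference; the paper's actual renderer also models the acoustic environment, which is not reproduced here.

```python
# Toy spatialization: attenuate by distance and delay the far ear slightly.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average adult head

def render(source: np.ndarray, head: np.ndarray, mono: np.ndarray, sr: int):
    """Return a stereo signal approximating sound arriving from `source`."""
    offset = source - head
    dist = np.linalg.norm(offset)
    gain = 1.0 / max(dist, 0.1)                        # inverse-distance law
    # Woodworth ITD from azimuth (x = right, z = forward); positive = right.
    azimuth = np.arctan2(offset[0], offset[2])
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (azimuth + np.sin(azimuth))
    shift = int(abs(itd) * sr)
    # Delay the ear farther from the source by zero-padding its channel start.
    left = np.pad(mono, (shift, 0)) if itd > 0 else np.pad(mono, (0, shift))
    right = np.pad(mono, (0, shift)) if itd > 0 else np.pad(mono, (shift, 0))
    return gain * np.stack([left, right], axis=1)

sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)    # 1 s test tone
stereo = render(np.array([1.0, 0.0, 2.0]), np.zeros(3), tone, sr)
print(stereo.shape)
```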
Social Activity Measurement by Counting Faces Captured in First-Person View Lifelogging Video
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311846
Akane Okuno, Y. Sumi
{"title":"Social Activity Measurement by Counting Faces Captured in First-Person View Lifelogging Video","authors":"Akane Okuno, Y. Sumi","doi":"10.1145/3311823.3311846","DOIUrl":"https://doi.org/10.1145/3311823.3311846","url":null,"abstract":"This paper proposes a method to measure the daily face-to-face social activity of a camera wearer by detecting faces captured in first-person view lifelogging videos. This study was inspired by pedometers used to estimate the amount of physical activity by counting the number of steps detected by accelerometers, which is effective for reflecting individual health and facilitating behavior change. We investigated whether we can estimate the amount of social activity by counting the number of faces captured in the first-person view videos like a pedometer. Our system counts not only the number of faces but also weighs in the numbers according to the size of the face (corresponding to a face's closeness) and the amount of time it was shown in the video. By doing so, we confirmed that we can measure the amount of social activity based on the quality of each interaction. For example, if we simply count the number of faces, we overestimate social activities while passing through a crowd of people. Our system, on the other hand, gives a higher score to a social actitivity even when speaking with a single person for a long time, which was also positively evaluated by experiment participants who viewed the lifelogging videos. Through evaluation experiments, many evaluators evaluated the social activity high when the camera wearer speaks. An interesting feature of the proposed system is that it can correctly evaluate such scenes higher as the camera wearer actively engages in conversations with others, even though the system does not measure the camera wearer's utterances. This is because the conversation partners tend to turn their faces towards to the camera wearer, and that increases the number of detected faces as a result. However, the present system fails to correctly estimate the depth of social activity compared to what the camera wearer recalls especially when the conversation partners are standing out of the camera's field of view. The paper briefly descibes how the results can be improved by widening the camera's field of view.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132232105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
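The abstract explains the scoring idea precisely enough to sketch: each detected face contributes in proportion to its apparent size (closeness) and its on-screen time. The exact weighting function below is an assumption, not a formula published in the paper.

```python
# Sketch: accumulate a "social pedometer" score from per-frame face detections.
def activity_score(detections, frame_area: float, frame_dt: float) -> float:
    """detections: iterable of per-frame lists of face bounding-box areas (px^2).
    Each face adds (relative size) * (frame duration), so close, long-visible
    faces dominate over brief glimpses of a crowd."""
    score = 0.0
    for faces in detections:            # one list of face areas per frame
        for face_area in faces:
            score += (face_area / frame_area) * frame_dt
    return score

# Two frames at 1 fps: one close face, then the same face plus a distant one.
print(activity_score([[9000.0], [9000.0, 400.0]], 640 * 480, 1.0))
```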
Augmented taste of wine by artificial climate room: Influence of temperature and humidity on taste evaluation
Proceedings of the 10th Augmented Human International Conference 2019. Pub Date: 2019-03-11. DOI: 10.1145/3311823.3311871
Toshiharu Igarashi, Tatsuya Minagawa, Yoichi Ochiai
{"title":"Augmented taste of wine by artificial climate room: Influence of temperature and humidity on taste evaluation","authors":"Toshiharu Igarashi, Tatsuya Minagawa, Yoichi Ochiai","doi":"10.1145/3311823.3311871","DOIUrl":"https://doi.org/10.1145/3311823.3311871","url":null,"abstract":"In previous research, there is a augmenting device limited taste influences due to limited contact with utensils. However, in the situation such as enjoying wine while talking with other people and matching cheese with wine, the solution that limits human behaviors must not have been acceptable. So, we focused on changing the temperature and humidity when drinking wine. To study the influence of temperature and humidity on the ingredients and subjective taste of wine, we conducted wine tasting experiments with 16 subjects using an artificial climate room. For the environmental settings, three conditions, i.e., a room temperature of 14°C and humidity of 35%, 17°C and 40% humidity, and 26°C and 40% humidity, were evaluated. In one of the two wines used in the experiment, significant differences in [Color intensity], [Smell development] and [Body] were detected among conditions (p < 0.05). We further investigated changes in the components of the two wines at different temperature conditions (14°C, 17°C, 23°C, and 26°C). Malic acid, protocatechuic acid, gallic acid, and epicatechin were related to temperature in the former wine only. In conclusion, we confirmed that we can change the taste evaluation of wine by adjusting temperature and humidity using the artificial climate room, without attaching the device to human beings themselves. This suggests the possibility to serve wine in a more optimal environment if we can identify the type of wine and person's preference.","PeriodicalId":433578,"journal":{"name":"Proceedings of the 10th Augmented Human International Conference 2019","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114483990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
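The abstract reports significant differences among the three conditions (p < 0.05) without naming the test; a one-way ANOVA is one plausible choice for such a comparison. The sketch below illustrates that kind of test on placeholder ratings, not the study's data.

```python
# Sketch: one-way ANOVA on a taste attribute across three climate conditions.
from scipy import stats

cond_14c_35h = [3.1, 2.8, 3.4, 3.0, 2.9]   # placeholder [Body] ratings, 14 °C / 35 %
cond_17c_40h = [3.6, 3.9, 3.5, 3.8, 3.7]   # placeholder ratings, 17 °C / 40 %
cond_26c_40h = [4.2, 4.0, 4.4, 4.1, 4.3]   # placeholder ratings, 26 °C / 40 %

f_stat, p_value = stats.f_oneway(cond_14c_35h, cond_17c_40h, cond_26c_40h)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> condition matters
```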