Effects of Wearing Knee-tightening Devices and Presenting Shear Forces to the Knee on Redirected Walking
Gaku Fukui, Takuto Nakamura, Keigo Matsumoto, Takuji Narumi, H. Kuzuoka
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3582720

Abstract: A natural and realistic walking experience in virtual environments is critical for increasing immersion in virtual reality. Redirected walking (RDW) was proposed to improve the walking experience, and many studies have sought to expand its detection threshold (DT). In this study, we investigated the effect on redirected walking of subjecting the knee to the hanger reflex, which creates the illusion of a strong force by applying a shear force to the skin. The results suggest that the hanger reflex at the knee affects gait; however, contrary to expectations, it did not affect the DT of RDW, while wearing the device on the knee by itself expanded the DT. Although the results are not what one would expect, the findings of this study provide an interesting perspective for more effective RDW in the future.
First Bite/Chew: distinguish typical allergic food by two IMUs
Juling Li, Xiongqi Wang, Junyu Chen, Thad Starner, G. Chernyshov, Jing Huang, Yifei Huang, K. Kunze, Qing Zhang
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3583708

Abstract: Eating or unknowingly ingesting allergenic foods can cause severe symptoms or even death in people with food allergies. Most current food-intake tracking methods are camera-based, on-body sensor-based, microphone-based, or self-reported. However, challenges remain in detecting allergenic foods and in making such systems socially acceptable, lightweight, easy to use, and inexpensive. Our approach leverages the first bite/chew and the corresponding hand movement as an indicator to distinguish typical types of allergenic food. Our initial feasibility study shows that our approach can distinguish six types of food with an accuracy of 89.7% over all four participants' mixed data. In particular, our method successfully detected and distinguished typical allergenic foods such as burgers (wheat), instant noodles (wheat), peanuts, egg fried rice, and edamame, which can be expected to contribute not only to personal use but also to medical use.
Hack the Room: Exploring the potential of an augmented reality game for teaching cyber security
Mikko Korkiakoski, Anssi Antila, Jouni Annamaa, Saeid Sheikhi, Paula Alavesa, Panos Kostakos
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3583955

Abstract: There is a need to create new educational paths in cyber security for beginners as well as experienced students. Recently, ethical hacking gamification platforms such as Capture the Flag (CTF) have grown in popularity, providing newcomers with entertaining and engaging material that encourages the development of offensive and defensive cyber security skills. However, augmented reality (AR) applications for developing cyber security skills remain a mostly untapped resource. The purpose of this work-in-progress study is to investigate whether CTF games in AR can improve learning in information security and increase security situational awareness (SA). In particular, we investigate how AR gamification influences training and the overall experience in the context of ethical hacking tasks. To do this, we developed a Unity-based ethical hacking game in which participants complete CTF-style objectives. The game requires the player to execute basic Linux terminal commands, such as listing files in folders and reading data stored on virtual machines. Each gameplay session lasts up to twenty minutes and consists of three objectives. The game can be altered or made more challenging by modifying the virtual machines. In a pilot, our game was tested with six individuals separated into two groups: an expert group (N=3) and a novice group (N=3). The questionnaire given to the expert group examined their SA during the game, whereas the questionnaire administered to the novice group measured learning and recall of specific actions they performed in the game. In this paper, we discuss our observations from the pilot.
Coldness Presentation to Ventral Forearm using Electrical Stimulation with Elastic Gel and Anesthetic Cream
Taiga Saito, Takumi Hamazaki, Seitaro Kaneko, H. Kajimoto
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3582713

Abstract: To augment and enhance the VR experience, various devices have been proposed to provide thermal sensations. In particular, Peltier devices are commonly used to induce cold sensations, but they are unsuitable for long-term use due to their high energy consumption. This study investigates the use of electrical stimulation to generate a thermal sensation in the arm for future wearable applications. Owing to its small size and low power requirements, electrical stimulation is unlikely to interfere with body movement or disrupt immersion. Furthermore, providing a thermal sensation to the arm is expected to enhance immersion in VR content without interfering with hand movements, and tactile sensations can also be presented by electrical stimulation. However, electrical stimulation of the arm normally cannot provide a stable temperature sensation because the pain threshold is too close to the tactile and temperature thresholds. We tackled this issue in two ways: by applying a gel layer to the arm to suppress the pain sensation by diffusing the current, and by using a local anesthetic cream. As a result, we found that electrical stimulation of the arm generated a cold sensation at several points out of 61 electrodes in both cases. The evaluation experiments revealed that stimulation pulse width and polarity had little effect, although anodic stimulation through the gel tended to generate a cold sensation at relatively high intensity, and cathodic stimulation through either the gel or the local anesthetic cream tended to produce a cold sensation over a wider area.
TOMURA: A Mountable Hand-shaped Interface for Versatile Interactions
Shigeo Yoshida, Tomoya Sasaki, Zendai Kashino, M. Inami
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3582719

Abstract: We introduce TOMURA, a mountable hand-shaped interface for conducting a wide range of interactions by leveraging the versatility of the human hand. TOMURA can be mounted in any number of locations and orientations on the body and in the environment. By combining freedom in mounting with the versatile expressivity of the human hand, TOMURA enables interactions that integrate elements of shape-changing interfaces and wearable robots. For example, TOMURA can be worn on the wrist to assist in grasping and to enable haptic interactions with a remote operator. Placed on a desk, it can be used as a physical avatar representing a remote user's hand during a video meeting. We illustrated TOMURA's design space and demonstrated the feasibility of our concept by implementing a prototype and employing it in several application scenarios. We then discussed the possibilities and limitations of the prototype based on user feedback.
{"title":"Brain-Computer Interface using Directional Auditory Perception","authors":"Yuto Koike, Yuichi Hiroi, Yuta Itoh, J. Rekimoto","doi":"10.1145/3582700.3583713","DOIUrl":"https://doi.org/10.1145/3582700.3583713","url":null,"abstract":"We investigate the potential of brain-computer interface (BCI) using electroencephalogram (EEG) induced by listening (or recalling) auditory stimuli of different directions. In the initial attempt, we apply a time series classification model based on deep learning to the EEG to demonstrate whether each EEG can be classified by recognizing binary (left or right) auditory directions. The results showed high classification accuracy when trained and tested on the same users. Discussion is provided to further explore this topic.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133706261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of the Number of Bodies on Ownership of Multiple Bodies","authors":"Ryota Kondo, M. Sugimoto","doi":"10.1145/3582700.3583704","DOIUrl":"https://doi.org/10.1145/3582700.3583704","url":null,"abstract":"Virtually reality can induce body ownership (Sense of being one’s own body) of a not innate body or multiple bodies. However, the relationship between the number of bodies and body ownership has not been clarified. In a study that investigated body ownership for one, two, and four virtual bodies, the greater the number of bodies, the closer the body ownership tended to approach the strength of ownership for a single body. Therefore, in the present study, we investigated whether increasing the number of bodies strengthens body ownership for multiple bodies. In the experiment, multiple virtual bodies that moved in synchronization with the participant’s movement were lined up in a single file line, and the participant observed the multiple virtual bodies through a head-mounted display from the position of the hindmost virtual body. We measured body ownership and drifts in self-location. Our results showed that contrary to expectations, the greater the number of bodies, the weaker body ownership.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114230484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Effect of Transfer Learning on Facial Expression Recognition using Photo-Reflective Sensors embedded into a Head-Mounted Display","authors":"Fumihiko Nakamura, M. Sugimoto","doi":"10.1145/3582700.3583705","DOIUrl":"https://doi.org/10.1145/3582700.3583705","url":null,"abstract":"As one of the techniques to recognize head-mounted display (HMD) user’s facial expressions, the photo-reflective sensor (PRS) has been employed. Since the classification performance of PRS-based method is affected by rewearing an HMD and difference in facial geometry for each user, the user have to perform dataset collection for each wearing of an HMD to build a facial expression classifier. To tackle this issue, we investigate how transfer learning improve within-user and cross-user accuracy and reduce training data in the PRS-based facial expression recognition. We collected a dataset of five facial expressions (Neutral, Smile, Angry, Surprised, Sad) when participants wore the PRS-embedded HMD five times. Using the dataset, we evaluated facial expression classification accuracy using a neural network with/without fine tuning. Our result showed fine tuning improved the within-user and cross-user facial expression classification accuracy compared with non-fine-tuned classifier. Also, applying fine tuning to the classifier trained with the other participant dataset achieved higher classification accuracy than the non-fine-tuned classifier.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134309867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration of Sonification Feedback for People with Visual Impairment to Use Ski Simulator
Y. Miura, Erwin Wu, Masaki Kuribayashi, H. Koike, S. Morishima
Proceedings of the Augmented Humans International Conference 2023. DOI: 10.1145/3582700.3582702

Abstract: Training opportunities for visually impaired (VI) skiers are limited because they essentially need sighted guides who direct them with their voices. This study investigates an auditory feedback system that enables VI skiers to train alone on a ski simulator. Based on interviews with VI skiers and their guides, we designed three types of sounds: 1) a single sound (ATS: Advance Turn Sound) that conveys information about turns; 2) a continuous sound (CES: Continuous Error Sound) emitted according to the difference between the user's future position and the position he/she should progress to; and 3) a single sound (Gate-Passed Sound) emitted when the user passes through a gate. We conducted an evaluation experiment with four blind skiers and three sighted guides. The results showed that three of the four skiers performed better under the conditions in which the ATS and Gate-Passed Sound were emitted than under the condition in which a human guide gave calls. This suggests that a sonification-based method such as ATS is effective for ski training on a ski simulator for VI skiers.
{"title":"Tablet Cutting Board: Tablet-based Knife-control Support System for Cookery Beginners","authors":"Hatsune Masuda, Kunihiro Kato, Kaori Ikematsu, Yoshinari Takegawa, Keiji Hirata","doi":"10.1145/3582700.3582708","DOIUrl":"https://doi.org/10.1145/3582700.3582708","url":null,"abstract":"It is difficult for novice cooks to cut food evenly and quickly with a knife. In this study, we propose a learning support system for kitchen knife skills using a tablet terminal. The system aims to help novice cooks cut food evenly and vertically. The system has a function to display guide lines for cutting food, and a function to detect the movement of the knife using a touch panel and provide real-time feedback on whether the width of the cut food is appropriate or not. In addition, an inertial sensor was attached to the knife to estimate the tilt of the knife based on acceleration data, and a function was implemented to provide feedback on whether the knife is cutting at an appropriate angle or not. A prototype system was implemented, and the accuracy of estimating the width of the food and the accuracy of tilt detection were measured when the prototype system was used to cut the food. It was found that the prototype system was able to estimate the width of the food with an average error of 0.5 mm and the tilt of the knife with an average error of 0.5 degrees.","PeriodicalId":115371,"journal":{"name":"Proceedings of the Augmented Humans International Conference 2023","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122931324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}