{"title":"Mindful Waters: An Interactive Digital Aquarium for People with Dementia","authors":"Maarten Hundscheid, Linghan Zhang, A. Tummers-Heemels, W. Ijsselsteijn","doi":"10.3390/mti8080065","DOIUrl":"https://doi.org/10.3390/mti8080065","url":null,"abstract":"Dementia can be associated with social withdrawal, mood changes, and decreased interaction. Animal-assisted therapies and robotic companions have shown potential in enhancing well-being but come with limitations like high maintenance costs and complexity. This research presents an interactive digital aquarium called Mindful Waters, which was developed to promote social interaction and engagement among People with Dementia. The pilot study involved interactive sessions at a community center and a care facility, with situated observations, video and audio recordings, and interviews to assess user engagement motivation, behavior, and user experience with Mindful Waters. The study revealed that Mindful Waters functioned well with People with Dementia and stimulated conversational topics about aquariums through engagement. User feedback was generally positive, with participants appreciating the visual appeal and simplicity. However, some participants with advanced dementia found it challenging to interact due to their mobility limitations, cognitive impairments, and the limited duration of interaction sessions. 
The overall results suggest that Mindful Waters can benefit dementia care; further research is needed to optimize its design and functionality for long-term placement in care facilities.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141801044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serious Games for Cognitive Rehabilitation in Older Adults: A Conceptual Framework","authors":"Diego E. Guzmán, C. Rengifo, C. García-Cena","doi":"10.3390/mti8080064","DOIUrl":"https://doi.org/10.3390/mti8080064","url":null,"abstract":"This paper presents a conceptual framework for the development of serious games aimed at cognitive rehabilitation in older adults. Following Jabareen’s methodology, a literature review was conducted to identify concepts and theories that are relevant in this field. The resulting framework comprises the use of virtual reality, integration of physical activity, incorporation of social interaction features, adaptability of difficulty levels, and customization of game content. The interconnections between these concepts and underlying cognitive theories, such as the cognitive reserve hypothesis and the scaffolding theory of aging and cognition, are highlighted. As we are in the early stages of our research, our goal is to introduce and test novel interpretations of current knowledge within this conceptual framework. Additionally, the practical implications of the conceptual framework are discussed, including its strengths and limitations, as well as its relevance for future research and clinical practice in the field of cognitive rehabilitation. 
It is hoped that this framework will provide a guide for the design and implementation of effective interventions to improve cognitive health and well-being in the older adult population.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"97 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141812487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Dictionaries for Traditional Craft Education","authors":"X. Zabulis, Nikolaos Partarakis, Valentina Bartalesi, Nicolò Pratelli, C. Meghini, Arnaud Dubois, Ines Moreno, S. Manitsaris","doi":"10.3390/mti8070063","DOIUrl":"https://doi.org/10.3390/mti8070063","url":null,"abstract":"We address the problem of systematizing the authoring of digital dictionaries for craft education from ethnographic studies and recordings. First, we present guidelines for the collection of ethnographic data using digital audio and video and identify terms that are central in the description of crafting actions, products, tools, and materials. Second, we present a classification scheme for craft terms and a way to semantically annotate them, using a multilingual and hierarchical thesaurus, which provides term definitions and a semantic hierarchy of these terms. Third, we link ethnographic resources and open-access data to the identified terms using an online platform for the representation of traditional crafts, associating their definition with illustrations, examples of use, and 3D models. We validate the efficacy of the approach by creating multimedia vocabularies for an online eLearning platform for introductory courses to nine traditional crafts.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":" 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141824345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Hand Motion Generation for VR Interactions Using a Haptic Data Glove","authors":"Sang-Woo Seo, Woo-Sug Jung, Yejin Kim","doi":"10.3390/mti8070062","DOIUrl":"https://doi.org/10.3390/mti8070062","url":null,"abstract":"Recently, VR-based training applications have become popular and promising, as they can simulate real-world situations in a safe, repeatable, and cost-effective way. For immersive simulations, various input devices have been designed and proposed to increase the effectiveness of training. In this study, we developed a novel device that generates 3D hand motion data and provides haptic force feedback for VR interactions. The proposed device can track 3D hand positions using a combination of the global position estimation of ultrasonic sensors and the hand pose estimation of inertial sensors in real time. For haptic feedback, shape–memory alloy (SMA) actuators were designed to provide kinesthetic forces and an efficient power control without an overheat problem. Our device improves upon the shortcomings of existing commercial devices in tracking and haptic capabilities such that it can track global 3D positions and estimate hand poses in a VR space without using an external suit or tracker. For better flexibility in handling and feeling physical objects compared to exoskeleton-based devices, we introduced an SMA-based actuator to control haptic forces. 
Overall, our device was designed and implemented as a lighter and less bulky glove which provides comparable accuracy and performance in generating 3D hand motion data for a VR training application (i.e., the use of a fire extinguisher), as demonstrated in the experimental results.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"27 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141648547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Optimization of Numerical Algorithm Parameters with a Genetic Algorithm to Animate Letters of the Sign Alphabet","authors":"Sergio Hernandez-Mendez, C. Hernández-Mejía, D. Torres-Muñoz, Carolina Maldonado-Mendez","doi":"10.3390/mti8070061","DOIUrl":"https://doi.org/10.3390/mti8070061","url":null,"abstract":"At present, the development of animation-based works for human–computer interaction applications has increased. To generate animations, actions are pre-recorded and animation flows are configured. In this research, from two images of letters of the sign language alphabet, intermediate frames were generated using a numerical traced algorithm based on homotopy. The parameters of a homotopy curve were optimized with a genetic algorithm to generate intermediate frames. In the experiments performed, sequences where a person executes pairs of letters in sign language were recorded and animations of the same pairs of letters were generated with the proposed method. Subsequently, the similarity of the real sequences to the animations was measured using Dynamic Time Wrapping. The results obtained show that the images obtained are consistent with their execution by a person. Animation files between sign pairs were created from sign images, with each file weighing an average of 18.3 KB. By having sequences between pairs of letters it is possible to animate words and sentences. The animations generated by this homotopy-based animation method optimized with a genetic algorithm can be used in various deaf interaction applications to provide assistance. 
From several pairs of letters a file base was generated using the animations between pairs of letters; with these files you can create animations of words and sentences.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"5 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141662585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Virtual Reality Direct-Manipulation Tool for Posing and Animation of Digital Human Bodies: An Evaluation of Creativity Support","authors":"Youssef Benbelkheir, Alvaro Lerga, Oscar Ardaiz","doi":"10.3390/mti8070060","DOIUrl":"https://doi.org/10.3390/mti8070060","url":null,"abstract":"Creating body poses and animations is a critical task for digital content creators, movement artists, and sports professionals. Traditional desktop-based tools for generating 3D poses and animations often lack intuitiveness and are challenging to master. Virtual reality (VR) offers a solution through more intuitive direct-manipulation capabilities. We designed and implemented a VR tool that enables direct manipulation of virtual body parts with inverse kinematics. This tool allows users to pose and animate virtual bodies with one- or two-handed manipulations, while also moving, including bending, jumping, or walking. Our user study demonstrated that participants could produce creative poses and animations using this tool, which we evaluated for creativity support across six factors. Additionally, we discuss further opportunities to enhance creativity support.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"84 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141662755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping or no Mapping: The Influence of Controller Interaction Design in an Immersive Virtual Reality Tutorial in Two Different Age Groups","authors":"Andreas Urech, P. Meier, Stephan Gut, Pascal Duchene, Oliver Christ","doi":"10.3390/mti8070059","DOIUrl":"https://doi.org/10.3390/mti8070059","url":null,"abstract":"Navigation and interaction in the virtual world will occur via controllers. Previous literature suggests that not all buttons on these controllers are suitable for all functions and that unclear or uncomfortable operations can lead to frustration. This study aimed to determine if an operation with freely selectable buttons differs from one with mapped buttons, in which human-factor differences can be identified, and if there are differences between primary and advanced training. A field experiment with professionals employed by Swiss Federal Railways and apprentices and future apprentices (N = 60) was conducted in a VR tutorial that was previously developed using design cues from existing literature. Controller operation was varied in the groups. The results show significant differences and interaction effects, which indicate that a different operation is more appropriate for apprentices than for professionals in further education.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"122 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141665586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cultural Heritage as a Didactic Resource through Extended Reality: A Systematic Review of the Literature","authors":"Andrea Dordio, Eva Lancho, M. J. Merchán, P. Merchán","doi":"10.3390/mti8070058","DOIUrl":"https://doi.org/10.3390/mti8070058","url":null,"abstract":"Extended reality, which encompasses virtual, augmented, and mixed reality, emerges as an important didactic resource, offering great potential to bring heritage closer to the educational environment. Heritage education is a challenge for today’s society, contributing to cultural and social development. Therefore, the aim of this systematic review is to analyze documents related to heritage in education through extended reality. The search was conducted using Scopus and Web of Sciences databases, allowing for an international search. We excluded any document that was not a scientific article published in a journal. A total of 52 documents were analyzed, all of which were open-access articles. Preliminary results indicate that virtual reality and augmented reality are utilized in education to work with architectural, historical, archaeological, and musical heritage. Extended reality has been applied in all the stages of the education system, and in formal and non-formal settings, thereby demonstrating its versatility. These experiences increase motivation and academic performance in comparison to less innovative methodologies, which is in line with other studies on this topic. In conclusion, extended reality has the potential to make a significant impact on education, particularly when combined with heritage. 
The creation of multidisciplinary groups will enhance research in the near future.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":" 26","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141673482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LightSub: Unobtrusive Subtitles with Reduced Information and Decreased Eye Movement","authors":"Yuki Nishi, Yugo Nakamura, S. Fukushima, Yutaka Arakawa","doi":"10.3390/mti8060051","DOIUrl":"https://doi.org/10.3390/mti8060051","url":null,"abstract":"Subtitles play a crucial role in facilitating the understanding of visual content when watching films and television programs. In this study, we propose a method for presenting subtitles in a way that considers cognitive load when viewing video content in a non-native language. Subtitles are generally displayed at the bottom of the screen, which causes frequent eye focus switching between subtitles and video, increasing the cognitive load. In our proposed method, we focused on the position, display time, and amount of information contained in the subtitles to reduce the cognitive load and to avoid disturbing the viewer’s concentration. We conducted two experiments to investigate the effects of our proposed subtitle method on gaze distribution, comprehension, and cognitive load during English-language video viewing. Twelve non-native English-speaking subjects participated in the first experiment. The results show that participants’ gazes were more focused around the center of the screen when using our proposed subtitles compared to regular subtitles. Comprehension levels recorded using LightSub were similar, but slightly inferior to those recorded using regular subtitles. However, it was confirmed that most of the participants were viewing the video with a higher cognitive load using the proposed subtitle method. In the second experiment, we investigated subtitles considering connected speech form in English with 18 non-native English speakers. 
The results revealed that the proposed method, considering connected speech form, demonstrated an improvement in cognitive load during video viewing but it remained higher than that of regular subtitles.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"13 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141340749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OnMapGaze and GraphGazeD: A Gaze Dataset and a Graph-Based Metric for Modeling Visual Perception Differences in Cartographic Backgrounds Used in Online Map Services","authors":"Dimitrios Liaskos, Vassilios Krassanakis","doi":"10.3390/mti8060049","DOIUrl":"https://doi.org/10.3390/mti8060049","url":null,"abstract":"In the present study, a new eye-tracking dataset (OnMapGaze) and a graph-based metric (GraphGazeD) for modeling visual perception differences are introduced. The dataset includes both experimental and analyzed gaze data collected during the observation of different cartographic backgrounds used in five online map services, including Google Maps, Wikimedia, Bing Maps, ESRI, and OSM, at three different zoom levels (12z, 14z, and 16z). The computation of the new metric is based on the utilization of aggregated gaze behavior data. Our dataset aims to serve as an objective ground truth for feeding artificial intelligence (AI) algorithms and developing computational models for predicting visual behavior during map reading. Both the OnMapGaze dataset and the source code for computing the GraphGazeD metric are freely distributed to the scientific community.","PeriodicalId":508555,"journal":{"name":"Multimodal Technologies and Interaction","volume":"40 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141345376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}