Multimodal Technologies and Interaction: Latest Publications

Reviews of Social Embodiment for Design of Non-Player Characters in Virtual Reality-Based Social Skill Training for Autistic Children
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-09-04 DOI: 10.3390/MTI2030053
Jewoong Moon
{"title":"Reviews of Social Embodiment for Design of Non-Player Characters in Virtual Reality-Based Social Skill Training for Autistic Children","authors":"Jewoong Moon","doi":"10.3390/MTI2030053","DOIUrl":"https://doi.org/10.3390/MTI2030053","url":null,"abstract":"The purpose of this paper is to review the scholarly works regarding social embodiment aligned with the design of non-player characters in virtual reality (VR)-based social skill training for autistic children. VR-based social skill training for autistic children has been a naturalistic environment, which allows autistic children themselves to shape socially-appropriate behaviors in real world. To build up the training environment for autistic children, it is necessary to identify how to simulate social components in the training. In particular, designing non-player characters (NPCs) in the training is essential to determining the quality of the simulated social interactions during the training. Through this literature review, this study proposes multiple design themes that underline the nature of social embodiment in which interactions with NPCs in VR-based social skill training take place.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030053","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-09-03 DOI: 10.3390/MTI2030052
M. Cooney, M. Menezes
{"title":"Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot","authors":"M. Cooney, M. Menezes","doi":"10.3390/MTI2030052","DOIUrl":"https://doi.org/10.3390/MTI2030052","url":null,"abstract":"Social robots are being designed to help support people’s well-being in domestic and public environments. To address increasing incidences of psychological and emotional difficulties such as loneliness, and a shortage of human healthcare workers, we believe that robots will also play a useful role in engaging with people in therapy, on an emotional and creative level, e.g., in music, drama, playing, and art therapy. Here, we focus on the latter case, on an autonomous robot capable of painting with a person. A challenge is that the theoretical foundations are highly complex; we are only just beginning ourselves to understand emotions and creativity in human science, which have been described as highly important challenges in artificial intelligence. To gain insight, we review some of the literature on robots used for therapy and art, potential strategies for interacting, and mechanisms for expressing emotions and creativity. In doing so, we also suggest the usefulness of the responsive art approach as a starting point for art therapy robots, describe a perceived gap between our understanding of emotions in human science and what is currently typically being addressed in engineering studies, and identify some potential ethical pitfalls and solutions for avoiding them. Based on our arguments, we propose a design for an art therapy robot, also discussing a simplified prototype implementation, toward informing future work in the area.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030052","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
Animals Make Music: A Look at Non-Human Musical Expression
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-09-02 DOI: 10.3390/MTI2030051
Reinhard Gupfinger, Martin Kaltenbrunner
{"title":"Animals Make Music: A Look at Non-Human Musical Expression","authors":"Reinhard Gupfinger, Martin Kaltenbrunner","doi":"10.3390/MTI2030051","DOIUrl":"https://doi.org/10.3390/MTI2030051","url":null,"abstract":"The use of musical instruments and interfaces that involve animals in the interaction process is an emerging, yet not widespread practice. The projects that have been implemented in this unusual field are raising questions concerning ethical principles, animal-centered design processes, and the possible benefits and risks for the animals involved. Animal–Computer Interaction is a novel field of research that offers a framework (ACI manifesto) for implementing interactive technology for animals. Based on this framework, we have examined several projects focusing on the interplay between animals and music technology in order to arrive at a better understanding of animal-based musical projects. Building on this, we will discuss how the implementation of new musical instruments and interfaces could provide new opportunities for improving the quality of life for grey parrots living in captivity.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Deep Learning and Medical Diagnosis: A Review of Literature
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-08-17 DOI: 10.3390/MTI2030047
Mihalj Bakator, D. Radosav
{"title":"Deep Learning and Medical Diagnosis: A Review of Literature","authors":"Mihalj Bakator, D. Radosav","doi":"10.3390/MTI2030047","DOIUrl":"https://doi.org/10.3390/MTI2030047","url":null,"abstract":"In this review the application of deep learning for medical diagnosis is addressed. A thorough analysis of various scientific articles in the domain of deep neural networks application in the medical field has been conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were presented in more detail. The results indicate that convolutional neural networks (CNN) are the most widely represented when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this article, it can be noted that the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis and other similar fields.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030047","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 278
Technology for Remote Health Monitoring in an Older Population: A Role for Mobile Devices
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-07-27 DOI: 10.3390/MTI2030043
Kate Dupuis, L. Tsotsos
{"title":"Technology for Remote Health Monitoring in an Older Population: A Role for Mobile Devices","authors":"Kate Dupuis, L. Tsotsos","doi":"10.3390/MTI2030043","DOIUrl":"https://doi.org/10.3390/MTI2030043","url":null,"abstract":"The impact of an aging population on healthcare and the sustainability of our healthcare system are pressing issues in contemporary society. Technology has the potential to address these challenges, alleviating pressures on the healthcare system and empowering individuals to have greater control over monitoring their own health. Importantly, mobile devices such as smartphones and tablets can allow older adults to have “on the go” access to health-related information. This paper explores mobile health apps that enable older adults and those who care for them to track health-related factors such as body readings and medication adherence, and it serves as a review of the literature on the usability and acceptance of mobile health apps in an older population.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-07-09 DOI: 10.3390/MTI2030041
S. Price, S. Duffy
{"title":"Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design","authors":"S. Price, S. Duffy","doi":"10.3390/MTI2030041","DOIUrl":"https://doi.org/10.3390/MTI2030041","url":null,"abstract":"An increasing body of work provides evidence of the importance of bodily experience for cognition and the learning of mathematics. Sensor-based technologies have potential for guiding sensori-motor engagement with challenging mathematical ideas in new ways. Yet, designing environments that promote an appropriate sensori-motoric interaction that effectively supports salient foundations of mathematical concepts is challenging and requires understanding of opportunities and challenges that bodily interaction offers. This study aimed to better understand how young children can, and do, use their bodies to explore geometrical concepts of angle and shape, and what contribution the different sensori-motor experiences make to the comprehension of mathematical ideas. Twenty-nine students aged 6–10 years participated in an exploratory study, with paired and group activities designed to elicit intuitive bodily enactment of angles and shape. Our analysis, focusing on moment-by-moment bodily interactions, attended to gesture, action, facial expression, body posture and talk, illustrated the ‘realms of possibilities’ of bodily interaction, and highlighted challenges around ‘felt’ experience and egocentric vs. allocentric perception of the body during collaborative bodily enactment. These findings inform digital designs for sensory interaction to foreground salient geometric features and effectively support relevant forms of enactment to enhance the learning experience, supporting challenging aspects of interaction and exploiting the opportunities of the body.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030041","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-07-03 DOI: 10.3390/MTI2030040
H. Kobayashi, Keijiro Nakagawa, K. Makiyama, Yuta Sasaki, Hiromi Kudo, Baburam Niraula, K. Sezaki
{"title":"Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone","authors":"H. Kobayashi, Keijiro Nakagawa, K. Makiyama, Yuta Sasaki, Hiromi Kudo, Baburam Niraula, K. Sezaki","doi":"10.3390/MTI2030040","DOIUrl":"https://doi.org/10.3390/MTI2030040","url":null,"abstract":"We propose an animal-to-animal data sharing mechanism that employs wildlife-borne sensing devices to expand the size of monitoring areas in which electricity, information, and road infrastructures are either limited or nonexistent. With the proposed approach, monitoring information can be collected from remote areas in a safe and cost-effective manner. To substantially prolong the life of a sensor node, the proposed mechanism activates the communication capabilities only when there is a plurality of animals; otherwise, the sensor node remains in a sleep state. This study aimed to achieve three objectives. First, we intend to obtain knowledge based on the actual field operations within the Fukushima exclusion zone. Second, we attempt to realize an objective evaluation of the power supply and work base that is required to properly evaluate the proposed mechanism. Third, we intend to acquire data to support wildlife research, which is the objective of both our present (and future) research.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
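To make the duty-cycling idea in the abstract above concrete, the following Python sketch simulates a node that listens briefly for beacons from other animal-borne collars and enables its data-sharing radio only when enough peers are nearby. This is a minimal illustration under assumed names and parameters (scan_for_peer_beacons, PEER_THRESHOLD, the timing constants), not the authors' firmware.

```python
import random
import time

# Minimal sketch (assumed names and parameters, not the authors' firmware) of the
# plurality-triggered duty cycle described above: the node scans briefly for peer
# beacons and enables data sharing only when enough collared animals are nearby.

PEER_THRESHOLD = 2     # assumed: wake only when at least this many peers are heard
SLEEP_INTERVAL_S = 1   # assumed: shortened here so the demo finishes quickly


def scan_for_peer_beacons() -> int:
    """Simulated scan; real hardware would count radio beacons from other collars."""
    return random.randint(0, 3)


def exchange_data(peers: int) -> None:
    """Simulated exchange; real hardware would sync stored monitoring data."""
    print(f"Radio on: sharing data with {peers} nearby collars")


def duty_cycle(cycles: int = 5) -> None:
    """Keep the radio off unless a plurality of animals is present."""
    for _ in range(cycles):
        peers = scan_for_peer_beacons()
        if peers >= PEER_THRESHOLD:
            exchange_data(peers)
        else:
            print("No plurality of animals nearby: staying in low-power sleep")
        time.sleep(SLEEP_INTERVAL_S)


if __name__ == "__main__":
    duty_cycle()
```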
Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-07-02 DOI: 10.3390/MTI2030039
Jina Kang, Robb Lindgren, James Planey
{"title":"Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation","authors":"Jina Kang, Robb Lindgren, James Planey","doi":"10.3390/MTI2030039","DOIUrl":"https://doi.org/10.3390/MTI2030039","url":null,"abstract":"Theories of embodied cognition argue that human processes of thinking and reasoning are deeply connected with the actions and perceptions of the body. Recent research suggests that these theories can be successfully applied to the design of learning environments, and new technologies enable multimodal platforms that respond to students’ natural physical activity such as their gestures. This study examines how students engaged with an embodied mixed-reality science learning simulation using advanced gesture recognition techniques to support full-body interaction. The simulation environment acts as a communication platform for students to articulate their understanding of non-linear growth within different science contexts. In particular, this study investigates the different multimodal interaction metrics that were generated as students attempted to make sense of cross-cutting science concepts through using a personalized gesture scheme. Starting with video recordings of students’ full-body gestures, we examined the relationship between these embodied expressions and their subsequent success reasoning about non-linear growth. We report the patterns that we identified, and explicate our findings by detailing a few insightful cases of student interactions. Implications for the design of multimodal interaction technologies and the metrics that were used to investigate different types of students’ interactions while learning are discussed.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
A Predictive Fingerstroke-Level Model for Smartwatch Interaction
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-07-02 DOI: 10.3390/MTI2030038
Shiroq Al-Megren
{"title":"A Predictive Fingerstroke-Level Model for Smartwatch Interaction","authors":"Shiroq Al-Megren","doi":"10.3390/MTI2030038","DOIUrl":"https://doi.org/10.3390/MTI2030038","url":null,"abstract":"The keystroke-level model (KLM) is commonly used to predict the time it will take an expert user to accomplish a task without errors when using an interactive system. The KLM was initially intended to predict interactions in conventional set-ups, i.e., mouse and keyboard interactions. However, it has since been adapted to predict interactions with smartphones, in-vehicle information systems, and natural user interfaces. The simplicity of the KLM and its extensions, along with their resource- and time-saving capabilities, has driven their adoption. In recent years, the popularity of smartwatches has grown, introducing new design challenges due to the small touch screens and bimanual interactions involved, which make current extensions to the KLM unsuitable for modelling smartwatches. Therefore, it is necessary to study these interfaces and interactions. This paper reports on three studies performed to modify the original KLM and its extensions for smartwatch interaction. First, an observational study was conducted to characterise smartwatch interactions. Second, the unit times for the observed interactions were derived through another study, in which the times required to perform the relevant physical actions were measured. Finally, a third study was carried out to validate the model for interactions with the Apple Watch and Samsung Gear S3. The results show that the new model can accurately predict the performance of smartwatch users with a percentage error of 12.07%; a value that falls below the acceptable percentage dictated by the original KLM ~21%.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2030038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69755578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
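As a rough illustration of how a KLM/FLM prediction of this kind is computed, the Python sketch below sums hypothetical unit times for a sequence of smartwatch operators and reports the percentage error against an observed time. The operator names and unit times are placeholder assumptions (only the 1.35 s mental-preparation operator follows the original KLM), not the values derived in the paper.

```python
# Illustrative sketch of a fingerstroke-level (KLM-style) prediction: a task's
# execution time is the sum of unit times for its operator sequence, and model
# accuracy is reported as the percentage error against an observed time.
# Operator names and unit times are hypothetical placeholders, not the values
# derived in the paper; only the 1.35 s mental operator follows the original KLM.

UNIT_TIMES_S = {
    "mental": 1.35,       # mental preparation (original KLM value)
    "tap": 0.30,          # assumed: tap a target on the watch face
    "swipe": 0.40,        # assumed: swipe between screens
    "turn_wrist": 0.50,   # assumed: raise or rotate the wrist to view the screen
}


def predict_task_time(operators: list[str]) -> float:
    """Sum the unit times for a task's operator sequence."""
    return sum(UNIT_TIMES_S[op] for op in operators)


def percentage_error(predicted: float, observed: float) -> float:
    """Percentage error of the prediction relative to the observed time."""
    return abs(predicted - observed) / observed * 100.0


if __name__ == "__main__":
    task = ["mental", "turn_wrist", "tap", "swipe", "tap"]  # hypothetical task
    predicted = predict_task_time(task)
    observed = 3.10  # hypothetical measured time in seconds
    print(f"Predicted {predicted:.2f} s vs. observed {observed:.2f} s "
          f"({percentage_error(predicted, observed):.1f}% error)")
```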
Documenting the Elusive and Ephemeral in Embodied Design Ideation Activities
IF 2.5
Multimodal Technologies and Interaction Pub Date : 2018-06-24 DOI: 10.3390/mti2030035
Laia Turmo Vidal, Elena Márquez Segura
{"title":"Documenting the Elusive and Ephemeral in Embodied Design Ideation Activities","authors":"Laia Turmo Vidal, Elena Márquez Segura","doi":"10.3390/mti2030035","DOIUrl":"https://doi.org/10.3390/mti2030035","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85032392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7