Title: Young and old persons' subjective feelings when facing with a non-human computer-graphics-based agent's emotional responses in consideration of differences in emotion perception
Authors: Takashi Numata, Yasuhiro Asa, T. Hashimoto, K. Karasawa
Journal: Frontiers in Computer Science
DOI: https://doi.org/10.3389/fcomp.2024.1321977
Published: 2024-01-31

Abstract: Virtual agents (computer-graphics-based agents) have been developed for many purposes, such as supporting the social life, mental care, education, and entertainment of both young and old individuals. Promoting affective communication between young/old users and agents requires clarifying the subjective feelings induced by an agent's expressions. However, an emotional response model for agents that induces positive feelings has not been fully established, owing to differences in emotion perception between young and old adults. We investigated the subjective feelings induced when facing a non-human computer-graphics-based agent's emotional responses, taking into account differences in emotion perception between young and old adults. To emphasize these differences, the agent's expressions were developed by exaggerating human expressions. The differences between young and old participants in their perception of happiness, sadness, and anger were then identified in a preliminary experiment. Taking these differences into account, the feelings induced by the agent's expressions were analyzed for three types of emotion source (the participant, the agent, and another party), defined as the subject of and the party responsible for the induced emotion. The subjective feelings were evaluated in a subjective rating task with 139 young and 211 old participants. The agent response that most induced positive feelings was a happy expression when participants felt happy and a sad expression when participants felt sad, regardless of the emotion source, in both the young and old groups. When participants felt angry, the response that most induced positive feelings was a sad expression when the emotion source was the participant or the agent, and an angry expression when the emotion source was another party. The emotion types of the responses that induced the most positive feelings were the same for young and old participants, and the way to induce the most positive feelings was not always to mimic the participant's emotional expression, which is a typical tendency of human responses. These findings suggest that a common agent response model for young and old people can be developed by combining an emotional mimicry model with a response model that induces positive feelings in users, promoting natural and affective communication while taking age-related characteristics of emotion perception into account.
Title: Facial emotion recognition through artificial intelligence
Authors: Jesús A. Ballesteros, G. Ramírez V., Fernando Moreira, Andrés Solano, C. Peláez
Journal: Frontiers in Computer Science
DOI: https://doi.org/10.3389/fcomp.2024.1359471
Published: 2024-01-31

Abstract: This paper presents a study employing artificial intelligence (AI) and computer vision algorithms to detect human emotions in video recorded while users interact with diverse visual stimuli. The research describes the development of software capable of emotion detection by combining AI algorithms with image-processing pipelines that identify users' facial expressions. The process involves assessing users through images and implementing computer vision algorithms aligned with psychological theories that define emotions and their recognizable features. The study demonstrates the feasibility of emotion recognition through convolutional neural networks (CNNs) trained on facial expressions. The results show successful emotion identification; however, improving precision requires further training on contexts with more diverse images, together with additional algorithms that can distinguish facial expressions corresponding to closely related emotional patterns. The discussion and conclusions emphasize the potential of AI and computer vision algorithms for emotion detection and provide insights into software development, ongoing training, and the evolving landscape of emotion recognition technology.
{"title":"Development of embodied listening studies with multimodal and wearable haptic interfaces for hearing accessibility in music","authors":"Doga Cavdir","doi":"10.3389/fcomp.2023.1162758","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1162758","url":null,"abstract":"The intersection of hearing accessibility and music research offers limited representations of the Deaf and Hard of Hearing (DHH) individuals, specifically as artists. This article presents inclusive design practices for hearing accessibility through wearable and multimodal haptic interfaces with participants with diverse hearing backgrounds.We develop a movement-based sound design practice and audio-tactile compositional vocabulary, co-created with a Deaf co-designer, to offer a more inclusive and embodied listening experience. This listening experience is evaluated with a focus group whose participants have background in music, dance, design, or accessibility in arts. By involving multiple stakeholders, we survey the participants' qualitative experiences in relation to Deaf co-designer's experience.Results show that multimodal haptic feedback enhanced the participants' listening experience while on-skin vibrations provided more nuanced understanding of the music for Deaf participants. Hearing participants reported interest in understanding the Deaf individuals' musical experience, preferences, and compositions.We conclude by presenting design practices when working with movement-based musical interaction and multimodal haptics. We lastly discuss the challenges and limitations of access barrier in hearing accessibility and music.","PeriodicalId":510141,"journal":{"name":"Frontiers in Computer Science","volume":"78 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139440670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causality and tractable probabilistic models","authors":"David Cruz, Jorge Batista","doi":"10.3389/fcomp.2023.1263386","DOIUrl":"https://doi.org/10.3389/fcomp.2023.1263386","url":null,"abstract":"Causal assertions stem from an asymmetric relation between some variable's causes and effects, i.e., they imply the existence of a function decomposition of a model where the effects are a function of the causes without implying that the causes are functions of the effects. In structural causal models, information is encoded in the compositions of functions that define variables because that information is used to constraint how an intervention that changes the definition of a variable influences the rest of the variables. Current probabilistic models with tractable marginalization also imply a function decomposition but with the purpose of allowing easy marginalization of variables. In this article, structural causal models are extended so that the information implicitly stored in their structure is made explicit in an input–output mapping in higher dimensional representation where we get to define the cause–effect relationships as constraints over a function space. Using the cause–effect relationships as constraints over a space of functions, the existing methodologies for handling causality with tractable probabilistic models are unified under a single framework and generalized.","PeriodicalId":510141,"journal":{"name":"Frontiers in Computer Science","volume":"42 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139447526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A scoping review of auto-generating transformation between software development artifacts
Authors: Daniel Siahaan, Reza Fauzan, Arya Widyadhana, Dony Bahtera Firmawan, Rahmi Rizkiana Putri, Y. Desnelita, Gustientiedina, Ramalia Noratama Putrian
Journal: Frontiers in Computer Science
DOI: https://doi.org/10.3389/fcomp.2023.1306064
Published: 2024-01-08

Abstract: Every process within software development refers to a specific set of input and output artifacts. Each artifact models specific design information about a system, yet together the artifacts complement each other and form a fuller description of the system. The requirements phase is an early stage of software development that drives the rest of the development process. Throughout the software development life cycle, it is necessary to check that every artifact produced at every development stage complies with the given requirements. Moreover, elements within artifacts from different development stages should be related to one another. This study provides an overview of the conformity between artifacts and the possibility of artifact transformation. It also describes the methods and tools used in previous studies for ensuring the conformity of artifacts with requirements during transformation between artifacts, along with their real-world applications. The review identified three applications, seven methods and approaches, and five challenges in ensuring the conformity of artifacts with requirements. The artifacts identified are class diagrams, aspect-oriented software architectures, architectural models, entity-relationship diagrams, and sequence diagrams. The applications for ensuring the conformity of artifacts with requirements are maintaining traceability, software verification and validation, and software reuse. The methods include information retrieval, natural language processing, model transformation, text mining, graph-based and ontology-based approaches, and optimization algorithms. The benefits of adopting methods and tools for ensuring the conformity of artifacts with requirements can motivate and assist practitioners in designing and creating artifacts.
Title: BeeLife: a mobile application to foster environmental awareness in classroom settings
Authors: Adrian Stock, Oliver Stock, Julia Mönch, Markus Suren, Nadine Nicole Koch, Günter Daniel Rey, M. Wirzberger
Journal: Frontiers in Computer Science
DOI: https://doi.org/10.3389/fcomp.2023.1298888
Published: 2024-01-03

Abstract: Significant threats to our environment severely affect biodiversity and the benefits it provides. Wild bees in particular contribute actively by pollinating plants and trees, and their increasing extinction has devastating consequences for nutrition and the stability of our ecosystem. However, most people lack awareness of these species and their living conditions, which prevents them from taking responsibility.

We introduce an intervention consisting of a mobile app and related project workshops that foster responsibility at an early stage in life. Drawing on principles from multimedia learning and child-centered design, six gamified levels and accompanying nature-based activities sensitize children to the importance of wild bees and their role in a stable and diverse ecosystem. A pilot evaluation across three schools, involving 44 children aged between 9 and 12, included a pre-, post-, and delayed post-test to inspect app usability and learning gains.

Most children perceived the app as intuitive, engaging, and visually appealing, and benefited lastingly from the intervention in terms of retention performance. Teacher interviews following the intervention support the fit with the envisioned target group and the classroom setting.

Taken together, the obtained evidence emphasizes the benefits of our intervention, even though the sample size was limited due to dropouts. Future extensions might include adaptive instructional design elements to increase observable learning gains.
Title: Development and validation of an art-inspired multimodal interactive technology system for a multi-component intervention for older people: a pilot study
Authors: Antonio Camurri, Emanuele Seminerio, Wanda Morganti, C. Canepa, Nicola Ferrari, Simone Ghisio, Andrea Cera, P. Coletta, Marina Barbagelata, Gianluca Puleo, Ilaria Nolasco, Claudio Costantini, B. Senesi, Alberto Pilotto
Journal: Frontiers in Computer Science
DOI: https://doi.org/10.3389/fcomp.2023.1290589
Published: 2024-01-03

Abstract: The World Health Organization (WHO) acknowledges a significant body of research on the positive effects of the arts on health, covering factors such as physical well-being, quality of life, and social and community impact. The model underlying cultural welfare puts the performing arts, visual arts, and cultural heritage at the service of people's personal and societal well-being. The potential connections between movements of the body and artistic content have been studied extensively over time, with movement considered a non-verbal language of universal character.

This pilot study presents the results of validating an innovative multimodal system, the DanzArTe-Emotional Wellbeing Technology, designed to support an active and participative experience for older people, providing physical and cognitive activation through full-body interaction with a traditional visual work of art on a religious subject. DanzArTe supports a replicable treatment protocol for multidimensional frailty, administered through a low-cost and scalable technological platform capable of generating real-time visual and auditory feedback (interactive sonification) from the automated analysis of individual as well as joint movement expressive qualities. The study involved 45 participants, 23 of whom took part in the DanzArTe program and 22 of whom formed the control group.

The two groups were similar in terms of age (p = 0.465) and gender (p = 0.683). The results showed that the DanzArTe program had a positive impact on participants' self-perceived psychological health and well-being (mean Psychological General Well-Being Index-Short: T1 = 19.6 ± 4.3 vs. T2 = 20.8 ± 4.9; p = 0.029). The same trend was not observed in the control group (p = 0.389).

The findings suggest that such programs may have a significant impact, particularly on the mental and social well-being of older adults, and could be a valuable tool for promoting healthy aging and improving quality of life.