{"title":"Effects of AI-assisted dance skills teaching, evaluation and visual feedback on dance students' learning performance, motivation and self-efficacy","authors":"Liu-Jie Xu , Jing Wu , Jing-Dong Zhu , Ling Chen","doi":"10.1016/j.ijhcs.2024.103410","DOIUrl":"10.1016/j.ijhcs.2024.103410","url":null,"abstract":"<div><div>Despite the importance of artificial intelligence in education, more empirical research is needed to corroborate its effectiveness in this field. In this study, a dance skills teaching, evaluation, and visual feedback (DSTEVF) system was developed based on AI technology and applied in a dance classroom. Forty dance students from a vocational school were randomly divided into two groups: DSTEVF-based learning (experimental group, n = 19) and traditional teaching (control group, n = 21). The DSTEVF-based learning approach significantly improved students’ dance skills and self-efficacy. However, there was no significant effect on students’ motivation. Students with higher levels of motivation and self-efficacy benefitted more from DSTEVF-based learning than those with lower levels. Evidently, it is possible to establish a smart classroom by applying DSTEVF to the teaching activities of dance education, physical education, and other disciplines.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103410"},"PeriodicalIF":5.3,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding preference: A meta-analysis of user studies","authors":"Morten Hertzum","doi":"10.1016/j.ijhcs.2024.103408","DOIUrl":"10.1016/j.ijhcs.2024.103408","url":null,"abstract":"<div><div>A user's preference for one system over another is probably the most basic user experience (UX) measure, yet user studies often focus on performance and treat preference as supplementary. This meta-analysis of 144 studies shows that while users in general prefer systems with which they achieve lower task time and error rate, they more consistently and more strongly prefer systems that impose lower workload. In only 2% of the studies does a preferred system impose significantly higher workload than a nonpreferred system. Across the studies, a stronger preference coincides with a larger difference in workload, task time, and error rate. This correlation is strongest for workload, lower for task time, and lowest for error rate. That is, workload is a stronger predictor of preference than performance is, even for the near exclusively utilitarian tasks covered by this meta-analysis. The implications of these findings include that workload should be more fully integrated in research on usability, UX, and design and that it is risky for practitioners to infer preference from performance, or vice versa.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103408"},"PeriodicalIF":5.3,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microgesture + Grasp: A journey from human capabilities to interaction with microgestures","authors":"Adrien Chaffangeon Caillet, Alix Goguey, Laurence Nigay","doi":"10.1016/j.ijhcs.2024.103398","DOIUrl":"10.1016/j.ijhcs.2024.103398","url":null,"abstract":"<div><div>Microgestures, <em>i.e.</em> fast and subtle finger movements, have shown a high potential for ubiquitous interaction. However, work to-date either focuses on grasp contexts (holding an object) or on the free-hand context (no held object). These two contexts influence the feasibility of microgestures. Researchers have created sets of microgestures feasible across the entire taxonomy of everyday grasps, called transferable microgestures. However, those sets include a limited number of microgestures as compared to those for the free-hand context, for which microgestures are distinguished according to fine characteristics such as the part of the finger being touched or the number of fingers used. We provide knowledge and methods for identifying and recognizing microgestures that can transfer across contexts. First, we report a study on ergonomics factors that influence the feasibility of a microgesture in a given context. Then, we propose a conceptual model serving as a tool to determine the feasibility of a microgesture in a given context without the need for time-consuming user studies. As expected, not all microgestures were transferable to all considered contexts. Thus, we then present two different ways of defining a set of microgestures transferable between free-hand and grasping contexts. Finally, we report a user study on recognition factors of a transferable microgesture set.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103398"},"PeriodicalIF":5.3,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142697394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Novice-friendly probes for the gathering and analysis of requirements and subsequent design of software","authors":"Francisco Lepe-Salazar , Lizbeth Escobedo , Tatsuo Nakajima","doi":"10.1016/j.ijhcs.2024.103405","DOIUrl":"10.1016/j.ijhcs.2024.103405","url":null,"abstract":"<div><div>Developing software, whether in its entirety or only specific elements and components, poses a significant challenge. One of the main obstacles one may face during this process is conducting a thorough survey and subsequent analysis of requirements. This difficulty arises from stakeholders struggling to accurately articulate their needs, desires, and expectations. To address this issue, qualitative strategies such as interviews, surveys, work tables, brainstorming, observation, and user stories are commonly employed. However, mastering and utilising them effectively often takes years of experience. To simplify this process for novices (e.g., students, beginners, enthusiasts) in the field of computer science and related areas, inspired by participatory design guidelines, we devised a series of design probes that we call <em>Mirrors</em>. To explore their feasibility, we conducted two different interventions with students. In this document, we present these tools along with a methodology for their application. Additionally, we show the results obtained through their implementation. Lastly, we discuss their benefits and limitations, as well as our future work to consolidate their effectiveness.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103405"},"PeriodicalIF":5.3,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142720705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gender and accent stereotypes in communication with an intelligent virtual assistant","authors":"Cameron W. Piercy , Gretchen Montgomery-Vestecka , Sun Kyong Lee","doi":"10.1016/j.ijhcs.2024.103407","DOIUrl":"10.1016/j.ijhcs.2024.103407","url":null,"abstract":"<div><div>People are using intelligent virtual assistants (IVAs) more than ever before. Today's IVAs can be customized with unique voices including both gender and accent cues. Following evidence that people treat others differently based on their gender and accent, we ask: How do gender and accent of Siri, an IVA, affect users' trust? Students from two institutions (<em>N</em> = 270) participated in a two (Siri's voice gender: male or female) by two (Siri's voice accent: American or Indian) by two (task type: social or functional) fully crossed experiment, including a supplemental quasi-experimental condition for gender match between participants’ and Siri's voice. Results show little effect for gender or accent alone, but the functional tasks condition received higher ratings in reliability, understandability, and faith dimensions of trust. Interactions reveal nuanced effects regarding gender match and varying across accent types. Implications for human-machine communication, in particular differences between human-human and human-machine interaction scripts, are presented.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103407"},"PeriodicalIF":5.3,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Limits of speech in connected homes: Experimental comparison of self-reporting tools for human activity recognition","authors":"Guillaume Levasseur , Kejia Tang , Hugues Bersini","doi":"10.1016/j.ijhcs.2024.103404","DOIUrl":"10.1016/j.ijhcs.2024.103404","url":null,"abstract":"<div><div>Data annotation for human activity recognition is a well-known challenge for researchers. In particular, annotation in daily life settings relies on self-reporting tools with unknown accuracy. Speech is a promising interface for activity labeling. In this work, we compare the accuracy of two commercially available tools for annotation: voice diaries and connected buttons. We retrofit the water meters of thirty homes in the USA for infrastructure-mediated sensing. Participants are split into equal groups and receive one of the self-reporting tools. The balanced accuracy metric is transferred from the field of machine learning to the evaluation of the annotation performance. Our results show that connected buttons perform significantly better than the voice diary, with 92% median accuracy and 65% median reporting rate. Using questionnaire answers, we highlight that annotation performance is impacted by habit formation and sentiments toward the annotation tool. The use case for data annotation is to disaggregate water meter data into human activities beyond the point of use. We show that it is feasible with a machine-learning model and the corrected annotations. Finally, we formulate recommendations for the design of studies and intelligent environments around the key ideas of proportionality and immediacy.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103404"},"PeriodicalIF":5.3,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142720704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants","authors":"Yeliz Yücel, Kerem Rızvanoğlu","doi":"10.1016/j.ijhcs.2024.103403","DOIUrl":"10.1016/j.ijhcs.2024.103403","url":null,"abstract":"<div><div>People with visual impairments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums. Museums, as important centers for exploration and learning, often overlook these accessibility issues. This abstract presents the iMuse Model, an innovative approach to creating accessible and inclusive museum environments for PWVI. The iMuse Model centers around the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey. This model consists of a two-layer study. The first layer involves collaboration with PWVI and their sight loss instructors to develop a five-level framework tailored to their unique needs and challenges. The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique. Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development. Notably, we have developed two versions of the voice assistant: one with a humorous interaction and one with a non-humorous approach. Users expressed a preference for the humorous version, leading to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results. In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants. Our model not only aids PWVI in navigating unfamiliar spaces but also enhances their social learning, engagement, and appreciation of cultural heritage within museum environments.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103403"},"PeriodicalIF":5.3,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empathy enhancement through VR: A practice-led design study","authors":"Xina Jiang , Wen Zhou , Jicheng Sun , Shihong Chen , Anthony Fung","doi":"10.1016/j.ijhcs.2024.103397","DOIUrl":"10.1016/j.ijhcs.2024.103397","url":null,"abstract":"<div><div>Virtual reality (VR) has been widely acknowledged as a highly effective medium for augmenting empathy, enabling individuals to better comprehend and resonate with the emotions and lived experiences of others. Despite its acknowledged potential, the field lacks clear design guidelines and a systematic framework for creating VR environments for empathy training. In this article, we present a practice-led research project in which we triangulated design research using a paired sample <em>t</em>-test to evaluate and optimize the design guidelines of the empathy-training VR design (EVRD) framework. We evaluated the impact of a VR experience, designed based on the EVRD framework, on emotional, cognitive, and behavioral empathy among Chinese higher education students (n=84). A comprehensive assessment approach, including the Interpersonal Reactivity Index, interviews, system log analysis, and monitoring of donation activities, was utilized to measure changes in empathy before and after the VR intervention. The results validated the EVRD framework and demonstrated that it is a practical and systematic tool for designing VR experiences that train empathy. The findings of this study provide design insights with regard to (1) the process of VR empathy and (2) how to design “doomed-to-fail” interactions to promote cognitive empathy in VR.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103397"},"PeriodicalIF":5.3,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142702139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptions of discriminatory decisions of artificial intelligence: Unpacking the role of individual characteristics","authors":"Soojong Kim","doi":"10.1016/j.ijhcs.2024.103387","DOIUrl":"10.1016/j.ijhcs.2024.103387","url":null,"abstract":"<div><div>This study investigates how personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) and demographic factors (age, education, and income) are associated with perceptions of artificial intelligence (AI) outcomes exhibiting gender and racial bias and with general attitudes toward AI. Analyses of a large-scale experiment dataset (<em>N</em> = 1,206) indicate that digital self-efficacy and technical knowledge are positively associated with attitudes toward AI, while liberal ideologies are associated with lower outcome trust, higher negative emotion, and greater skepticism. Furthermore, age and income are closely connected to cognitive gaps in understanding discriminatory AI outcomes. These findings highlight the importance of promoting digital literacy skills and enhancing digital self-efficacy to maintain trust in AI and beliefs in AI usefulness and safety. The findings also suggest that the disparities in understanding problematic AI outcomes may be aligned with economic inequalities and generational gaps in society. Overall, this study sheds light on the socio-technological system in which complex interactions occur between social hierarchies, divisions, and machines that reflect and exacerbate the disparities.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103387"},"PeriodicalIF":5.3,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142702952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating augmented reality and LLM for enhanced cognitive support in critical audio communications","authors":"Fang Xu , Tianyu Zhou , Tri Nguyen , Haohui Bao , Christine Lin , Jing Du","doi":"10.1016/j.ijhcs.2024.103402","DOIUrl":"10.1016/j.ijhcs.2024.103402","url":null,"abstract":"<div><div>Operation and Maintenance (O&M) missions are often time-sensitive and accuracy-dependent, requiring rapid and precise information processing in noisy, chaotic environments where oral communication can lead to cognitive overload and impaired decision-making. Augmented Reality (AR) and Large Language Models (LLMs) offer potential for enhancing situational awareness and lowering cognitive load by integrating digital visualizations with the physical world and improving dialogue management. However, synthesizing these technologies into a real-time system that effectively aids operators remains a challenge. This study explores the integration of AR and GPT-4, an advanced LLM, in time-sensitive O&M tasks, aiming to enhance situational awareness and manage cognitive load during oral communications. A customized AR system, incorporating the Microsoft HoloLens2 for cognitive monitoring and GPT-4 for decision-making assistance, was tested in a human subject experiment with 30 participants. The 2×2 factorial experiment evaluated the effects of AR and LLM assistance on task performance and cognitive load. Results demonstrated significant improvements in task accuracy and reductions in cognitive load, highlighting the effectiveness of AR and LLM integration in supporting O&M missions. These findings emphasize the need for further research to optimize operational strategies in mission critical environments.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103402"},"PeriodicalIF":5.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}