{"title":"Efficient VR-AR communication method using virtual replicas in XR remote collaboration","authors":"Eunhee Chang , Yongjae Lee , Mark Billinghurst , Byounghyun Yoo","doi":"10.1016/j.ijhcs.2024.103304","DOIUrl":"10.1016/j.ijhcs.2024.103304","url":null,"abstract":"<div><p>When using Virtual Reality (VR) and Augmented Reality (AR) to support remote collaboration, effective communication between a remote expert in VR and a local worker in AR is important for guiding and following task instructions. This is especially crucial for assembly tasks, which require precise identification of parts and clear directions for their combination. Despite the increasing interest in efficient VR-AR communication methods, previous studies have been limited to complex hardware setups and simplified assembly tasks. In this research, we introduce a communication approach for remote collaboration in complex assembly tasks, utilizing simplified hardware configurations. We conducted a user study (<span><math><mrow><mi>n</mi><mo>=</mo><mn>30</mn></mrow></math></span>) and compared three interaction interfaces (hand gestures, 3D drawing, and virtual replicas) in task completion time, subjective questionnaires, and preference rank. The results showed that the use of virtual replicas not only enhances task efficiency but also receives strong preference by users. These findings indicate that virtual replicas can provide intuitive instructions to local workers, resulting in a clearer understanding of the expert’s guidance.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924000880/pdfft?md5=41b9a30018bd997a2842345c45578261&pid=1-s2.0-S1071581924000880-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141405586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncovering the theoretical basis of user types: An empirical analysis and critical discussion of user typologies in research on tailored gameful design","authors":"Jeanine Kirchner-Krath , Maximilian Altmeyer , Linda Schürmann , Bastian Kordyaka , Benedikt Morschheuser , Ana Carolina Tomé Klock , Lennart Nacke , Juho Hamari , Harald F.O. von Korflesch","doi":"10.1016/j.ijhcs.2024.103314","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103314","url":null,"abstract":"<div><p>Gamification has become one of the main areas in information systems and human–computer interaction research related to users’ motivations and behaviors. Within this context, a significant research gap is the lack of understanding of how users’ characteristics, especially in terms of their preferences for gameful interaction (i.e., user typologies), moderate the effects of gamification and, furthermore, how gamification could be tailored to individual needs. Despite their prominence in classifying users, current typologies and their use in research and practice have received severe criticism regarding validity and reliability, as well as the application and interpretation of their results. Therefore, it is essential to reconsider the relationships and foundations of common user typologies and establish a sound empirical basis to critically discuss their value and limits for personalized gamification. To address this research gap, this study investigated the psychometric properties of the most popular player types within tailored gamification literature (i.e., Bartle’s player types, Yee’s motivations to play, BrainHex, and HEXAD) through a survey study (<span><math><mrow><mi>n</mi><mo>=</mo><mn>877</mn></mrow></math></span>) using their respective measurement instruments, followed by a correlation analysis to understand their empirical relations and an exploratory factor analysis to identify the underlying factors. The results confirm that user typologies, despite their different origins, show considerable overlap, some being consistent whereas others contradicted theoretically assumed relationships. Furthermore, we show that these four user typologies overall factor into five underlying and fundamental dimensions of <em>Socialization, Escapism, Achievement, Reward Pursuit, and Independence</em>, which could be considered common concepts that may essentially reflect key determinants of user motivation in gamification. Our findings imply that future research and practice in tailored gamification design should shift the focus from developing and applying ever more nuanced typologies to understanding and measuring the key underlying determinants of user motivation in gameful systems. 
Moreover, given the considerable interrelationships between these determinants, we also argue that researchers should favor continuous representations of users’ motivations in specific situations instead of a dichotomous operationalization of user types as static manifestations of their preferences.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924000983/pdfft?md5=87b79cafe0b8b5075108d0b59d2035dc&pid=1-s2.0-S1071581924000983-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141324508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
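The analysis pipeline described in the abstract, correlating subscale scores across instruments and then running an exploratory factor analysis, can be sketched roughly as follows. The file name, column layout, and the fixed five-factor solution are illustrative assumptions, not the authors' actual code.

import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per respondent (n = 877), one column per
# subscale score from the four typology instruments (Bartle, Yee, BrainHex, HEXAD).
scores = pd.read_csv("typology_subscale_scores.csv")

# Step 1: empirical relations between the typologies (correlation matrix).
print(scores.corr(method="pearson").round(2))

# Step 2: exploratory factor analysis. The five-factor solution mirrors the
# dimensions reported above (Socialization, Escapism, Achievement, Reward
# Pursuit, Independence); in practice the number of factors would be chosen
# from eigenvalues or parallel analysis rather than fixed in advance.
z = StandardScaler().fit_transform(scores)
efa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0).fit(z)

loadings = pd.DataFrame(
    efa.components_.T,                      # rows: subscales, columns: factors
    index=scores.columns,
    columns=[f"Factor{i + 1}" for i in range(5)],
)
print(loadings.round(2))
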
{"title":"New-user experience evaluation in a semi-immersive and haptic-enabled virtual reality system for assembly operations","authors":"Sharon Macias-Velasquez , Hugo I. Medellin-Castillo , Abel Garcia-Barrientos","doi":"10.1016/j.ijhcs.2024.103312","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103312","url":null,"abstract":"<div><p>Virtual reality (VR) systems have been developed to enhance the conventional industrial design and manufacturing process, including worker training and factory planning. However, research studies have shown that the prolonged use of VR systems can result in certain discomforts among users. This research evaluates the user experience (UX) during the first interaction in a semi-immersive and haptic-enabled virtual assembly system. The aim is to evaluate the UX by considering variations in task duration in order to determine whether the length of the time spent on a virtual assembly task has an effect on the improvement or deterioration of the UX during the initial interactions of new users. The UX evaluation is based on key elements that characterize the user experience, such as perceptions of the product, emotions, consequences to use, realism, and physiological factors. In particular, the interest is to investigate whether these factors vary when performing a virtual assembly task along different time frames. The results have revealed significant differences in some dimensions of the user experience, physiological factors, and realism. This information encourages the formulation of guidelines to enhance the user experience of new operators of haptic-enabled virtual assembly systems.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141290755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preventing users from going down rabbit holes of extreme video content: A study of the role played by different modes of autoplay","authors":"Cheng Chen , Jingshi Kang , Pejman Sajjadi , S. Shyam Sundar","doi":"10.1016/j.ijhcs.2024.103303","DOIUrl":"10.1016/j.ijhcs.2024.103303","url":null,"abstract":"<div><p>The autoplay feature of video platforms is often blamed for users going down rabbit holes of binge-watching extreme content. However, autoplay is not necessarily a passive experience, because users can toggle the feature off if they want. While the automation aspect is passive, the toggle option signals interactivity, making it “interpassive,” which lies between completely passive autoplay and manual initiation of each video. We empirically compare these three modes of video viewing in a user study (<em>N</em> = 394), which exposed participants to either extreme or non-extreme content under conditions of manual play, interpassive autoplay, or completely passive autoplay. Results show that interpassive autoplay is favored over the other two. It triggers the control heuristic compared to passive autoplay, but leads to higher inattentiveness compared to manual play. Both the invoked control heuristic and inattentiveness result in higher rabbit hole perception. These findings have implications for socially responsible design of the autoplay feature.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141410275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed-reality art as shared experience for cross-device users: Materialize, understand, and explore","authors":"Hayoun Moon , Mia Saade , Daniel Enriquez , Zachary Duer , Hye Sung Moon , Sang Won Lee , Myounghoon Jeon","doi":"10.1016/j.ijhcs.2024.103291","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103291","url":null,"abstract":"<div><p>Virtual reality (VR) has opened new possibilities for creative expression, while the 360-degree head-worn display (HWD) delivers a fully immersive experience in the world of art. The immersiveness, however, comes with the cost of blocking out the physical world, including bystanders without an HWD. Therefore, VR experiences in public (e.g., galleries, museums) often lack social interactivity, which plays an important role in forming aesthetic experiences. In the current study, we explored the application of a cross-device mixed reality (MR) platform in the domain of art to enable social and inclusive experiences with artworks that utilize VR technology. Our concept of interest features co-located audiences of HWD and mobile device users who interact across physical and virtual worlds. We conducted focus groups (<em>N</em>=22) and expert interviews (<em>N</em>=7) to identify the concept’s potential scenarios and fundamental components, as well as expected benefits and concerns. We also share our process of creating <em>In-Between Spaces</em>, an interactive artwork in MR that encourages social interactivity among cross-device audiences. Our exploration presents a prospective direction for future VR/MR aesthetic content, especially at public events and exhibitions targeting crowd audiences.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141264275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DigCode—A generic mid-air gesture coding method on human-computer interaction","authors":"Xiaozhou Zhou , Lesong Jia , Ruidong Bai , Chengqi Xue","doi":"10.1016/j.ijhcs.2024.103302","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103302","url":null,"abstract":"<div><p>With high flexibility and rich semantic expressiveness, mid-air gesture interaction is an important part of natural human-computer interaction (HCI) and has broad application prospects. However, there is no unified representation frame for designing, recording, investigating and comparing HCI mid-air gestures. Therefore, this paper proposes an interpretable coding method, DigCode, for HCI mid-air gestures. DigCode converts the unstructured continuous actions into structured discrete string encoding. From the perspective of human cognition and expression, the research employed psychophysical methods to divide gesture actions into discrete intervals, defined the coding rules of representation in letters and numbers, and developed automated programs to enable encoding and decoding by using gesture sensors. The coding method can cover the existing representations of HCI mid-air gestures by considering human understanding and computer recognition and can be applied to HCI mid-air gesture design and gesture library construction.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141240866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From explainable to interactive AI: A literature review on current trends in human-AI interaction","authors":"Muhammad Raees , Inge Meijerink , Ioanna Lykourentzou , Vassilis-Javed Khan , Konstantinos Papangelis","doi":"10.1016/j.ijhcs.2024.103301","DOIUrl":"10.1016/j.ijhcs.2024.103301","url":null,"abstract":"<div><p>AI systems are increasingly being adopted across various domains and application areas. With this surge, there is a growing research focus and societal concern for actively involving humans in developing, operating, and adopting these systems. Despite this concern, most existing literature on AI and Human–Computer Interaction (HCI) primarily focuses on explaining how AI systems operate and, at times, allowing users to contest AI decisions. Existing studies often overlook more impactful forms of user interaction with AI systems, such as giving users agency beyond contestability and enabling them to adapt and even co-design the AI’s internal mechanics. In this survey, we aim to bridge this gap by reviewing the state-of-the-art in Human-Centered AI literature, the domain where AI and HCI studies converge, extending past Explainable and Contestable AI, delving into the Interactive AI and beyond. Our analysis contributes to shaping the trajectory of future Interactive AI design and advocates for a more user-centric approach that provides users with greater agency, fostering not only their understanding of AI’s workings but also their active engagement in its development and evolution.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141142214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The way you assess matters: User interaction design of survey chatbots for mental health","authors":"Yucheng Jin, Li Chen, Xianglin Zhao, Wanling Cai","doi":"10.1016/j.ijhcs.2024.103290","DOIUrl":"10.1016/j.ijhcs.2024.103290","url":null,"abstract":"<div><p>The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots for public mental status assessments. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues like anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered as embedded forms or within conversations, with the length of the questionnaire varying according to the psychological assessment. This study investigates how the <em>interaction style</em> of Closed-EQs and the <em>questionnaire length</em> affect user perceptions regarding survey credibility, enjoyment, and self-awareness, as well as their responses to Open-EQs in terms of quality and self-disclosure in a survey chatbot. We conducted a 2 (<em>interaction style</em>: form-based vs. conversation-based) <span><math><mo>×</mo></math></span> 3 (<em>questionnaire length</em>: short vs. middle vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that the form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. We discuss our findings for the interaction design of psychological assessment in a survey chatbot for mental health.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141140499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of interface design on cognitive workload in unmanned aerial vehicle control","authors":"Wenjuan Zhang , Yunmei Liu , David B. Kaber","doi":"10.1016/j.ijhcs.2024.103287","DOIUrl":"10.1016/j.ijhcs.2024.103287","url":null,"abstract":"<div><p>Unmanned Aerial Vehicle (UAV) control interfaces are critical channels for transferring information between the vehicle and an operator. Research on system performance has focused on enhancing vehicle automation and some work has evaluated cognitive workload for existing UAV interfaces. The potential for usable interface design to reduce cognitive workload during the early design phase has been largely overlooked. This study addresses these gaps by: (1) evaluating the effectiveness of a contemporary UAV interface design tool (the Modified GEDIS-UAV) to moderate user workload; (2) examining the effectiveness of various UAV interface designs for minimizing cognitive workload under different control task pacing; and (3) exploring the use of eye tracking measures, traditionally applied in other domains, as indicators of cognitive workload in UAV operations. We prototyped three different interface designs, classified as “baseline”, “enhanced” and “degraded” interfaces. Cognitive workload in UAV operation was manipulated in terms of levels of vehicle speed (“low” and “high”). Physiological and subjective measures of workload were collected for all combinations of interface design and task demand. Results revealed the “enhanced” interface to yield the lowest operator cognitive workload and supported operator resilience to increased control task demand, as compared to the “baseline” and “degraded” interfaces. In addition, task demand was found to elevate operator cognitive workload, particularly in terms of \"mental\" and \"temporal\" demands and operator perceptions of \"performance\". The study also demonstrated utility of eye-tracking technology for detecting cognitive workload in UAV operations. This research provides practical guidance for UAV control interface design to manage operator workload. The methods employed in the study are applicable to interface evaluation for various types of UAVs and other unmanned systems to enhance human-automation interaction.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141035920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Priming users with babies’ gestures: Investigating the influences of priming with different development origin of image schemas in gesture elicitation study","authors":"Yanming He , Qizhang Sun , Peiyao Cheng , Shumeng Hou , Lei Zhou","doi":"10.1016/j.ijhcs.2024.103288","DOIUrl":"10.1016/j.ijhcs.2024.103288","url":null,"abstract":"<div><p>Gesture elicitation study is an effective method to design gestures for various contexts. Through involving end-users, GES results in intuitive gestures because they directly reflect end-users’ mental models and preferences. However, limited by personal experience, end-users are not capable of taking full advantages of technology while proposing gestures, which is referred as legacy bias. To overcome this, previous studies demonstrate that users’ performance can be improved by priming, such as viewing gestures, watching fictional movies, and experiencing framed scenarios. This research extends this line of studies by considering the developmental origin of image schemas in priming. More specifically, we compared the influences of no-priming, priming with early image schemas (EIS), and priming with late image schemas (LIS) on GES. Controlled experiments were conducted (<em>N</em> = 120) along the three stages of GES: users’ generation of gestures (Experiment 1), final gesture sets (Experiment 2), and end-users’ learnability of gestures (Experiment 3). Results show that users are largely influenced by developmental origin of image schemas in priming. LIS-priming improve gesture proposal production in comparison to no-priming condition. As for end-users’ evaluation, EIS-priming gestures exhibit higher initial and overall learnability.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141023630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}