{"title":"Effects of various in-vehicle human–machine interfaces on drivers’ takeover performance and gaze pattern in conditionally automated vehicles","authors":"","doi":"10.1016/j.ijhcs.2024.103362","DOIUrl":"10.1016/j.ijhcs.2024.103362","url":null,"abstract":"<div><p>With the era of automated driving approaching, designing an effective and suitable human–machine interface (HMI) to present takeover requests (TORs) is critical to ensure driving safety. The present study conducted a simulated driving experiment to explore the effects of three HMIs (instrument panel, head-up display [HUD], and peripheral HMI) on takeover performance, simultaneously considering the TOR type (informative and generic TORs). Drivers’ eye movement data were also collected to investigate how drivers distribute their attention between the HMI and surrounding environment during the takeover process. The results showed that using the peripheral HMI to present TORs can shorten takeover time, and drivers rated this HMI as more useful and satisfactory than conventional HMIs (instrument panel and HUD). Eye movement analysis revealed that the peripheral HMI encourages drivers to spend more time gazing at the road ahead and less time gazing at the TOR information than the instrument panel and HUD, indicating a better gaze pattern for traffic safety. The HUD seemed to have a risk of capturing drivers’ attention, which resulted in an ‘attention tunnel,’ compared to the instrument panel. In addition, informative TORs were associated with better takeover performance and prompted drivers to spend less time gazing at rear-view mirrors than generic TORs. The findings of the present study can provide insights into the design and implementation of in-vehicle HMIs to improve the driving safety of automated vehicles.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142123006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflections on using the story completion method in designing tangible user interfaces","authors":"","doi":"10.1016/j.ijhcs.2024.103360","DOIUrl":"10.1016/j.ijhcs.2024.103360","url":null,"abstract":"<div><p>There are many design techniques to support the co-design of tangible technologies. However, few of these design methods allow the involvement of users at scale and across diverse geographic locations. While popular in psychology, the story completion method (SCM) has only recently started to be adopted within the HCI community. We explore whether SCM can generate meaningful design insights from large, diverse study populations for the design of Tangible User Interfaces (TUIs). Based on the results of two questionnaire studies using SCM, we conclude that the method can be used to generate meaningful design insights. Drawing on a systematic review of 870 TUI papers, we then contextualise the strengths and weaknesses of SCM against commonly used design methods, before reflecting on our experience of using the method across two distinct domains. We discuss the advantages of the method (particularly in terms of the scale and diversity of participation) and the challenges (particularly around constructing meaningful story stems, and developing the correct level of scaffolding to support creativity). We conclude that SCM is particularly suitable to be used in the early stages of the design process to understand the socio-cultural context of deployment.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001435/pdfft?md5=de503404b927c3522829b4baaecf17e7&pid=1-s2.0-S1071581924001435-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spacetime trajectories as overlapping rhythms","authors":"","doi":"10.1016/j.ijhcs.2024.103358","DOIUrl":"10.1016/j.ijhcs.2024.103358","url":null,"abstract":"<div><p>The navigation of two-dimensional spaces by rhythmic patterns on two buttons is investigated. It is shown how direction and speed of a moving object can be controlled with discrete commands consisting of duplets or triplets of taps, whose rate is proportional to one of two orthogonal velocity components. The imparted commands generate polyrhythms and polytempi that can be used to monitor the object movement by perceptual streaming. Tacking back and forth must be used to make progress along certain directions, similarly to sailing a boat upwind. The proposed rhythmic velocity-control technique is tested with a target-following task. Users effectively learn the tapping control actions, and they can keep a relatively small distance from a moving target. They can potentially rely on overlapping auditory rhythmic streams to compensate for temporary deprivation of visual position of the controlled object. The interface is minimal and symmetric, and can be adapted to different sensing and display devices, exploiting the symmetry of the human body and the ability to follow two concurrent rhythmic streams.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representing scents: An evaluation framework of scent-related experiences through associations between grounded and psychophysiological data","authors":"","doi":"10.1016/j.ijhcs.2024.103357","DOIUrl":"10.1016/j.ijhcs.2024.103357","url":null,"abstract":"<div><p>This study introduces an empirical approach for assessing human scent-related experiences within the field of Human-Computer Interaction (HCI). We labeled 43 fragrances based on grounded collective experience, incorporating semantic and impression-based data. Furthermore, we collected comprehensive psychophysiological data, including electroencephalogram (EEG), electrobulbogram (EBG), electrocardiogram (ECG), and facial dynamics captured by a camera, from participants who experienced the scents. By computing scent-wise similarity and correlating both grounded and psychophysiological scent spaces, we identified associations between them, demonstrating the potential of this approach to enhance our understanding of scent-related experiences. Additionally, we propose an iterative evaluation framework to refine the design of smell-based interactions and we conduct a real-life study to validate this framework.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cognitive abilities predict performance in everyday computer tasks","authors":"","doi":"10.1016/j.ijhcs.2024.103354","DOIUrl":"10.1016/j.ijhcs.2024.103354","url":null,"abstract":"<div><p>Fluency with computer applications has assumed a crucial role in work-related and other day-to-day activities. While prior experience is known to predict performance in tasks involving computers, the effects of more stable factors like cognitive abilities remain unclear. Here, we report findings from a controlled study (<span><math><mrow><mi>N</mi><mo>=</mo><mn>88</mn></mrow></math></span>) covering a wide spectrum of commonplace applications, from spreadsheets to video conferencing. Our main result is that cognitive abilities exert a significant, independent, and broad-based effect on computer users’ performance. In particular, users with high working memory, executive control, and perceptual reasoning ability complete tasks more quickly and with greater success while experiencing lower mental load. Remarkably, these effects are similar to or even larger in magnitude than the effects of prior experience in using computers and in completing tasks similar to those encountered in our study. However, the effects are varying and application-specific. We discuss the role that user interface design bears on decreasing ability-related differences, alongside benefits this could yield for functioning in society.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S107158192400137X/pdfft?md5=a902433d6ce6aad8ad7b4833a2deb786&pid=1-s2.0-S107158192400137X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do we really need this robot? Technology requirements for vestibular rehabilitation: Input from patients and clinicians","authors":"","doi":"10.1016/j.ijhcs.2024.103356","DOIUrl":"10.1016/j.ijhcs.2024.103356","url":null,"abstract":"<div><h3>Background</h3><p>A main challenge in many types of physical rehabilitation is patient adherence to recommended exercises. Vestibular rehabilitation is the most effective treatment for the symptoms of dizziness, vertigo, imbalance, and nausea caused by vestibular disorders, but adherence levels are particularly low as the rehabilitation program calls for many short exercise sets during the day, which can worsen symptoms and impair balance in the short term. Technological tools have the potential to increase adherence, but to date, there has been no comprehensive analysis, in the context of vestibular rehabilitation, of the specific needs from technology, of its limitations, and of concerns regarding its use.</p></div><div><h3>Objective</h3><p>The aim of the study is to identify the main features required from technology for vestibular rehabilitation, as perceived by patients with vestibular disorders and by vestibular physical therapists, using a socially assistive robot as a test case. We seek here to provide practical information for the development of future vestibular rehabilitation technologies which are based on human-computer interaction (HCI) and human-robot interaction (HRI).</p></div><div><h3>Methods</h3><p>We conducted a qualitative study with six focus groups (<em>N</em> = 39). Three groups of patients with vestibular disorders (<em>N</em> = 18) and three groups of physical therapists (<em>N</em> = 21) participated in this study. The participants answered structured questions on technologies for vestibular rehabilitation, watched a presentation of two videos of a socially assistive robot (SAR), and completed an online survey. Thematic analysis with a mixed deductive and inductive approach was used to analyze the data.</p></div><div><h3>Results</h3><p>Participants preferred phone applications or virtual/augmented reality platforms over an embodied robotic platform. They wanted technology to be adaptive to the different stages of rehabilitation, gamified, easy to use, safe, reliable, portable, and accessible remotely by the therapist. They reported that the technology should provide feedback on the quality and quantity of exercise performance and monitor these factors while considering the tolerability of the ensuing disruptive symptoms. Participants expected that using technology as part of the rehabilitation process would shorten exercise sessions and improve clinical outcomes compared to standard care. SARs for vestibular rehabilitation were perceived as useful mostly for children and patients with chronic vestibular disorders, and their potential use for rehabilitation raised concerns regarding safety, ethics, and technical complexity.</p></div><div><h3>Conclusions</h3><p>Although SARs can potentially be used to increase exercise adherence, a phone application appears to be a more suitable medium for this purpose, raising fewer notable concerns from users. 
We provide a summary of perceived advantages and disadvantages of te","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001393/pdfft?md5=d6491755bf4e3baa08ca08cd42cb3db8&pid=1-s2.0-S1071581924001393-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What you say vs what you do: Utilizing positive emotional expressions to relay AI teammate intent within human–AI teams","authors":"","doi":"10.1016/j.ijhcs.2024.103355","DOIUrl":"10.1016/j.ijhcs.2024.103355","url":null,"abstract":"<div><p>With the expansive growth of AI’s capabilities in recent years, researchers have been tasked with developing and improving human-centered AI collaborations, necessitating the creation of human–AI teams (HATs). However, the differences in communication styles between humans and AI often prevent human teammates from fully understanding the intent and needs of AI teammates. One core difference is that humans naturally leverage a positive emotional tone during communication to convey their confidence or lack thereof to convey doubt in their ability to complete a task. Yet, this communication strategy must be explicitly designed in order for an AI teammate to be human-centered. In this mixed-methods study, 45 participants completed a study examining how human teammates interpret the behaviors of their AI teammates when they express different positive emotions via specific words/phrases. Quantitative results show that, based on corresponding behaviors, AI teammates were able to use displays of emotion to increase trust in AI teammates and the positive mood of the human teammate. Additionally, our qualitative findings indicate that participants preferred their AI teammates to increase the intensity of their displayed emotions to help reduce the perceived risk of their AI teammate’s behavior. When taken in sum, these findings describe the relevance of AI teammates expressing intensities of emotion while performing various behavioral decisions as a continued means to provide social support to the wider team and better task performance.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142011569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A rapid-prototyping toolkit for people with intellectual disabilities","authors":"","doi":"10.1016/j.ijhcs.2024.103347","DOIUrl":"10.1016/j.ijhcs.2024.103347","url":null,"abstract":"<div><p>Micro-electronics tools, coupled with card-based tools, are employed for prototyping smart devices with non-experts. Lately, researchers have started investigating what tools can actively engage people with intellectual disabilities (ID) in their prototyping. This paper posits itself in this line of work. It presents a toolkit for ID people to rapidly prototype together their own ideas of smart things, for their own shared environment. It analyses and discusses engaging or disengaging features of the toolkit in light of the results of two workshops with eight ID participants. Lessons of broad interest for the design of similar toolkits are drawn from the literature and study findings.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond digital privacy: Uncovering deeper attitudes toward privacy in cameras among older adults","authors":"","doi":"10.1016/j.ijhcs.2024.103345","DOIUrl":"10.1016/j.ijhcs.2024.103345","url":null,"abstract":"<div><p>Fall detection cameras at home can detect emergencies of older adults and send timely life-saving alerts. However, the equilibrium between privacy protection and life safety remains a controversial issue when using cameras. In this study, we assessed the attitudes of older adults towards the privacy issue of cameras using surveys (N=389) and interviews (N=20). Furthermore, we conducted a co-design workshop (N=6) in which older adults and designers collaborated to develop a prototype of cameras. We found that for older adults, the disclosure of privacy not only involves a leakage of personal information, but also influences their dignity and control, which has rarely been expressed directly in the past. Our results expand the conceptualisation of privacy and provide novel design implications for smart product development on privacy for older adults.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the use of virtual reality prototypes in architecture to collect user experiences: An assessment of the comparability of patient experiences in a virtual and a real ambulatory pathway","authors":"","doi":"10.1016/j.ijhcs.2024.103342","DOIUrl":"10.1016/j.ijhcs.2024.103342","url":null,"abstract":"<div><p>Virtual Reality (VR) enables the low-cost production of realistic prototypes of buildings at early stages of architectural projects. Such prototypes may be used to gather the experiences of future users and iterate early on in the design. However, it is essential to evaluate whether what is experienced within such VR prototypes corresponds to what will be experienced in reality. Here, we use an innovative method to compare the experiences of patients in a real building and in a virtual environment that plays the role of a prototype that could have been created by architects during the design phase. We first designed and implemented a VR environment replicating an existing ambulatory pathway. Then, we used micro-phenomenological interviews to collect the experiences of real patients in the VR environment (n=8), along with VR traces and first-person point of view videos, and in the real ambulatory pathway (n=8). We modeled and normalized the experiences, and compared them systematically. Results suggest that patients live comparable experiences along various experiential dimensions such as thought, emotion, sensation, social and sensory perceptions, and that VR prototypes may be adequate to assess issues with architectural design. This work opens unique perspectives towards involving patients in User-Centered Design in architecture, though challenges lie ahead in how to design VR prototypes from early blueprints of architects.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001253/pdfft?md5=e68b25d62c0e0391983e65bcbf8c4366&pid=1-s2.0-S1071581924001253-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}