Title: A rapid-prototyping toolkit for people with intellectual disabilities
Authors: Rosella Gennari, Maristella Matera, Alessandra Melonio, Marco Mores, Diego Morra, Mehdi Rizvi
DOI: 10.1016/j.ijhcs.2024.103347
Published: International Journal of Human-Computer Studies, vol. 192, Article 103347, 2024-08-13
Abstract: Micro-electronics tools, coupled with card-based tools, are employed for prototyping smart devices with non-experts. Lately, researchers have started investigating which tools can actively engage people with intellectual disabilities (ID) in prototyping. This paper positions itself in this line of work. It presents a toolkit that enables people with ID to rapidly prototype, together, their own ideas of smart things for their shared environment. It analyses and discusses engaging or disengaging features of the toolkit in light of the results of two workshops with eight participants with ID. Lessons of broad interest for the design of similar toolkits are drawn from the literature and the study findings.
Title: Beyond digital privacy: Uncovering deeper attitudes toward privacy in cameras among older adults
Authors: Weiwei Zhang, Jianing Yin, Ka I Chan, Tongxin Sun, Tongtong Jin, Jihong Jeung, Jiangtao Gong
DOI: 10.1016/j.ijhcs.2024.103345
Published: International Journal of Human-Computer Studies, vol. 192, Article 103345, 2024-08-10
Abstract: Fall-detection cameras at home can detect emergencies involving older adults and send timely, life-saving alerts. However, the balance between privacy protection and life safety remains a controversial issue when using cameras. In this study, we assessed older adults' attitudes towards the privacy issues of cameras using surveys (N=389) and interviews (N=20). Furthermore, we conducted a co-design workshop (N=6) in which older adults and designers collaborated to develop a camera prototype. We found that, for older adults, the disclosure of privacy involves not only a leakage of personal information but also their dignity and control, which has rarely been expressed directly in the past. Our results expand the conceptualisation of privacy and provide novel design implications for privacy in smart product development for older adults.
Title: Towards the use of virtual reality prototypes in architecture to collect user experiences: An assessment of the comparability of patient experiences in a virtual and a real ambulatory pathway
Authors: Jean-Philippe Rivière, Louis Vinet, Yannick Prié
DOI: 10.1016/j.ijhcs.2024.103342
Published: International Journal of Human-Computer Studies, vol. 192, Article 103342, 2024-08-07
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1071581924001253/pdfft?md5=e68b25d62c0e0391983e65bcbf8c4366&pid=1-s2.0-S1071581924001253-main.pdf
Abstract: Virtual Reality (VR) enables the low-cost production of realistic prototypes of buildings at early stages of architectural projects. Such prototypes may be used to gather the experiences of future users and iterate early on in the design. However, it is essential to evaluate whether what is experienced within such VR prototypes corresponds to what will be experienced in reality. Here, we use an innovative method to compare the experiences of patients in a real building and in a virtual environment that plays the role of a prototype that architects could have created during the design phase. We first designed and implemented a VR environment replicating an existing ambulatory pathway. Then, we used micro-phenomenological interviews to collect the experiences of real patients in the VR environment (n=8), along with VR traces and first-person point-of-view videos, and in the real ambulatory pathway (n=8). We modeled and normalized the experiences, and compared them systematically. Results suggest that patients live comparable experiences along various experiential dimensions such as thought, emotion, sensation, and social and sensory perceptions, and that VR prototypes may be adequate for assessing issues with architectural design. This work opens unique perspectives towards involving patients in user-centered design in architecture, though challenges lie ahead in how to design VR prototypes from architects' early blueprints.
Title: Connecting audiences with climate change: Towards humanised and action-focused data interactions
Authors: Marta Ferreira, Nuno Nunes, Pedro Ferreira, Henrique Pereira, Valentina Nisi
DOI: 10.1016/j.ijhcs.2024.103341
Published: International Journal of Human-Computer Studies, vol. 192, Article 103341, 2024-07-30
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1071581924001241/pdfft?md5=a32ba3c151f3843aa67936d8aa9a2206&pid=1-s2.0-S1071581924001241-main.pdf
Abstract: This paper investigates the relationship between design research and human-computer interaction (HCI) in the context of climate change communication and engagement. We discuss current practices in climate change communication and the decrease in concern and engagement caused by "crisis fatigue". Through Research through Design (RtD), we set out to investigate data humanism and how users react to climate change data, testing approaches to improve engagement. With this purpose, we designed and evaluated Finding Arcadia, an interactive data story that uses data humanism to shift the dialogue from crisis-focused to action-focused. One study with the original IMF visualisations (N = 17) and two studies in public spaces (N = 12 and N = 64) suggest that contextualising the data and presenting actionable solutions help engage users with climate change issues, support the creation of solution-focused narratives, and aid users in interpreting and relating to climate data. From these results, we derive insights for designing empowering interactive data visualisations for resilient climate change engagement.
Title: Development of a measurement instrument for pedestrians' initial trust in automated vehicles
Authors: Siyuan Zhou, Xu Sun, Qingfeng Wang, Bingjian Liu, Gary Burnett
DOI: 10.1016/j.ijhcs.2024.103344
Published: International Journal of Human-Computer Studies, vol. 191, Article 103344, 2024-07-26
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1071581924001277/pdfft?md5=b2bb29c21ca7b7eafefd0b998c22c85c&pid=1-s2.0-S1071581924001277-main.pdf
Abstract: Considering that a significant portion of the current pedestrian population has limited exposure to automated vehicles (AVs), a reliable instrument for assessing pedestrians' initial trust in AVs is crucial. Using a survey of 436 pedestrians, this study developed and validated the PITQA (Pedestrians' Initial Trust Questionnaire for AVs) scale using partial least squares structural equation modeling (PLS-SEM). The proposed scale will be valuable for monitoring the progression of trust over time and for considering trust-related factors during the design process. The results revealed that seven key constructs significantly contribute to predicting initial trust between pedestrians and AVs: propensity to trust, perceived statistical reliability, dependability and competence, perceived predictability, familiarity, authority/subversion, care/harm, and sanctity/degradation. These findings shed light on how individuals' trust propensity and different trust/trustworthiness attributes may constitute different aspects of initial trust in the pedestrian-AV context. The developed scale can be a useful tool for future research on trust calibration and the design of AVs tailored for vulnerable road users.
Title: Musician-AI partnership mediated by emotionally-aware smart musical instruments
Authors: Luca Turchet, Domenico Stefani, Johan Pauwels
DOI: 10.1016/j.ijhcs.2024.103340
Published: International Journal of Human-Computer Studies, vol. 191, Article 103340, 2024-07-23
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S107158192400123X/pdfft?md5=9a551949f1594cc78460b20e32ef1a41&pid=1-s2.0-S107158192400123X-main.pdf
Abstract: The integration of emotion recognition capabilities within musical instruments can spur the emergence of novel art formats and services for musicians. This paper proposes the concept of emotionally-aware smart musical instruments: a class of musical devices embedding an artificial intelligence agent able to recognize the emotion contained in the musical signal. Two prototypes, an emotionally-aware smart piano and an emotionally-aware smart electric guitar, were created, each embedding a recognition method for happiness, sadness, relaxation, aggressiveness, and combinations thereof. A user study, conducted with eleven pianists and eleven electric guitarists, revealed the strengths and limitations of the developed technology. On average, musicians appreciated the proposed concept and found value in it across various musical activities. Most participants tended to excuse the system's erroneous or partially erroneous classifications of the emotions they expressed, reporting that they understood why a given output was produced. Some participants even seemed to trust the system more than their own judgments. Conversely, other participants asked for improvements in the system's accuracy, reliability, and explainability in order to achieve a higher degree of partnership with it. Our results suggest that, while desirable, perfect prediction of the intended emotion is not an absolute requirement for music emotion recognition to be useful in the construction of smart musical instruments.
Title: Visualisations with semantic icons: Assessing engagement with distracting elements
Authors: Muna Alebri, Enrico Costanza, Georgia Panagiotidou, Duncan P. Brumby, Fatima Althani, Riccardo Bovo
DOI: 10.1016/j.ijhcs.2024.103343
Published: International Journal of Human-Computer Studies, vol. 191, Article 103343, 2024-07-23
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1071581924001265/pdfft?md5=1b5dc2bccd837038da33997e5f1f4935&pid=1-s2.0-S1071581924001265-main.pdf
Abstract: As visualisations reach a broad range of audiences, designing visualisations that attract and engage becomes more critical. Prior work suggests that semantic icons entice and immerse the reader; however, little is known about their impact during informational tasks and when the viewer's attention is divided by a distracting element. To address this gap, we first explored a variety of semantic icons with various visualisation attributes. The findings of this exploration shaped the design of our primary comparative online user studies, in which participants saw a target visualisation alongside a distracting visualisation on a web page and were asked to extract insights. Their engagement was measured through three dependent variables: (1) visual attention, (2) effort to write insights, and (3) self-reported engagement. In Study 1, we found that visualisations with semantic icons were consistently perceived as more engaging than the plain version. However, we found no differences in visual attention or effort between the two versions. Thus, we ran Study 2 using visualisations with more salient semantic icons to achieve maximum contrast. The results were consistent with our first study. Furthermore, we found that semantic icons elevated engagement with visualisations depicting topics participants found less interesting and engaging. We extend prior work by demonstrating the value of semantic icons not only for first impressions but also after an informational task (extracting insights) and during reflection on the visualisation. Our findings may help visualisation designers and storytellers keen on designing engaging visualisations with limited resources. We also contribute reflections on measuring engagement with visualisations and suggest future directions.
Title: Audio augmented reality using sonification to enhance visual art experiences: Lessons learned
Authors: Abhraneil Dam, YeaJi Lee, Arsh Siddiqui, Wallace Santos Lages, Myounghoon Jeon
DOI: 10.1016/j.ijhcs.2024.103329
Published: International Journal of Human-Computer Studies, vol. 191, Article 103329, 2024-07-18
Abstract: Augmenting visual art in art galleries can be an effective Audio Augmented Reality (AAR) application for indoor exploration. In the current study, eight paintings from four genres were augmented with audio through sonification. Basic Audio was generated by a sonification algorithm that identified the major colors of each painting, and Enhanced Audio was generated by a musician enhancing the Basic Audio; both were presented with the paintings and compared against a No Audio condition. Twenty-six participants viewed each painting in all three conditions; eye-gaze metrics and qualitative data were collected. Results showed that Enhanced Audio led to significantly greater engagement and more positive sentiments than Basic Audio. Thematic analysis revealed semantic and syntactic relationships between the audio and the paintings, and a tendency for the audio to guide users' gaze over time. Findings from this study can guide future AAR developments in improving auditory display designs that enhance visual experiences.
Title: Investigating the effect of key size, typing angle, and typing technique of virtual keyboard on typing productivity, biomechanics, and usability in a mixed reality environment
Authors: Chiuhsiang Joe Lin, Susmitha Canny
DOI: 10.1016/j.ijhcs.2024.103339
Published: International Journal of Human-Computer Studies, vol. 191, Article 103339, 2024-07-17
Abstract: This study investigates the effects of key size, typing angle, and typing technique on typing productivity, biomechanics (muscle activity), and subjective experience with a mixed reality keyboard. The findings suggest that smaller key sizes, such as 16 mm, may be unsuitable due to slower typing speed, lower accuracy, poorer user experience, higher muscle activity, and greater motion sickness. Typing with both index fingers yields the highest typing speed, while using a single index finger provides higher accuracy. Placing the keyboard at eye height leads to the highest typing speed, as participants can easily view the keys and the virtual environment simultaneously. However, typing accuracy is not affected by typing angle or typing technique. Applying these findings to virtual keyboard design could improve workers' productivity and decrease errors in mixed reality environments.
Title: Evaluating the effects of user motion and viewing mode on target selection in augmented reality
Authors: Yang Li, Juan Liu, Jin Huang, Yang Zhang, Xiaolan Peng, Yulong Bian, Feng Tian
DOI: 10.1016/j.ijhcs.2024.103327
Published: International Journal of Human-Computer Studies, vol. 191, Article 103327, 2024-07-09
Abstract: Target selection is a crucial task in augmented reality (AR). Recent evidence suggests that user motion can significantly influence target selection. However, no systematic research has investigated target selection across user motions of varied intensity and AR settings. This paper investigates the effects of four user motions (standing, walking, running, and jumping) and two viewing modes (viewpoint-dependent and viewpoint-independent) on user performance in AR target selection. Two typical selection techniques (virtual hand and ray-casting) were used for short-range and long-range selection tasks, respectively. Our results indicate that selection performance decreased as the intensity of user motion increased, and that users performed better in the viewpoint-independent mode than in the viewpoint-dependent mode. We also observed that users took longer to select targets with the ray-casting technique than with the virtual hand technique. We conclude with a set of design guidelines to improve users' AR target selection performance while in motion.