{"title":"Connecting audiences with climate change: Towards humanised and action-focused data interactions","authors":"","doi":"10.1016/j.ijhcs.2024.103341","DOIUrl":"10.1016/j.ijhcs.2024.103341","url":null,"abstract":"<div><p>This paper investigates the relationship between design research and humancomputer interaction (HCI) in the context of climate change communication and engagement. We discuss current practices in climate change communication and the decrease in concern and engagement caused by “crisis fatigue”. Through Research through Design (RtD), we set out to investigate data humanism and how users react to climate change data, testing approaches to improve engagement. With this purpose, we designed and evaluated <em>Finding Arcadia</em>, an interactive data story that uses data humanism to shift the dialogue from crisis-focused to action-focused. One study with the original IMF visualisations (<em>N</em> = 17) and two studies in public spaces (<em>N</em> = 12 and <em>N</em> = 64) point to the contextualization of the data and presenting actionable solutions helping in engaging users with climate change issues; help in creating solution-focused narratives and interpreting and relating with climate data. From these results, we derive insights for designing empowering interactive data visualizations for resilient climate change engagement.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001241/pdfft?md5=a32ba3c151f3843aa67936d8aa9a2206&pid=1-s2.0-S1071581924001241-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141963845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a measurement instrument for pedestrians’ initial trust in automated vehicles","authors":"","doi":"10.1016/j.ijhcs.2024.103344","DOIUrl":"10.1016/j.ijhcs.2024.103344","url":null,"abstract":"<div><p>Considering that a significant portion of the current pedestrian population has limited exposure to automated vehicles (AVs), it is crucial to have a reliable instrument for assessing pedestrians’ initial trust in AVs. Using a survey of 436 pedestrians, this study developed and validated a PITQA (Pedestrians’ Initial Trust Questionnaire for AVs) scale using partial least squares structural equation modeling (PLS-SEM). The proposed scale will be valuable in monitoring the progression of trust over time and considering trust-related factors during the design process. The results revealed that seven key constructs significantly contribute to predicting initial trust between pedestrians and AVs. These constructs include <em>propensity to trust, perceived statistical reliability, dependability and competence, perceived predictability, familiarity, authority/subversion, care/harm</em>, and <em>sanctity/degradation</em>. These shed light on how the trust propensity of individuals, different trust/trustworthiness attributes might constitute different aspects of initial trust in the pedestrian-AV context. The developed scale can be a potentially useful tool for future research endeavors concerning trust calibration and the design of AVs specifically tailored for vulnerable road users.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001277/pdfft?md5=b2bb29c21ca7b7eafefd0b998c22c85c&pid=1-s2.0-S1071581924001277-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141851092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Musician-AI partnership mediated by emotionally-aware smart musical instruments","authors":"","doi":"10.1016/j.ijhcs.2024.103340","DOIUrl":"10.1016/j.ijhcs.2024.103340","url":null,"abstract":"<div><p>The integration of emotion recognition capabilities within musical instruments can spur the emergence of novel art formats and services for musicians. This paper proposes the concept of emotionally-aware smart musical instruments, a class of musical devices embedding an artificial intelligence agent able to recognize the emotion contained in the musical signal. This spurs the emergence of novel services for musicians. Two prototypes of emotionally-aware smart piano and smart electric guitar were created, which embedded a recognition method for happiness, sadness, relaxation, aggressiveness and combination thereof. A user study, conducted with eleven pianists and eleven electric guitarists, revealed the strengths and limitations of the developed technology. On average musicians appreciated the proposed concept, who found its value in various musical activities. Most of participants tended to justify the system with respect to erroneous or partially erroneous classifications of the emotions they expressed, reporting to understand the reasons why a given output was produced. Some participants even seemed to trust more the system than their own judgments. Conversely, other participants requested to improve the accuracy, reliability and explainability of the system in order to achieve a higher degree of partnership with it. Our results suggest that, while desirable, perfect prediction of the intended emotion is not an absolute requirement for music emotion recognition to be useful in the construction of smart musical instruments.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S107158192400123X/pdfft?md5=9a551949f1594cc78460b20e32ef1a41&pid=1-s2.0-S107158192400123X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141850114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualisations with semantic icons: Assessing engagement with distracting elements","authors":"","doi":"10.1016/j.ijhcs.2024.103343","DOIUrl":"10.1016/j.ijhcs.2024.103343","url":null,"abstract":"<div><p>As visualisations reach a broad range of audiences, designing visualisations that attract and engage becomes more critical. Prior work suggests that semantic icons entice and immerse the reader; however, little is known about their impact with informational tasks and when the viewer’s attention is divided because of a distracting element. To address this gap, we first explored a variety of semantic icons with various visualisation attributes. The findings of this exploration shaped the design of our primary comparative online user studies, where participants saw a target visualisation with a distracting visualisation on a web page and were asked to extract insights. Their engagement was measured through three dependent variables: (1) visual attention, (2) effort to write insights, and (3) self-reported engagement. In Study 1, we discovered that visualisations with semantic icons were consistently perceived to be more engaging than the plain version. However, we found no differences in visual attention and effort between the two versions. Thus, we ran Study 2 using visualisations with more salient semantic icons to achieve maximum contrast. The results were consistent with our first Study. Furthermore, we found that semantic icons elevated engagement with visualisations depicting less interesting and engaging topics from the participant’s perspective. We extended prior work by demonstrating the semantic value after performing an informational task (extracting insights) and reflecting on the visualisation, besides its value to the first impression. Our findings may be helpful to visualisation designers and storytellers keen on designing engaging visualisations with limited resources. We also contribute reflections on engagement measurements with visualisations and provide future directions.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001265/pdfft?md5=1b5dc2bccd837038da33997e5f1f4935&pid=1-s2.0-S1071581924001265-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141850051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audio augmented reality using sonification to enhance visual art experiences: Lessons learned","authors":"","doi":"10.1016/j.ijhcs.2024.103329","DOIUrl":"10.1016/j.ijhcs.2024.103329","url":null,"abstract":"<div><p>Augmenting visual art in art galleries can be an effective Audio Augmented Reality (AAR) application for indoor exploration. In the current study, eight paintings from four genres were augmented with audio through their sonification. Basic Audio was generated using a sonification algorithm by identifying the major colors of the paintings, and Enhanced Audio was generated by a musician enhancing the Basic Audio; these were presented with the paintings to compare against No Audio. Twenty-six participants viewed each painting in all three conditions; eye gaze metrics, and qualitative data were collected. Results showed that Enhanced Audio led to significantly greater engagement and positive sentiments, compared to Basic Audio. Thematic analysis showed semantic and syntactic relationships of the audio with the paintings, and a tendency to guide users’ gaze over time. Findings from this study can guide future AAR developments to improve auditory display designs to enhance visual experiences.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141959856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the effect of key size, typing angle, and typing technique of virtual keyboard on typing productivity, biomechanics, and usability in a mixed reality environment","authors":"","doi":"10.1016/j.ijhcs.2024.103339","DOIUrl":"10.1016/j.ijhcs.2024.103339","url":null,"abstract":"<div><p>This study investigates the effects of key sizes, typing angles, and typing techniques on typing productivity, biomechanics (muscle activity), and subjective experience with a mixed reality keyboard. The findings suggest that using smaller key sizes, such as 16 mm, may not be suitable due to slower typing speed, lower accuracy, lower user experience, higher muscle activity, and higher motion sickness. Typing with both index fingers results in the highest typing speed while using only a single index finger provides higher accuracy. Placing the keyboard at eye height leads to the highest typing speed, as participants can easily view the keys and the virtual environment simultaneously. However, typing accuracy is not affected by typing angle and typing technique. Implementing these findings on the virtual keyboard design could potentially benefit workers’ productivity and decrease errors in the mixed reality environment.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141729314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the effects of user motion and viewing mode on target selection in augmented reality","authors":"","doi":"10.1016/j.ijhcs.2024.103327","DOIUrl":"10.1016/j.ijhcs.2024.103327","url":null,"abstract":"<div><p>Target selection is a crucial task in augmented reality (AR). Recent evidence suggests that user motion can significantly influence target selection. However, no systematic research has been conducted on target selection within varied intensity user motions and AR settings. This paper was carried out to investigate the effects of four user motions (i.e., standing, walking, running, and jumping) and two viewing modes (i.e., viewpoint-dependent and viewpoint-independent) on user performance of target selection in AR. Two typical selection techniques (i.e., virtual hand and ray-casting) were utilized for short-range and long-range selection tasks, respectively. Our results indicate that the target selection performance decreased as the intensity of user motion increased, and users demonstrated better performance in the viewpoint-independent mode than in the viewpoint-dependent mode. We also observed that users took a longer amount of time to select targets when using the ray-casting technique than the virtual hand technique. We conclude with a set of design guidelines to improve the AR target selection performance of users while in motion.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141707376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing attentional issues in augmented reality with adaptive agents: Possibilities and challenges","authors":"Brandon Victor Syiem , Ryan M. Kelly , Tilman Dingler , Jorge Goncalves , Eduardo Velloso","doi":"10.1016/j.ijhcs.2024.103324","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103324","url":null,"abstract":"<div><p>Recent work on augmented reality (AR) has explored the use of adaptive agents to overcome attentional issues that negatively impact task performance. However, despite positive technical evaluations, adaptive agents have shown no significant improvements to user task performance in AR. Furthermore, previous works have primarily evaluated such agents using abstract tasks. In this paper, we develop an agent that observes user behaviour and performs appropriate actions to mitigate attentional issues in a realistic sense-making task in AR. We employ mixed methods to evaluate our agent in a between-subject experiment (N=60) to understand the agent’s effect on user task performance and behaviour. While we find no significant improvements in task performance, our analysis revealed that users’ preferences and trust in the agent affected their receptiveness of the agent’s recommendations. We discuss the pitfalls of autonomous agents and highlight the need to shift from designing better Human–AI interactions to better Human–AI collaborations.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001083/pdfft?md5=340b634ee8226bee5158c6450e7027cb&pid=1-s2.0-S1071581924001083-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MakeNodes: Opening connected-IoT making to people with intellectual disability","authors":"Diego Morra , Giacomo Caslini , Marco Mores , Franca Garzotto , Maristella Matera","doi":"10.1016/j.ijhcs.2024.103325","DOIUrl":"https://doi.org/10.1016/j.ijhcs.2024.103325","url":null,"abstract":"<div><p>Recent developments in accessible electronic-making toolkits have opened up avenues for individuals with intellectual disabilities(ID) to actively participate in creating their own smart objects based on the Internet of Things (IoT) technology. These toolkits present a novel opportunity to foster the inclusion of this often-under-considered community in the development of personalized solutions that can impact their autonomy and well-being. However, understanding IoT encompasses comprehending the inter-object connection paradigm at the core of this technology, an aspect that is not adequately covered by the existing accessible toolkits. To fill this gap, this paper illustrates the design and evaluation of <em>MakeNodes</em>, a tangible toolkit to involve people with ID in making smart-thing networks in a guided, collaborative, and engaging way. The toolkit comprises a series of input and output nodes that can be paired to make any object or surface smart. The paper illustrates how MakeNodes empowers people with ID through IoT-making activities that promote collaborative work to address personal needs. Key findings are discussed in the form of lessons learned concerning the importance of physical and visual elements, hands-on exploration, and interaction robustness in improving the accessibility of IoT-making toolkits for people with ID.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001095/pdfft?md5=4de440f0e972702eea54c333020032ce&pid=1-s2.0-S1071581924001095-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the potential of supporting autonomy in online video interview training platforms","authors":"","doi":"10.1016/j.ijhcs.2024.103326","DOIUrl":"10.1016/j.ijhcs.2024.103326","url":null,"abstract":"<div><p>Rising unemployment has led to many discouraged job seekers. While the impact of job seekers’ motivation on interview performance is acknowledged in previous research, little attention has been given to understanding the effect of training on interview motivation <em>and</em> performance. We present <span>InterviewApp</span>, an online interview training tool aiming to support interview motivation through autonomy, relatedness and competence needs derived from Self-Determination Theory and, in turn, performance. Through a four-month study (N=135), we assess its effectiveness in supporting job seekers’ interview motivation and performance. Our results demonstrate the role of autonomy in mediating the effect of training on performance. We found that the intervention significantly affected the job seekers’ perceived autonomy. Furthermore, engagement with the recording and feedback features of the tool positively impacted performance. Overall, job seekers found InterviewApp helpful for online interview training and valued the provided expert feedback. These findings have implications for the design of online interview training tools and for behaviour change interventions to support employment.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":null,"pages":null},"PeriodicalIF":5.3,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1071581924001101/pdfft?md5=abddce4b6b73e37aa3de9755a9afa0de&pid=1-s2.0-S1071581924001101-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141959857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}