International Journal of Human-Computer Studies: Latest Publications

Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-11-17 DOI: 10.1016/j.ijhcs.2024.103403
Yeliz Yücel, Kerem Rızvanoğlu
{"title":"Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants","authors":"Yeliz Yücel,&nbsp;Kerem Rızvanoğlu","doi":"10.1016/j.ijhcs.2024.103403","DOIUrl":"10.1016/j.ijhcs.2024.103403","url":null,"abstract":"<div><div>People with visual imparments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums.Museums, as important centers for exploration and learning, often overlook these accessibility issues.This abstract presents the iMuse Model, an innovative approach to create accessible and inclusive museum environments for them.The iMuse Model centers around the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey.This model consists of a two-layer study.The first layer involves collaboration with PWVI and their sight loss instructors to develop a five level framework tailored to their unique needs and challenges.The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique.Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development.Notably, we have developed two versions of the voice assistant: one with a humorous interaction and one with a non-humorous approach. Users expressed a preference for the humorous version, leading to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results.In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants.Our model not only aid PWVI in navigating unfamiliar spaces but also enhance their social learning, engagement, and appreciation of cultural heritage within museum environments.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103403"},"PeriodicalIF":5.3,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
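The abstract contrasts a humorous and a non-humorous interaction style built on the same navigation content. Below is a minimal sketch of that idea; the location names and description texts are invented for illustration and are not taken from the iMuse prototype.

```python
# Hypothetical sketch: serving humorous vs. plain audio descriptions
# for a museum voice assistant. All names and texts are illustrative.

DESCRIPTIONS = {
    "medusa_column": {
        "plain": (
            "You are facing the Medusa column. The marble head at its "
            "base is placed upside down and dates to the Roman period."
        ),
        "humorous": (
            "Meet Medusa, resting upside down for fifteen centuries. "
            "Legend says she was placed that way so her gaze turns no one to stone."
        ),
    },
}

def describe(location: str, style: str = "plain") -> str:
    """Return the audio-description text for a location in the chosen style."""
    entry = DESCRIPTIONS.get(location)
    if entry is None:
        return "Sorry, I don't have a description for that spot yet."
    return entry.get(style, entry["plain"])

print(describe("medusa_column", style="humorous"))
```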
Integrating augmented reality and LLM for enhanced cognitive support in critical audio communications
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-11-06 DOI: 10.1016/j.ijhcs.2024.103402
Fang Xu, Tianyu Zhou, Tri Nguyen, Haohui Bao, Christine Lin, Jing Du
{"title":"Integrating augmented reality and LLM for enhanced cognitive support in critical audio communications","authors":"Fang Xu ,&nbsp;Tianyu Zhou ,&nbsp;Tri Nguyen ,&nbsp;Haohui Bao ,&nbsp;Christine Lin ,&nbsp;Jing Du","doi":"10.1016/j.ijhcs.2024.103402","DOIUrl":"10.1016/j.ijhcs.2024.103402","url":null,"abstract":"<div><div>Operation and Maintenance (O&amp;M) missions are often time-sensitive and accuracy-dependent, requiring rapid and precise information processing in noisy, chaotic environments where oral communication can lead to cognitive overload and impaired decision-making. Augmented Reality (AR) and Large Language Models (LLMs) offer potential for enhancing situational awareness and lowering cognitive load by integrating digital visualizations with the physical world and improving dialogue management. However, synthesizing these technologies into a real-time system that effectively aids operators remains a challenge. This study explores the integration of AR and GPT-4, an advanced LLM, in time-sensitive O&amp;M tasks, aiming to enhance situational awareness and manage cognitive load during oral communications. A customized AR system, incorporating the Microsoft HoloLens2 for cognitive monitoring and GPT-4 for decision making assistance, was tested in a human subject experiment with 30 participants. The 2×2 factorial experiment evaluated the effects of AR and LLM assistance on task performance and cognitive load. Results demonstrated significant improvements in task accuracy and reductions in cognitive load, highlighting the effectiveness of AR and LLM integration in supporting O&amp;M missions. These findings emphasize the need for further research to optimize operational strategies in mission critical environments.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103402"},"PeriodicalIF":5.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
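As a rough illustration of the LLM half of such a system, here is a sketch that routes a transcribed radio instruction to GPT-4 to condense it into checklist steps. The prompt, helper name, and model choice are assumptions; the paper's actual pipeline (HoloLens 2 capture, cognitive monitoring) is not shown.

```python
# Minimal sketch of LLM decision support for noisy O&M voice traffic.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_instruction(transcript: str) -> str:
    """Condense a noisy spoken O&M instruction into a short checklist."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You assist maintenance operators. Rewrite the "
                        "instruction as at most three short, ordered steps."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(summarize_instruction(
    "uh after you isolate pump two bleed the line then confirm pressure "
    "is under forty psi before you open the bypass"))
```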
ChatGPT and me: First-time and experienced users' perceptions of ChatGPT's communicative ability as a dialogue partner
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-11-04 DOI: 10.1016/j.ijhcs.2024.103400
Iona Gessinger, Katie Seaborn, Madeleine Steeds, Benjamin R. Cowan
{"title":"ChatGPT and me: First-time and experienced users’ perceptions of ChatGPT’s communicative ability as a dialogue partner","authors":"Iona Gessinger ,&nbsp;Katie Seaborn ,&nbsp;Madeleine Steeds ,&nbsp;Benjamin R. Cowan","doi":"10.1016/j.ijhcs.2024.103400","DOIUrl":"10.1016/j.ijhcs.2024.103400","url":null,"abstract":"<div><div>Chatbots like ChatGPT have the potential to produce more natural conversational user interface interactions. Yet, we currently know little about perceptions of ChatGPT as a dialogue partner, and if interaction changes these. Through an online, two-stage, mixed methods study conducted in July 2023, in which first-time and experienced users living in the UK or Ireland engaged in tasks with ChatGPT, we show that interaction improves attitudes towards the system for first-time users, while these attitudes are already positive and stable in experienced users. We further show that first-time users’ perceptions of ChatGPT’s communicative ability (competence, human-likeness, and flexibility) are more dynamic than those of experienced users, although the experienced users’ perceptions also peak post-interaction. When reflecting on their interaction experience with ChatGPT, both groups were positive with little mention of limitations. We discuss the implications of these findings for user perceptions of ChatGPT as a dialogue partner, and highlight the potential risks of uncritical adoption of such technology.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103400"},"PeriodicalIF":5.3,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Traceable teleportation: Improving spatial learning in virtual locomotion
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-11-02 DOI: 10.1016/j.ijhcs.2024.103399
Ye Jia, Zackary P.T. Sin, Chen Li, Peter H.F. Ng, Xiao Huang, George Baciu, Jiannong Cao, Qing Li
{"title":"Traceable teleportation: Improving spatial learning in virtual locomotion","authors":"Ye Jia ,&nbsp;Zackary P.T. Sin ,&nbsp;Chen Li ,&nbsp;Peter H.F. Ng ,&nbsp;Xiao Huang ,&nbsp;George Baciu ,&nbsp;Jiannong Cao ,&nbsp;Qing Li","doi":"10.1016/j.ijhcs.2024.103399","DOIUrl":"10.1016/j.ijhcs.2024.103399","url":null,"abstract":"<div><div>In virtual reality, point-and-teleport (P&amp;T) is a locomotion technique that is popular for its user-friendliness, lowering workload and mitigating cybersickness. However, most P&amp;T schemes use instantaneous transitions, which has been known to hinder spatial learning. While replacing instantaneous transitions with animated interpolations can address this issue, they may inadvertently induce cybersickness. To counter these deficiencies, we propose <em><strong>Traceable Teleportation (TTP)</strong></em>, an enhanced locomotion technique grounded in a theoretical framework that was designed to improve spatial learning. <em>TTP</em> incorporates two novel features: an <em>Undo-Redo</em> mechanism that facilitates rapid back-and-forth movements, and a <em>Visualized Path</em> that offers additional visual cues. We have conducted a user study via a set of spatial learning tests within a virtual labyrinth to assess the effect of these enhancements on the P&amp;T technique. Our findings indicate that the <em>TTP Undo-Redo</em> design generally facilitates the learning of orientational spatial knowledge without incurring additional cybersickness or diminishing sense of presence.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103399"},"PeriodicalIF":5.3,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
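The abstract's central mechanism, the Undo-Redo teleport history, is essentially a two-stack undo/redo pattern over visited positions. A minimal sketch under that assumption follows; the types and method names are illustrative, not from the paper's implementation.

```python
# Two-stack undo/redo over teleport targets: undo steps back along the
# travel history, redo re-applies undone jumps, and a fresh teleport
# invalidates the redo branch.

class TeleportHistory:
    def __init__(self, start: tuple[float, float, float]):
        self.position = start
        self._undo: list[tuple[float, float, float]] = []
        self._redo: list[tuple[float, float, float]] = []

    def teleport(self, target: tuple[float, float, float]) -> None:
        """Jump to a new point; a fresh jump clears the redo branch."""
        self._undo.append(self.position)
        self._redo.clear()
        self.position = target

    def undo(self) -> None:
        """Step back to the previous location, keeping it redoable."""
        if self._undo:
            self._redo.append(self.position)
            self.position = self._undo.pop()

    def redo(self) -> None:
        """Re-apply an undone jump."""
        if self._redo:
            self._undo.append(self.position)
            self.position = self._redo.pop()

h = TeleportHistory((0.0, 0.0, 0.0))
h.teleport((4.0, 0.0, 2.0))
h.undo()           # back at the origin
h.redo()           # forward again
print(h.position)  # (4.0, 0.0, 2.0)
```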
AniBalloons: Animated chat balloons as affective augmentation for social messaging and chatbot interaction
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-18 DOI: 10.1016/j.ijhcs.2024.103365
Pengcheng An, Chaoyu Zhang, Haichen Gao, Ziqi Zhou, Yage Xiao, Jian Zhao
{"title":"AniBalloons: Animated chat balloons as affective augmentation for social messaging and chatbot interaction","authors":"Pengcheng An ,&nbsp;Chaoyu Zhang ,&nbsp;Haichen Gao ,&nbsp;Ziqi Zhou ,&nbsp;Yage Xiao ,&nbsp;Jian Zhao","doi":"10.1016/j.ijhcs.2024.103365","DOIUrl":"10.1016/j.ijhcs.2024.103365","url":null,"abstract":"<div><div>Despite being prominent and ubiquitous, message-based communication is limited in nonverbally conveying emotions. Besides emoticons or stickers, messaging users continue seeking richer options for affective communication. Recent research explored using chat-balloons’ shape and color to communicate emotional states. However, little work explored whether and how chat-balloon animations could be designed to convey emotions. We present the design of AniBalloons, 30 chat-balloon animations conveying Joy, Anger, Sadness, Surprise, Fear, and Calmness. Using AniBalloons as a research means, we conducted three studies to assess the animations’ affect recognizability and emotional properties (<span><math><mrow><mi>N</mi><mo>=</mo><mn>40</mn></mrow></math></span>), and probe how animated chat-balloons would influence communication experience in typical scenarios including instant messaging (<span><math><mrow><mi>N</mi><mo>=</mo><mn>72</mn></mrow></math></span>) and chatbot service (<span><math><mrow><mi>N</mi><mo>=</mo><mn>70</mn></mrow></math></span>). Our exploration contributes a set of chat-balloon animations to complement nonverbal affective communication for a range of text-message interfaces, and empirical insights into how animated chat-balloons might mediate particular conversation experiences (e.g., perceived interpersonal closeness, or chatbot personality).</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103365"},"PeriodicalIF":5.3,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
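To make the emotion-to-animation mapping concrete, here is a hypothetical lookup from the six categories named in the abstract to balloon animation parameters. The parameter values and easing names are invented for illustration; they are not the AniBalloons designs.

```python
# Illustrative only: emotion category -> chat-balloon animation parameters.

ANIMATION_PARAMS = {
    "joy":      {"amplitude": 0.8, "speed": 1.4, "easing": "bounce"},
    "anger":    {"amplitude": 1.0, "speed": 1.8, "easing": "shake"},
    "sadness":  {"amplitude": 0.3, "speed": 0.5, "easing": "ease-out"},
    "surprise": {"amplitude": 0.9, "speed": 1.6, "easing": "overshoot"},
    "fear":     {"amplitude": 0.6, "speed": 1.2, "easing": "tremble"},
    "calmness": {"amplitude": 0.2, "speed": 0.4, "easing": "linear"},
}

def animate_balloon(message: str, emotion: str) -> dict:
    """Attach animation parameters to an outgoing chat balloon."""
    params = ANIMATION_PARAMS.get(emotion, ANIMATION_PARAMS["calmness"])
    return {"text": message, **params}

print(animate_balloon("We won the grant!", "joy"))
```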
Exploring amBiDiguity: UI item direction interpretation by Arabic and Hebrew users
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-17 DOI: 10.1016/j.ijhcs.2024.103383
Yulia Goldenberg, Noam Tractinsky
{"title":"Exploring amBiDiguity: UI item direction interpretation by Arabic and Hebrew users","authors":"Yulia Goldenberg,&nbsp;Noam Tractinsky","doi":"10.1016/j.ijhcs.2024.103383","DOIUrl":"10.1016/j.ijhcs.2024.103383","url":null,"abstract":"<div><div>Bidirectional user interfaces serve more than half a billion users worldwide. Despite increasing diversity-driven approaches to interface development, bidirectional interfaces still use UI elements inconsistently. In particular, UI items containing ambiguous information that BiDi users might process both from right-to-left and left-to-right pose a challenge to designers. We use the term amBiDiguous to denote such items and suggest that they are susceptible to ineffective use.</div><div>This paper reports on an empirical study with 1705 Arabic and Hebrew users, in which we collected explicit and implicit data about ambiguous UI items in bidirectional interfaces. We explored the directional interpretation of amBiDiguous UI items and investigated the influence of individual, linguistic, and UI design factors on how people perceive them. The findings suggest a complex picture in which various factors affect ambiguous items’ interpretation. While the analysis indicates that preventing all interpretation errors is probably impossible, a large portion of those errors can be addressed by proper design.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103383"},"PeriodicalIF":5.3,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
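The ambiguity studied here arises when strong right-to-left characters co-occur with left-to-right or numeric content in one UI string. A simplified sketch using Python's standard unicodedata bidi categories can flag such mixed-direction items; the classification rule is deliberately naive and is an assumption for illustration, not the paper's method.

```python
# Flag UI strings whose Unicode bidi categories mix strong RTL characters
# with LTR letters or numbers, making reading order ambiguous.
import unicodedata

RTL = {"R", "AL"}          # strong right-to-left categories (Hebrew, Arabic)
LTR = {"L"}                # strong left-to-right category
WEAK_NUM = {"EN", "AN"}    # numbers, direction-ambiguous in context

def direction_profile(text: str) -> str:
    cats = {unicodedata.bidirectional(ch) for ch in text}
    has_rtl = bool(cats & RTL)
    has_ltr = bool(cats & LTR)
    has_num = bool(cats & WEAK_NUM)
    if has_rtl and (has_ltr or has_num):
        return "potentially ambiguous (mixed-direction)"
    return "rtl" if has_rtl else "ltr"

print(direction_profile("גרסה 2.1 beta"))  # Hebrew label with a version number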
Visualizing speech styles in captions for deaf and hard-of-hearing viewers
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-16 DOI: 10.1016/j.ijhcs.2024.103386
SooYeon Ahn, JooYeong Kim, Choonsung Shin, Jin-Hyuk Hong
{"title":"Visualizing speech styles in captions for deaf and hard-of-hearing viewers","authors":"SooYeon Ahn ,&nbsp;JooYeong Kim ,&nbsp;Choonsung Shin ,&nbsp;Jin-Hyuk Hong","doi":"10.1016/j.ijhcs.2024.103386","DOIUrl":"10.1016/j.ijhcs.2024.103386","url":null,"abstract":"<div><div>Speech styles such as extension, emphasis, and pause play an important role in capturing the audience's attention and conveying a message accurately. Unfortunately, it is challenging for Deaf and Hard-of-Hearing (DHH) people to enjoy these benefits when watching lectures with common captions. In this paper, we propose a new caption system that automatically analyzes speech styles from audio and visualizes them using visualization elements such as punctuation, paint-on, color, and boldness. We conducted a comparative study with 26 DHH viewers and found that the proposed caption system enabled them to recognize the speaker's speech style in lectures. As a result, the DHH viewers were able to watch lecture videos more vividly and were more engaged with the lectures. In particular, punctuation can be a practical solution to visualize speech styles and ensure legibility. Participants expressed a desire to use our caption system in their daily lives, providing valuable insights for future sound-visualized caption research.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"194 ","pages":"Article 103386"},"PeriodicalIF":5.3,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
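One way to picture the proposed visualization is to map detected pauses to ellipses and emphasized words to bold, two of the elements the abstract names. The word-timing input format and the thresholds below are assumptions for the example, not the paper's analyzer.

```python
# Turn detected pauses and emphasis into caption markup.
# words: list of (text, start_s, end_s, relative_loudness_db),
# assumed to come from an upstream speech analyzer.

def style_captions(words, pause_s=0.6, emphasis_db=6.0):
    out = []
    for i, (text, start, end, loud) in enumerate(words):
        token = f"**{text}**" if loud >= emphasis_db else text  # emphasis -> bold
        out.append(token)
        if i + 1 < len(words) and words[i + 1][1] - end >= pause_s:
            out.append("...")  # long pause -> ellipsis
    return " ".join(out)

demo = [("This", 0.0, 0.2, 0.0), ("matters", 0.25, 0.7, 8.0),
        ("to", 1.6, 1.7, 0.0), ("everyone", 1.75, 2.3, 1.0)]
print(style_captions(demo))  # This **matters** ... to everyone
```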
When more is less: Finding the optimal balance of intelligent agents' transparency in level 3 automated vehicles
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-12 DOI: 10.1016/j.ijhcs.2024.103384
Jing Zang, Myounghoon Jeon
{"title":"When more is less: Finding the optimal balance of intelligent agents’ transparency in level 3 automated vehicles","authors":"Jing Zang,&nbsp;Myounghoon Jeon","doi":"10.1016/j.ijhcs.2024.103384","DOIUrl":"10.1016/j.ijhcs.2024.103384","url":null,"abstract":"<div><div>In automated vehicles, transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to drivers’ perception, situation awareness, and driving performance. Our experiment focused on IVIA's transparency regarding information level and reliability on drivers’ perception and performance in level 3 automated vehicles. A 3 × 2 mixed factorial design was used in this study, with transparency (low, medium, high) as a between-subject variable and reliability (high vs. low) as a within-subjects variable. Forty-eight participants were recruited. Results suggested that transparency influenced drivers’ takeover time, lane keeping, and jerk. The high-reliability agent was associated with a higher perception of system accuracy and response speed and resulted in a longer takeover time than the low-reliability agent. Particularly, participants in medium transparency showed higher cognitive trust, lower workload, and higher situation awareness only when system reliability was high. Our findings can contribute to the advancement of intelligent agent transparency design in automated vehicles.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"193 ","pages":"Article 103384"},"PeriodicalIF":5.3,"publicationDate":"2024-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing typing methods for uppercase input in virtual reality: Modifier Key vs. alternative approaches
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-12 DOI: 10.1016/j.ijhcs.2024.103385
Min Joo Kim, Yu Gyeong Son, Yong Min Kim, Donggun Park
{"title":"Comparing typing methods for uppercase input in virtual reality: Modifier Key vs. alternative approaches","authors":"Min Joo Kim ,&nbsp;Yu Gyeong Son ,&nbsp;Yong Min Kim ,&nbsp;Donggun Park","doi":"10.1016/j.ijhcs.2024.103385","DOIUrl":"10.1016/j.ijhcs.2024.103385","url":null,"abstract":"<div><div>Typing tasks are basic interactions in a virtual environment (VE). The presence of uppercase letters affects the meanings of words and their readability. By typing uppercase letters on a QWERTY keyboard, the layers can be switched using a modifier key. Considering that VE controllers are typically used in a VE, this input method can result in user fatigue and errors. Thus, this study proposed new alternative interactions for the modifier key input and compared their typing performance and user experience. In an experiment, 30 participants were instructed to type 10 sentences using different typing interaction methods (shift, long press, and double-tap) on a virtual keyboard in a VE. The typing speed, error rate, and number of backspace inputs were measured to compare typing performance. Upon the completion of the typing task, the usability, workload, and sickness associated with each typing method were evaluated. The results showed that the double-tap method exhibited significantly higher typing speed, error rate, ease of use, satisfaction, and workload. This result is consistent with those of previous studies demonstrating that selection tasks were more efficient with fewer hand movements. Thus, this study implies that the double-tap method can be considered as a potential typing interaction for the VEs instead of the traditional method using the shift as a modifier key. Therefore, this study is expected to contribute to the design and development of user-friendly interactions.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"193 ","pages":"Article 103385"},"PeriodicalIF":5.3,"publicationDate":"2024-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
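A sketch of the double-tap interaction the study examines: two taps of the same key inside a short window produce the uppercase letter instead of two lowercase ones. The 0.3 s window is an assumed value, not one reported by the study.

```python
# Double-tap uppercase input: promote the previous lowercase letter when
# the same key is tapped again within the time window.

DOUBLE_TAP_WINDOW = 0.3  # seconds (assumed threshold)

class DoubleTapKeyboard:
    def __init__(self):
        self._last_key = None
        self._last_time = -1.0
        self.buffer: list[str] = []

    def tap(self, key: str, t: float) -> None:
        if key == self._last_key and t - self._last_time <= DOUBLE_TAP_WINDOW:
            self.buffer[-1] = key.upper()   # promote the previous lowercase
            self._last_key = None           # a third tap starts fresh
        else:
            self.buffer.append(key)
            self._last_key, self._last_time = key, t

kb = DoubleTapKeyboard()
for key, t in [("h", 0.00), ("h", 0.15), ("i", 0.60)]:
    kb.tap(key, t)
print("".join(kb.buffer))  # Hi
```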
Enhancing collaborative signing songwriting experience of the d/Deaf individuals
IF 5.3 | CAS Zone 2 | Computer Science
International Journal of Human-Computer Studies Pub Date: 2024-10-09 DOI: 10.1016/j.ijhcs.2024.103382
Youjin Choi, ChungHa Lee, Songmin Chung, Eunhye Cho, Suhyeon Yoo, Jin-Hyuk Hong
{"title":"Enhancing collaborative signing songwriting experience of the d/Deaf individuals","authors":"Youjin Choi ,&nbsp;ChungHa Lee ,&nbsp;Songmin Chung ,&nbsp;Eunhye Cho ,&nbsp;Suhyeon Yoo ,&nbsp;Jin-Hyuk Hong","doi":"10.1016/j.ijhcs.2024.103382","DOIUrl":"10.1016/j.ijhcs.2024.103382","url":null,"abstract":"<div><div>Songwriting can be an important means of developing the personal and social skills of d/Deaf individuals, but there is a lack of research on understanding and supporting their songwriting. We aimed to understand the d/Deaf people's songwriting experience for the song signing genre, which visually represents music with sign language and body movement. Through two workshops in which mixed-hearing individuals collaborated in songwriting activities, we identified the potentials and challenges of the songwriting experience and developed a music-sensory substitution system that multimodally presents music in sound as well as visual, and vibrotactile feedback. The proposed system enables mixed-hearing partners to have better collaborative interaction and signing songwriting experience. Consequently, we found that the process of signing songwriting is valued by d/Deaf individuals as a means of musical self-expression and social connecting, and our system has increased their musical engagement while encouraging them to express themselves more through music and sign language.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"193 ","pages":"Article 103382"},"PeriodicalIF":5.3,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142433466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
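The music-sensory substitution idea, presenting music as sound plus visual and vibrotactile feedback, can be pictured as a mapping from frame loudness to a motor intensity and an on-screen level. The feature extraction, scaling, and 0-255 motor range below are assumptions for the example; the abstract does not specify the system at this level of detail.

```python
# Loose sketch: one audio frame -> vibrotactile and visual parameters.
import math

def rms(frame: list[float]) -> float:
    """Root-mean-square loudness of one audio frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def to_feedback(frame: list[float]) -> dict:
    """Convert one audio frame into vibration and visual parameters."""
    level = min(rms(frame) * 4.0, 1.0)          # normalize into 0..1
    return {
        "vibration_pwm": int(level * 255),      # motor duty cycle
        "bar_height": round(level, 2),          # on-screen level meter
    }

# A 440 Hz test tone at 16 kHz sampling, amplitude 0.2.
frame = [0.2 * math.sin(2 * math.pi * 440 * n / 16000) for n in range(512)]
print(to_feedback(frame))
```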