International Journal of Human-Computer Studies: Latest Articles

Limits of speech in connected homes: Experimental comparison of self-reporting tools for human activity recognition
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-20 | DOI: 10.1016/j.ijhcs.2024.103404 | Vol. 195, Article 103404
Guillaume Levasseur, Kejia Tang, Hugues Bersini
Data annotation for human activity recognition is a well-known challenge for researchers. In particular, annotation in daily life settings relies on self-reporting tools with unknown accuracy. Speech is a promising interface for activity labeling. In this work, we compare the accuracy of two commercially available tools for annotation: voice diaries and connected buttons. We retrofit the water meters of thirty homes in the USA for infrastructure-mediated sensing. Participants are split into equal groups and receive one of the self-reporting tools. The balanced accuracy metric is transferred from the field of machine learning to the evaluation of annotation performance. Our results show that connected buttons perform significantly better than the voice diary, with 92% median accuracy and 65% median reporting rate. Using questionnaire answers, we highlight that annotation performance is impacted by habit formation and sentiments toward the annotation tool. The use case for data annotation is to disaggregate water meter data into human activities beyond the point of use. We show that this is feasible with a machine-learning model and the corrected annotations. Finally, we formulate recommendations for the design of studies and intelligent environments around the key ideas of proportionality and immediacy.
Citations: 0
Breaking down barriers: A new approach to virtual museum navigation for people with visual impairments through voice assistants
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-17 | DOI: 10.1016/j.ijhcs.2024.103403 | Vol. 194, Article 103403
Yeliz Yücel, Kerem Rızvanoğlu
People with visual impairments (PWVI) encounter challenges in accessing cultural, historical, and practical information in a predominantly visual world, limiting their participation in various activities, including visits to museums. Museums, as important centers for exploration and learning, often overlook these accessibility issues. This paper presents the iMuse Model, an innovative approach to creating accessible and inclusive museum environments. The iMuse Model centers on the co-design of a prototype voice assistant integrated into Google Home, aimed at enabling remote navigation for PWVI within the Basilica Cistern museum in Turkey. The model consists of a two-layer study. The first layer involves collaboration with PWVI and their sight-loss instructors to develop a five-level framework tailored to their unique needs and challenges. The second layer focuses on testing this design with 30 people with visual impairments, employing various methodologies, including the Wizard of Oz technique. Our prototype provides inclusive audio descriptions that encompass sensory, emotional, historical, and structural elements, along with spatialized sounds from the museum environment, improving spatial understanding and cognitive map development. Notably, we developed two versions of the voice assistant: one with a humorous interaction style and one with a non-humorous approach. Users preferred the humorous version, which led to increased interaction, enjoyment, and social learning, as supported by both qualitative and quantitative results. In conclusion, the iMuse Model highlights the potential of co-designed, humor-infused, and culturally sensitive voice assistants. Such assistants not only aid PWVI in navigating unfamiliar spaces but also enhance their social learning, engagement, and appreciation of cultural heritage within museum environments.
Citations: 0
Empathy enhancement through VR: A practice-led design study
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-14 | DOI: 10.1016/j.ijhcs.2024.103397 | Vol. 194, Article 103397
Xina Jiang, Wen Zhou, Jicheng Sun, Shihong Chen, Anthony Fung
Virtual reality (VR) has been widely acknowledged as a highly effective medium for augmenting empathy, enabling individuals to better comprehend and resonate with the emotions and lived experiences of others. Despite its acknowledged potential, the field lacks clear design guidelines and a systematic framework for creating VR environments for empathy training. In this article, we present a practice-led research project in which we triangulated design research using a paired-sample t-test to evaluate and optimize the design guidelines of the empathy-training VR design (EVRD) framework. We evaluated the impact of a VR experience, designed based on the EVRD framework, on emotional, cognitive, and behavioral empathy among Chinese higher-education students (n = 84). A comprehensive assessment approach, including the Interpersonal Reactivity Index, interviews, system log analysis, and monitoring of donation activities, was utilized to measure changes in empathy before and after the VR intervention. The results validated the EVRD framework and demonstrated that it is a practical and systematic tool for designing VR experiences that train empathy. The findings of this study provide design insights with regard to (1) the process of VR empathy and (2) how to design “doomed-to-fail” interactions to promote cognitive empathy in VR.
Citations: 0
Perceptions of discriminatory decisions of artificial intelligence: Unpacking the role of individual characteristics
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-10 | DOI: 10.1016/j.ijhcs.2024.103387 | Vol. 194, Article 103387
Soojong Kim
This study investigates how personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) and demographic factors (age, education, and income) are associated with perceptions of artificial intelligence (AI) outcomes exhibiting gender and racial bias, and with general attitudes toward AI. Analyses of a large-scale experiment dataset (N = 1,206) indicate that digital self-efficacy and technical knowledge are positively associated with attitudes toward AI, while liberal ideology is associated with lower trust in outcomes, more negative emotion, and greater skepticism. Furthermore, age and income are closely connected to cognitive gaps in understanding discriminatory AI outcomes. These findings highlight the importance of promoting digital literacy skills and enhancing digital self-efficacy to maintain trust in AI and beliefs in AI usefulness and safety. The findings also suggest that disparities in understanding problematic AI outcomes may be aligned with economic inequalities and generational gaps in society. Overall, this study sheds light on the socio-technological system in which complex interactions occur between social hierarchies, divisions, and machines that reflect and exacerbate those disparities.
Citations: 0
Integrating augmented reality and LLM for enhanced cognitive support in critical audio communications
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-06 | DOI: 10.1016/j.ijhcs.2024.103402 | Vol. 194, Article 103402
Fang Xu, Tianyu Zhou, Tri Nguyen, Haohui Bao, Christine Lin, Jing Du
Operation and Maintenance (O&M) missions are often time-sensitive and accuracy-dependent, requiring rapid and precise information processing in noisy, chaotic environments where oral communication can lead to cognitive overload and impaired decision-making. Augmented Reality (AR) and Large Language Models (LLMs) offer potential for enhancing situational awareness and lowering cognitive load by integrating digital visualizations with the physical world and improving dialogue management. However, synthesizing these technologies into a real-time system that effectively aids operators remains a challenge. This study explores the integration of AR and GPT-4, an advanced LLM, in time-sensitive O&M tasks, aiming to enhance situational awareness and manage cognitive load during oral communications. A customized AR system, incorporating the Microsoft HoloLens 2 for cognitive monitoring and GPT-4 for decision-making assistance, was tested in a human-subject experiment with 30 participants. The 2×2 factorial experiment evaluated the effects of AR and LLM assistance on task performance and cognitive load. Results demonstrated significant improvements in task accuracy and reductions in cognitive load, highlighting the effectiveness of AR and LLM integration in supporting O&M missions. These findings emphasize the need for further research to optimize operational strategies in mission-critical environments.
Citations: 0
ChatGPT and me: First-time and experienced users’ perceptions of ChatGPT’s communicative ability as a dialogue partner
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-04 | DOI: 10.1016/j.ijhcs.2024.103400 | Vol. 194, Article 103400
Iona Gessinger, Katie Seaborn, Madeleine Steeds, Benjamin R. Cowan
Chatbots like ChatGPT have the potential to produce more natural conversational user interface interactions. Yet we currently know little about perceptions of ChatGPT as a dialogue partner, and whether interaction changes them. Through an online, two-stage, mixed-methods study conducted in July 2023, in which first-time and experienced users living in the UK or Ireland engaged in tasks with ChatGPT, we show that interaction improves attitudes towards the system for first-time users, while these attitudes are already positive and stable in experienced users. We further show that first-time users’ perceptions of ChatGPT’s communicative ability (competence, human-likeness, and flexibility) are more dynamic than those of experienced users, although experienced users’ perceptions also peak post-interaction. When reflecting on their interaction experience with ChatGPT, both groups were positive, with little mention of limitations. We discuss the implications of these findings for user perceptions of ChatGPT as a dialogue partner and highlight the potential risks of uncritical adoption of such technology.
Citations: 0
Traceable teleportation: Improving spatial learning in virtual locomotion
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-11-02 | DOI: 10.1016/j.ijhcs.2024.103399 | Vol. 194, Article 103399
Ye Jia, Zackary P.T. Sin, Chen Li, Peter H.F. Ng, Xiao Huang, George Baciu, Jiannong Cao, Qing Li
In virtual reality, point-and-teleport (P&T) is a locomotion technique that is popular for its user-friendliness, lowering workload and mitigating cybersickness. However, most P&T schemes use instantaneous transitions, which are known to hinder spatial learning. While replacing instantaneous transitions with animated interpolations can address this issue, such animations may inadvertently induce cybersickness. To counter these deficiencies, we propose Traceable Teleportation (TTP), an enhanced locomotion technique grounded in a theoretical framework designed to improve spatial learning. TTP incorporates two novel features: an Undo-Redo mechanism that facilitates rapid back-and-forth movements, and a Visualized Path that offers additional visual cues. We conducted a user study via a set of spatial learning tests within a virtual labyrinth to assess the effect of these enhancements on the P&T technique. Our findings indicate that the TTP Undo-Redo design generally facilitates the learning of orientational spatial knowledge without incurring additional cybersickness or diminishing the sense of presence.
Citations: 0
AniBalloons: Animated chat balloons as affective augmentation for social messaging and chatbot interaction
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-10-18 | DOI: 10.1016/j.ijhcs.2024.103365 | Vol. 194, Article 103365
Pengcheng An, Chaoyu Zhang, Haichen Gao, Ziqi Zhou, Yage Xiao, Jian Zhao
Despite being prominent and ubiquitous, message-based communication is limited in nonverbally conveying emotions. Besides emoticons or stickers, messaging users continue seeking richer options for affective communication. Recent research explored using chat balloons’ shape and color to communicate emotional states. However, little work has explored whether and how chat-balloon animations could be designed to convey emotions. We present the design of AniBalloons, 30 chat-balloon animations conveying Joy, Anger, Sadness, Surprise, Fear, and Calmness. Using AniBalloons as a research means, we conducted three studies to assess the animations’ affect recognizability and emotional properties (N = 40), and to probe how animated chat balloons would influence communication experience in typical scenarios, including instant messaging (N = 72) and chatbot service (N = 70). Our exploration contributes a set of chat-balloon animations to complement nonverbal affective communication for a range of text-message interfaces, and empirical insights into how animated chat balloons might mediate particular conversation experiences (e.g., perceived interpersonal closeness, or chatbot personality).
Citations: 0
Exploring amBiDiguity: UI item direction interpretation by Arabic and Hebrew users
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-10-17 | DOI: 10.1016/j.ijhcs.2024.103383 | Vol. 194, Article 103383
Yulia Goldenberg, Noam Tractinsky
Bidirectional user interfaces serve more than half a billion users worldwide. Despite increasingly diversity-driven approaches to interface development, bidirectional interfaces still use UI elements inconsistently. In particular, UI items containing ambiguous information that BiDi users might process both from right-to-left and left-to-right pose a challenge to designers. We use the term amBiDiguous to denote such items and suggest that they are susceptible to ineffective use.
This paper reports on an empirical study with 1705 Arabic and Hebrew users, in which we collected explicit and implicit data about ambiguous UI items in bidirectional interfaces. We explored the directional interpretation of amBiDiguous UI items and investigated the influence of individual, linguistic, and UI design factors on how people perceive them. The findings suggest a complex picture in which various factors affect ambiguous items’ interpretation. While the analysis indicates that preventing all interpretation errors is probably impossible, a large portion of those errors can be addressed by proper design.
Citations: 0
Visualizing speech styles in captions for deaf and hard-of-hearing viewers
IF 5.3 | CAS Tier 2 | Computer Science
International Journal of Human-Computer Studies | Pub Date: 2024-10-16 | DOI: 10.1016/j.ijhcs.2024.103386 | Vol. 194, Article 103386
SooYeon Ahn, JooYeong Kim, Choonsung Shin, Jin-Hyuk Hong
Speech styles such as extension, emphasis, and pause play an important role in capturing the audience’s attention and conveying a message accurately. Unfortunately, it is challenging for Deaf and Hard-of-Hearing (DHH) people to enjoy these benefits when watching lectures with common captions. In this paper, we propose a new caption system that automatically analyzes speech styles from audio and visualizes them using elements such as punctuation, paint-on, color, and boldness. We conducted a comparative study with 26 DHH viewers and found that the proposed caption system enabled them to recognize the speaker’s speech style in lectures. As a result, the DHH viewers were able to watch lecture videos more vividly and were more engaged with the lectures. In particular, punctuation can be a practical solution to visualize speech styles while ensuring legibility. Participants expressed a desire to use our caption system in their daily lives, providing valuable insights for future sound-visualized caption research.
Citations: 0