IEEE Transactions on Human-Machine Systems: Latest Articles

Call for Papers: IEEE Transactions on Human-Machine Systems
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-26 | DOI: 10.1109/THMS.2026.3656624 | Vol. 56, No. 1, pp. 192-192
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364046
Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-26 | DOI: 10.1109/THMS.2026.3651175 | Vol. 56, No. 1, pp. C3-C3
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364037
Citations: 0
IEEE Transactions on Human-Machine Systems Information for Authors
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-26 | DOI: 10.1109/THMS.2026.3651173 | Vol. 56, No. 1, pp. C4-C4
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364039
Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-26 | DOI: 10.1109/THMS.2026.3651171 | Vol. 56, No. 1, pp. C2-C2
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11364045
Citations: 0
Effects of Readiness Deprivation on Takeover With Varying Time Budget in Conditional Automated Driving Scenarios
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-05 | DOI: 10.1109/THMS.2025.3595269 | Vol. 56, No. 1, pp. 171-181
Authors: Hsueh-Yi Lai; Ching-Hao Chou; Pen-Kuei Wang; Tse-Yi Kuo; Yong-Jhih Chen
Abstract: As vehicle automation levels rise, future automated driving systems can enable drivers to engage in nondriving-related tasks (NDRTs). However, drivers remain responsible for driving safety. In the case of an automated vehicle failure, a driver must disengage from their NDRTs and take control of the vehicle. When drivers are fully engaged in NDRTs during Level 3 automation, the NDRTs can compete for the resources required by driving tasks, thereby depriving drivers of cognitive and physical readiness. This study investigated how compromised driver readiness affected takeover performance in situations with varying urgency levels. A simulated driving experiment was conducted with 32 participants in four states of readiness deprivation created through NDRT assignment, and two time budget levels were applied to represent multiple urgency scenarios. First, subjective ratings on readiness deprivation showed that depriving drivers of one form of readiness (i.e., cognitive or physical) adversely affected the other. Furthermore, retaining cognitive readiness may provide greater self-assessed utility. NDRTs with similar interaction attributes generated comparable readiness deprivation ratings, offering a systematic way to evaluate their impact on takeover. The impact of readiness deprivation on takeover performance also varied significantly with the time budget. With ample time, depriving drivers of physical or full readiness increased takeover time. However, these delayed actions, combined with stable lateral control, suggested a safe takeover strategy aimed at readiness recovery. Conversely, limited time hindered this recovery. Drivers performed takeovers despite impaired readiness, resulting in quicker but often abrupt post-takeover lateral movements. Notably, takeover actions were initiated once both cognitive and physical readiness were achieved, regardless of the time budget.
Citations: 0
P3-FSLNet: A Compact Spatio-Temporal Model With Contrastive Few-Shot Learning for Subject-Independent P300 Detection in Devanagari Script-Based P300 Speller
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-01 | DOI: 10.1109/THMS.2025.3639248 | Vol. 56, No. 1, pp. 124-134
Authors: Vibha Bhandari; Narendra D. Londhe; Ghanahshyam B. Kshirsagar
Abstract: Brain–computer interface (BCI) systems frequently necessitate time-intensive subject-specific calibration, thereby motivating the development of subject-independent P300 detection approaches. Existing methodologies that employ transfer learning and knowledge distillation encounter challenges with limited generalizability due to substantial intersubject variability and constrained data availability. Furthermore, their applicability is often questioned, as most lack comprehensive external validation and cross-script evaluation. To address these limitations, we introduce P3-FSLNet, a few-shot metalearning framework that integrates prototypical networks and contrastive learning within a spatial-temporal convolutional neural network augmented by dual-channel attention for the selection of relevant electroencephalogram channels. Episodic metatraining facilitates the transfer of robust knowledge across different subjects. Evaluated on a self-recorded Devanagari script dataset, P3-FSLNet attains a classification accuracy of 93.17%, surpassing state-of-the-art methods by 1%–14%, while simultaneously reducing trainable parameters by up to 400 times. External validation using English-script datasets from BCI Competition II and III confirms its robustness in cross-subject and cross-script generalization. These findings demonstrate the efficacy of P3-FSLNet and represent a substantial advancement toward the development of script-agnostic P300 spellers that are lightweight, scalable, and conducive to multilingual applications.
Citations: 0
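The P3-FSLNet architecture itself is not reproduced here, but the prototypical-network step the entry above builds on (class prototypes computed as mean support embeddings, with queries classified by their nearest prototype) can be sketched with toy 2-D embeddings. Everything below, including the data, is illustrative only and not the authors' code:

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Mean embedding per class from a few labeled 'support' trials."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query embedding to its nearest prototype (Euclidean)."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy embeddings: class 0 clustered near the origin, class 1 near (5, 5).
rng = np.random.default_rng(0)
support_x = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
support_y = np.array([0] * 5 + [1] * 5)

protos = prototypes(support_x, support_y, 2)
pred = classify(np.array([[0.1, 0.0], [4.9, 5.1]]), protos)
print(pred)  # -> [0 1]
```

In a real few-shot BCI setting, the embeddings would come from the trained spatio-temporal encoder rather than being raw features.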
An HCAI Methodological Framework: Putting it Into Action to Enable Human-Centered AI
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2026-01-01 | DOI: 10.1109/THMS.2025.3631590 | Vol. 56, No. 1, pp. 78-94
Authors: Wei Xu; Zaifeng Gao; Marvin J. Dainoff
Abstract: Human-centered artificial intelligence (HCAI) is a design philosophy that prioritizes humans in the design, development, deployment, and use of AI systems, aiming to maximize AI's benefits while mitigating its negative impacts. Despite its growing prominence in the literature, the lack of methodological guidance for its implementation poses challenges to HCAI practice. To address this gap, this article proposes a comprehensive HCAI methodological framework (HCAI-MF) comprising five key components: an HCAI requirement hierarchy, an approach and method taxonomy, a process, an interdisciplinary collaboration approach, and multilevel design paradigms. A case study demonstrates HCAI-MF's practical implications, while the article also analyzes implementation challenges. Actionable recommendations and a "three-layer" HCAI implementation strategy are provided to address these challenges and guide the future evolution of HCAI-MF. HCAI-MF is presented as a systematic and executable methodology capable of overcoming current gaps, enabling effective design, development, deployment, and use of AI systems, and advancing HCAI practice.
Citations: 0
Estimating Workload for Supervisory Human–Robot Teams: An Initial Analysis of Meta-Learning
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2025-12-31 | DOI: 10.1109/THMS.2025.3640359 | Vol. 56, No. 1, pp. 68-77
Authors: Joshua Bhagat Smith; Julie A. Adams
Abstract: A robust understanding of a human's internal state can greatly improve human–robot teaming, as estimating the human teammates' workload can inform more dynamic robot adaptations. Existing workload estimation methods use standard machine learning techniques to model the relationships between physiological metrics and workload. However, such methods are not sufficient for adaptive systems, as standard machine learning techniques struggle to make accurate workload estimates when the human–robot team performs unknown tasks. A meta-learning-based workload estimation algorithm is introduced, and an initial analysis is conducted to show how adapting a machine learning model's parameters using task-specific information can result in more accurate workload estimates for unknown tasks.
Citations: 0
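The paper's algorithm is not detailed in this listing; as a rough illustration of the general idea (adapting a meta-learned model's parameters with a few samples from an unseen task, as in MAML-style inner-loop updates), a minimal linear sketch follows. The feature dimensions, learning rate, and data are all hypothetical:

```python
import numpy as np

def adapt(w, X, y, lr=0.1, steps=5):
    """Fine-tune linear weights w on a few samples from an unseen task
    via gradient descent on mean-squared error (the 'inner loop' of
    MAML-style meta-learning)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w = w - lr * grad
    return w

# Hypothetical meta-learned prior: two physiological features map to
# workload with unit weights.
w_meta = np.array([1.0, 1.0])

# A new, unseen task whose true feature-to-workload mapping differs.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, 0.5])

w_task = adapt(w_meta, X, y)  # weights move toward the task's mapping
```

The point of meta-learning is to choose `w_meta` so that this few-step adaptation already lands close to the new task's optimum; here the prior is simply hand-picked for illustration.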
Nonauditory Schemes for Universal Information Access: Translating Braille into Vibrotactile Cues for the Blind
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2025-12-22 | DOI: 10.1109/THMS.2025.3631785 | Vol. 56, No. 1, pp. 12-21
Authors: Zihan Tang; Aiguo Song
Abstract: Despite the progress in human–computer interaction technology, the interaction methods available to visually impaired individuals remain rudimentary. The widely used text-to-speech technology raises issues such as privacy leakage in practical applications, and most of the new interaction designs proposed in recent years have limited application scenarios. Geared toward enriching interaction methods for users with visual impairments, this article explores the potential for translating Braille into vibrotactile cues as a way of conveying universal information. We designed a set of schemes and implemented them based on the vibration motor of a mobile phone. These schemes convert a single Braille character into several highly distinguishable vibration combinations, thereby conveying any information that Braille can express. Experiments were conducted on both sighted and visually impaired participants to evaluate accuracy and efficiency. With a brief learning period of just 10 minutes, individuals can attain an accuracy rate greater than 95%, and the accuracy degradation remains minimal when playback speeds increase. By employing vibration motors to deliver comprehensive information, this framework shows promise for application in a wider range of technological devices.
Citations: 0
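The listing does not specify the authors' actual encoding, but the general idea of mapping a Braille character's six-dot pattern onto a distinguishable vibration sequence can be sketched. The scheme below (one time slot per dot, long pulse for a raised dot, short pulse otherwise) and all durations are hypothetical, chosen only to illustrate the mapping:

```python
# Hypothetical dot patterns (dots 1-6) for a few Braille letters.
BRAILLE_DOTS = {
    'a': (1,),
    'b': (1, 2),
    'c': (1, 4),
}
LONG_MS, SHORT_MS = 200, 60  # illustrative pulse durations

def char_to_pulses(ch):
    """Return a 6-slot vibration pattern (durations in ms) for one
    character: one slot per Braille dot, long pulse if the dot is raised."""
    raised = BRAILLE_DOTS[ch]
    return tuple(LONG_MS if dot in raised else SHORT_MS
                 for dot in range(1, 7))

print(char_to_pulses('a'))  # -> (200, 60, 60, 60, 60, 60)
```

On an actual phone, such a duration sequence could be handed to the platform's vibration API; the paper's schemes compress each character into fewer, more distinguishable combinations than this one-slot-per-dot sketch.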
Speech2Blend: A Hybrid Network for Speech-Driven 3-D Facial Animation by Learning Blendshape
IF 4.4 | CAS Tier 3 | Computer Science
IEEE Transactions on Human-Machine Systems | Pub Date: 2025-12-22 | DOI: 10.1109/THMS.2025.3640719 | Vol. 56, No. 1, pp. 48-57
Authors: Lei Wang; Gongbin Chen; Feng Liu; Jiaji Wu; Jun Cheng
Abstract: Recent advances in speech-driven facial animation have attracted significant interest across computer graphics, human–computer interaction systems, and immersive virtual reality applications. However, existing methods remain constrained by dependencies on specific reference videos or proprietary face mesh structures, limiting their applicability across diverse production pipelines and reducing compatibility with industry-standard animation workflows. To overcome these fundamental limitations in generalization and deployment flexibility, we propose Speech2Blend, an end-to-end hybrid convolutional-recurrent network that directly learns nonlinear speech-to-blendshape parameter mappings. This novel approach enables markerless speech-driven facial animation generation without restrictive inputs like video references or specialized facial rigs. Trained on the largest available digital human dataset (BEAT) and rigorously evaluated using three benchmark datasets with photorealistic visualization tools, Speech2Blend achieves state-of-the-art performance. It delivers superior audio-visual synchronization through learned temporal dynamics and reduces lip vertex error by 30% compared to existing baseline methods. These advances significantly lower production costs for virtual human speech animation while enabling cross-platform compatibility with common game engines and animation software.
Citations: 0