Latest Articles in IEEE Transactions on Human-Machine Systems

2024 Index IEEE Transactions on Human-Machine Systems Vol. 54
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-12-03 | DOI: 10.1109/THMS.2024.3509052
Vol. 54, No. 6, pp. 819-835 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10774072
Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-11-22 | DOI: 10.1109/THMS.2024.3497077
Vol. 54, No. 6, p. C3 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10766344
Citations: 0
IEEE Transactions on Human-Machine Systems Information for Authors
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-11-22 | DOI: 10.1109/THMS.2024.3497079
Vol. 54, No. 6, p. C4 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10766345
Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-11-22 | DOI: 10.1109/THMS.2024.3497075
Vol. 54, No. 6, p. C2 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10766348
Citations: 0
Share Your Preprint Research with the World!
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-11-22 | DOI: 10.1109/THMS.2024.3503333
Vol. 54, No. 6, p. 818 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10766349
Citations: 0
Object-Goal Navigation of Home Care Robot Based on Human Activity Inference and Cognitive Memory
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-10-23 | DOI: 10.1109/THMS.2024.3467150
Authors: Chien-Ting Chen; Shen Jie Koh; Fu-Hao Chang; Yi-Shiang Huang; Li-Chen Fu
Vol. 54, No. 6, pp. 808-817
Abstract: As older adults' memory and cognitive ability deteriorate, designing a cognitive robot system that finds desired objects for users becomes increasingly important. Cognitive abilities, such as detecting and memorizing the environment and human activities, are crucial for effective human-robot interaction and navigation. In addition, robots must possess language-understanding capabilities to comprehend human speech and respond promptly. This research develops a mobile robot system for home care that incorporates human activity inference and cognitive memory to reason about a target object's location and navigate to find it. The method comprises three modules: 1) an object-goal navigation module for mapping the environment, detecting surrounding objects, and navigating to the target object; 2) a cognitive memory module for recognizing human activity and storing encoded information; and 3) an interaction module for interacting with humans and inferring the target object's position. By leveraging big data, human cues, and a commonsense knowledge graph, the system searches for target objects efficiently and robustly. The effectiveness of the system is validated in both simulated and real-world scenarios.
Citations: 0
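The location-inference idea summarized in the abstract, blending a commonsense prior over rooms with a memorized last-seen location, can be sketched in a few lines. All object names, rooms, scores, and the blending weight below are invented for illustration; the paper's actual modules use learned models and a full commonsense knowledge graph.

```python
# Hypothetical sketch: infer where to search for an object by combining a
# commonsense prior (where such objects are usually kept) with cognitive
# memory (where it was last seen during observed human activity).
COMMONSENSE = {"mug": {"kitchen": 0.6, "living_room": 0.3, "bedroom": 0.1}}

def infer_location(obj, memory, w_mem=0.7):
    """Blend the commonsense prior with the memorized last-seen room."""
    scores = dict(COMMONSENSE.get(obj, {}))
    last_seen = memory.get(obj)
    if last_seen is not None:
        for room in scores:
            scores[room] = (1 - w_mem) * scores[room] + (w_mem if room == last_seen else 0.0)
    return max(scores, key=scores.get)

memory = {"mug": "living_room"}  # e.g., recorded while observing the user's activity
target_room = infer_location("mug", memory)
```

With memory available, the memorized room dominates; without it, the robot falls back on the commonsense prior alone.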
Predicting Human Postures for Manual Material Handling Tasks Using a Conditional Diffusion Model
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-10-21 | DOI: 10.1109/THMS.2024.3472548
Authors: Liwei Qing; Bingyi Su; Sehee Jung; Lu Lu; Hanwen Wang; Xu Xu
Vol. 54, No. 6, pp. 723-732
Abstract: Predicting workers' body postures is crucial for effective ergonomic interventions to reduce musculoskeletal disorders (MSDs). In this study, we employ a novel generative approach to predict human postures during manual material handling tasks. Specifically, we implement two distinct network architectures, U-Net and multilayer perceptron (MLP), to build the diffusion model. Model training and testing use a dataset of 35 full-body anatomical landmarks collected from 25 participants engaged in a variety of lifting tasks. In addition, we compare our models with two conventional generative networks (conditional generative adversarial network and conditional variational autoencoder) for comprehensive analysis. Our results show that the U-Net model predicts posture similarity well (root-mean-square error (RMSE) of key-point coordinates = 5.86 cm; RMSE of joint angles = 13.67°), while the MLP model leads to higher posture variability (e.g., standard deviation of joint angles = 4.49°/4.18° for upper-arm flexion/extension joints). Moreover, both generative models demonstrate reasonable prediction validity (RMSE of segment lengths within 4.83 cm). Overall, the proposed diffusion models demonstrate good similarity and validity in predicting lifting postures, while also providing insight into the inherent variability of constrained lifting postures. This novel use of diffusion models shows potential for tailored posture prediction in common occupational environments, representing an advancement in motion synthesis and contributing to workplace design and MSD risk mitigation.
Citations: 0
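As background to the abstract above, the forward (noising) half of a denoising diffusion model reduces to a closed-form sampling rule. The linear schedule, condition vector, and dimensions below are assumptions for illustration (only the 35-landmark layout comes from the abstract); the paper's U-Net/MLP denoisers and training details are not reproduced here.

```python
import numpy as np

# Hypothetical dimensions: 35 landmarks x 3 coordinates, per the paper's dataset.
N_LANDMARKS, N_DIMS, T = 35, 3, 1000

# Linear beta schedule (a common DDPM choice; the paper's schedule is unspecified).
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): the noised pose the denoiser must invert."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps  # eps is the regression target for the denoiser

rng = np.random.default_rng(0)
x0 = rng.standard_normal((N_LANDMARKS, N_DIMS))  # a standardized pose
cond = rng.standard_normal(8)                    # task condition, e.g., lift parameters
xt, eps = forward_noise(x0, t=500, rng=rng)
# A conditional denoiser (U-Net or MLP, as compared in the paper) would be
# trained to minimize ||eps_hat(xt, t, cond) - eps||^2.
```

At t = 0 the sample is nearly the clean pose; by t = T it is nearly pure Gaussian noise, which is what lets sampling start from noise and produce varied postures.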
The Augmented Intelligence Perspective on Human-in-the-Loop Reinforcement Learning: Review, Concept Designs, and Future Directions
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-10-18 | DOI: 10.1109/THMS.2024.3467370
Authors: Kok-Lim Alvin Yau; Yasir Saleem; Yung-Wey Chong; Xiumei Fan; Jer Min Eyu; David Chieng
Vol. 54, No. 6, pp. 762-777
Abstract: Augmented intelligence (AuI) is a concept that combines human intelligence (HI) and artificial intelligence (AI) to leverage their respective strengths. While AI typically aims to replace humans, AuI integrates humans into machines, recognizing their irreplaceable role. Meanwhile, human-in-the-loop reinforcement learning (HITL-RL) is a semisupervised algorithm that integrates humans into the traditional reinforcement learning (RL) algorithm, enabling autonomous agents to gather inputs from both humans and environments, learn, and select optimal actions across various environments. Both AuI and HITL-RL are still in their infancy. Based on AuI, we propose and investigate three separate concept designs for HITL-RL: the HI-AI, AI-HI, and parallel-HI-and-AI approaches, each differing in the order of HI and AI involvement in decision making. The literature on AuI and HITL-RL offers insights into integrating HI into existing concept designs. A preliminary study in an Atari game offers insights for future research directions. Simulation results show that human involvement maintains RL convergence and improves system stability, while achieving approximately the same average scores as traditional Q-learning in the game. Future research directions are proposed to encourage further investigation in this area.
Citations: 0
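One of the three concept designs named above, the HI-AI ordering (human input considered before the agent's own policy), can be illustrated with a toy tabular Q-learning loop. The chain environment, hyperparameters, and the intermittent "human" oracle are invented for illustration and are far simpler than the paper's Atari study.

```python
import random

# Toy HI-AI human-in-the-loop Q-learning on a 5-state chain (hypothetical
# environment): action 1 moves right toward the rewarded terminal state.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def human_advice(state):
    """Stand-in for HI: an oracle that occasionally suggests the good action."""
    return 1 if random.random() < 0.3 else None  # None = no human input this step

def select_action(state):
    advice = human_advice(state)       # HI-AI ordering: consult the human first...
    if advice is not None:
        return advice                  # ...and let it override the agent when given
    if random.random() < EPS:
        return random.choice(ACTIONS)  # otherwise the usual epsilon-greedy AI choice
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for _ in range(500):                   # episodes
    s = 0
    while s < N_STATES - 1:
        a = select_action(s)
        s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
```

Because the environment transitions and Q-update are unchanged, convergence is preserved; the human input only reshapes the behavior policy, which is consistent with the abstract's observation that scores stay close to plain Q-learning.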
Cross-Model Cross-Stream Learning for Self-Supervised Human Action Recognition
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-10-17 | DOI: 10.1109/THMS.2024.3467334
Authors: Mengyuan Liu; Hong Liu; Tianyu Guo
Vol. 54, No. 6, pp. 743-752
Abstract: Owing to their instance-level discriminative ability, contrastive learning methods such as MoCo and SimCLR have been adapted from image representation learning to self-supervised skeleton-based action recognition. These methods usually use multiple data streams (i.e., joint, motion, and bone) for ensemble learning; meanwhile, how to construct a discriminative feature space within a single stream and how to effectively aggregate information from multiple streams remain open problems. To this end, this article first applies a new contrastive learning method, bootstrap your own latent (BYOL), to skeleton data, formulating SkeletonBYOL as a simple yet effective baseline for self-supervised skeleton-based action recognition. Building on SkeletonBYOL, the article further presents a cross-model and cross-stream (CMCS) framework combining cross-model adversarial learning (CMAL) and cross-stream collaborative learning (CSCL). Specifically, CMAL learns single-stream representations via a cross-model adversarial loss to obtain more discriminative features. To aggregate and exchange multistream information, CSCL generates similarity pseudolabels from ensemble learning as supervision and guides feature generation for the individual streams. Extensive experiments on three datasets verify the complementary properties of CMAL and CSCL and show that the proposed method outperforms state-of-the-art methods under various evaluation protocols.
Citations: 0
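The BYOL mechanism that SkeletonBYOL builds on, an online network regressing the embedding of a slowly moving (EMA) target network, reduces to two small formulas. The toy linear "encoders" and noise augmentations below are illustrative stand-ins; the paper's deep skeleton encoders and augmentations are not reproduced.

```python
import numpy as np

# Core BYOL mechanics (toy linear encoders; assumptions for illustration only).
rng = np.random.default_rng(0)
D = 16
online_W = rng.standard_normal((D, D)) * 0.1  # online encoder weights
target_W = online_W.copy()                    # target starts as a copy
TAU = 0.99                                    # EMA momentum

def normalize(z):
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def byol_loss(online_W, target_W, v1, v2):
    """Negative-cosine regression: the online branch predicts the target
    branch's embedding of the other augmented view (no gradient to target)."""
    p = normalize(v1 @ online_W)   # online branch (predictor folded in)
    z = normalize(v2 @ target_W)   # target branch: treated as a fixed target
    return 2.0 - 2.0 * float(np.sum(p * z, axis=-1).mean())

def ema_update(online_W, target_W, tau=TAU):
    """The target network only tracks the online network via an EMA."""
    return tau * target_W + (1.0 - tau) * online_W

x = rng.standard_normal((4, D))  # a batch of (toy) skeleton features
v1 = x + 0.05 * rng.standard_normal(x.shape)  # two augmented views
v2 = x + 0.05 * rng.standard_normal(x.shape)
loss = byol_loss(online_W, target_W, v1, v2)
target_W = ema_update(online_W, target_W)
```

Unlike MoCo/SimCLR, no negative pairs appear in the loss; the EMA target is what prevents collapse, which is why the article can treat BYOL as a distinct baseline before adding its cross-model and cross-stream losses.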
Optical See-Through Head-Mounted Display With Mitigated Parallax-Related Registration Errors: A User Study Validation
IF 3.5 | CAS Region 3 (Computer Science)
IEEE Transactions on Human-Machine Systems | Pub Date: 2024-10-15 | DOI: 10.1109/THMS.2024.3468019
Authors: Nadia Cattari; Fabrizio Cutolo; Vincenzo Ferrari
Vol. 54, No. 6, pp. 668-677 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718696
Abstract: For an optical see-through (OST) augmented reality (AR) head-mounted display (HMD) to assist high-precision activities in the peripersonal space, a fundamental requirement is correct spatial registration between the virtual information and the real environment. This registration can be achieved through a calibration procedure that parameterizes the virtual rendering camera via an eye-replacement camera observing a calibration pattern rendered on the OST display. In a previous feasibility study, using the same eye-replacement camera employed for calibration, we showed that, for an OST display whose focal plane is close to the user's working distance, there is no need for prior-to-use viewpoint-specific calibration refinements obtained through eye-tracking cameras or additional alignment-based calibration steps: the viewpoint parallax-related AR registration error is submillimetric within a reasonable range of depths around the display focal plane. This article confirms, through a user study based on a monocular virtual-to-real alignment task, that this finding is accurate and usable. In addition, we found that performing the alignment-free calibration procedure with a high-resolution camera substantially improves AR registration accuracy over other state-of-the-art approaches, with an error below 1 mm over a notable range of distances. These results demonstrate the safe usability of OST HMDs for high-precision task guidance in the peripersonal space.
Citations: 0
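The parallax effect discussed above follows from simple pinhole geometry: a virtual marker lying on the display focal plane stays registered to a real point at that same depth regardless of lateral eye displacement, while the error grows with the depth mismatch. The numbers below (400 mm focal plane, 5 mm eye offset) are assumed for illustration, not taken from the paper.

```python
# Geometry sketch of parallax-related registration error for an OST HMD.
# A marker is drawn on the focal plane so that the calibrated viewpoint sees
# it aligned with a real point; a displaced eye sees the real point's ray
# cross the focal plane at a shifted position, while the marker stays put.
def parallax_error_mm(eye_offset_mm, real_depth_mm, focal_plane_mm):
    """Lateral virtual-to-real misalignment, measured on the focal plane."""
    return eye_offset_mm * (real_depth_mm - focal_plane_mm) / real_depth_mm

# Focal plane at 400 mm (a peripersonal working distance), 5 mm eye offset:
errors = {d: parallax_error_mm(5.0, d, 400.0) for d in (350.0, 400.0, 450.0)}
# The error vanishes at the focal plane and stays submillimetric nearby,
# matching the abstract's claim that no per-user refinement is needed there.
```

Farther from the focal plane the error keeps growing, which is why the mitigation hinges on placing the focal plane near the working distance.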
Contact: info@booksci.cn. Book学术 provides a free academic resource search service for retrieving Chinese- and English-language literature. Copyright © 2023 布克学术 All rights reserved.