{"title":"Variable Handle-Resistance Based Joystick for Post-stroke Neurorehabilitation Training of Hand and Wrist in Upper Extremities","authors":"Debasish Nath;Neha Singh;Onika Banduni;Aprajita Parial;M. V. Padma Srivastava;Venugopalan Y. Vishnu;Amit Mehndiratta","doi":"10.1109/THMS.2024.3486123","DOIUrl":"https://doi.org/10.1109/THMS.2024.3486123","url":null,"abstract":"The objective was to modulate the resistance of a hand-held device, e.g., a joystick, to customize a rehabilitative, patient-centric virtual environment therapy protocol. Two similar sets of springs (each set having three springs with graded rigidity) were customized to increase the handle resistance. The springs were experimentally calibrated to determine individual spring-constant values. Exerted force values during joystick movements were standardized in a cohort of healthy subjects (<italic>n</italic> = 15). Coefficient of variation (CV) was calculated to determine the variability among healthy subjects. Further, five (<italic>n</italic> = 5) patients with stroke were enrolled in this pilot study and performed three separate virtual reality sessions using different springs. Task-performance metrics, i.e., time to complete, trajectory smoothness, and relative error, were evaluated for each resistance level. The experimentally determined spring-constant values were 1.34 × 10<sup>3</sup> ± 16.1, 2.23 × 10<sup>3</sup> ± 29.8, and 6.47 × 10<sup>3</sup> ± 470.9 N/m for the three springs in order of increasing rigidity. The mean force values for different joystick movements increased linearly with increasing spring rigidity. A calculated CV ≤ 14% indicated low variability in the recorded force values across healthy subjects.
Increased task-performance metrics and visual analog scale fatigue scores for sessions 2 and 3, compared with session 1, indicated increasing task difficulty in those sessions.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"55 1","pages":"93-101"},"PeriodicalIF":3.5,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
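The calibration arithmetic in the abstract above follows directly from Hooke's law (F = kx). A minimal sketch of that arithmetic, assuming a linear spring model: only the three spring constants come from the abstract; the 5 mm deflection and the per-subject force recordings below are invented for illustration.

```python
import statistics

# Nominal spring constants reported in the abstract (N/m), ordered by rigidity.
SPRING_CONSTANTS_N_PER_M = [1.34e3, 2.23e3, 6.47e3]

def spring_force(k, displacement_m):
    """Hooke's law: force (N) needed to deflect a linear spring by displacement_m."""
    return k * displacement_m

def coefficient_of_variation_pct(values):
    """CV as a percentage: sample standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Force for a hypothetical 5 mm handle deflection grows linearly with rigidity,
# mirroring the linear trend reported across the three springs.
forces = [spring_force(k, 0.005) for k in SPRING_CONSTANTS_N_PER_M]

# Invented per-subject force recordings (N) to illustrate the CV computation.
cv = coefficient_of_variation_pct([6.2, 6.9, 7.1, 6.5, 6.8])
```

The linear force growth with spring rigidity is exactly what the abstract reports for healthy-subject standardization; the CV threshold (≤ 14%) would then be checked against values like `cv` above.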
{"title":"Multidimensional Scaling Orienting Discriminative Co-Representation Learning","authors":"Zhang Qin;Yinghui Zhang;Hongjun Wang;Zhipeng Luo;Chongshou Li;Tianrui Li","doi":"10.1109/THMS.2024.3483848","DOIUrl":"https://doi.org/10.1109/THMS.2024.3483848","url":null,"abstract":"Co-representation, which jointly represents samples and features, has been widely used in machine learning tasks such as document clustering, gene expression analysis, and recommendation systems. It reveals not only the cluster structure of both samples and features but also the sample–feature correlation. Given a tabular data matrix, co-representation usually appears as co-occurrence structures of rows and columns. However, identifying such structured patterns in complex real-world data can be very challenging. To address this problem, we propose an unsupervised discriminative co-representation learning model based on multidimensional scaling (DCLMDS). The main novelty is that DCLMDS introduces a co-representation learning term to ensure discriminability between co-occurrence structures. As a result, the co-representation learned by DCLMDS contains richer information about the underlying correlation between samples and features within data, which could subsequently enhance the capacity of machines and systems to process complex real-world information more proficiently. Furthermore, inspired by fuzzy set theory, we integrate a fuzzy membership degree that accurately captures the uncertainty within data, enabling DCLMDS to learn a more effective co-representation in a soft manner.
To evaluate the performance of DCLMDS, we conduct extensive experiments on 18 datasets; the results demonstrate that DCLMDS generates co-representations that are both accurate and discriminative, meeting our desired outcomes.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"55 1","pages":"71-82"},"PeriodicalIF":3.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of an Imperfect Algorithm on Human Gait Strategies With an Active Ankle Exoskeleton","authors":"Man I Wu;Brian S. Baum;Harvey Edwards;Leia Stirling","doi":"10.1109/THMS.2024.3407984","DOIUrl":"https://doi.org/10.1109/THMS.2024.3407984","url":null,"abstract":"Lower-limb active exoskeletons may experience errors in operational settings due to imperfect algorithms, which may impact users' trust in the system and human–exoskeleton fluency (the coordination of actions between the human and the exoskeleton). In this study, we introduced pseudorandom catch trials (errors) in 1.68% of all strides, in which the expected exoskeleton torque was not applied for a single stride, to understand the immediate and time-dependent responses to missed actuations. Participants (N = 15) completed a targeted stepping task while walking with a bilateral powered ankle exoskeleton. Human–exoskeleton fluency and trust were inferred from task performance (step accuracy), step characteristics (step length and width), muscle activity, and lower limb joint kinematics. Reductions in ankle plantarflexion during catch trials suggest user adaptation to the exoskeleton. Hip flexion and muscle activity were modulated during catch trials to mitigate the effects of the lost exoskeleton torque and reduced plantarflexion, supporting task accuracy and maintaining step characteristics. Trust was not impacted by this level of error, as there were no significant differences in task performance or gait characteristics over time. Understanding the interactions between human–exoskeleton fluency, task accuracy, and gait strategies will support exoskeleton controller development.
Future work will investigate various levels of actuation reliability to understand the transition where performance and trust are affected.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"55 1","pages":"1-9"},"PeriodicalIF":3.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object-Goal Navigation of Home Care Robot Based on Human Activity Inference and Cognitive Memory","authors":"Chien-Ting Chen;Shen Jie Koh;Fu-Hao Chang;Yi-Shiang Huang;Li-Chen Fu","doi":"10.1109/THMS.2024.3467150","DOIUrl":"https://doi.org/10.1109/THMS.2024.3467150","url":null,"abstract":"As older adults' memory and cognitive ability deteriorate, designing a cognitive robot system that finds desired objects for users becomes more critical. Cognitive abilities, such as detecting and memorizing the environment and human activities, are crucial in implementing effective human–robot interaction and navigation. In addition, robots must possess language understanding capabilities to comprehend human speech and respond promptly. This research aims to develop a mobile robot system for home care that incorporates human activity inference and cognitive memory to reason about the target object's location and navigate to find it. The method comprises three modules: 1) an object-goal navigation module for mapping the environment, detecting surrounding objects, and navigating to find the target object; 2) a cognitive memory module for recognizing human activity and storing encoded information; and 3) an interaction module to interact with humans and infer the target object's position. By leveraging big data, human cues, and a commonsense knowledge graph, the system can efficiently and robustly search for target objects.
The effectiveness of the system is validated through both simulated and real-world scenarios.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"808-817"},"PeriodicalIF":3.5,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Human Postures for Manual Material Handling Tasks Using a Conditional Diffusion Model","authors":"Liwei Qing;Bingyi Su;Sehee Jung;Lu Lu;Hanwen Wang;Xu Xu","doi":"10.1109/THMS.2024.3472548","DOIUrl":"https://doi.org/10.1109/THMS.2024.3472548","url":null,"abstract":"Predicting workers' body postures is crucial for effective ergonomic interventions to reduce musculoskeletal disorders (MSDs). In this study, we employ a novel generative approach to predict human postures during manual material handling tasks. Specifically, we implement two distinct network architectures, U-Net and multilayer perceptron (MLP), to build the diffusion model. Model training and testing utilize a dataset featuring 35 full-body anatomical landmarks collected from 25 participants engaged in a variety of lifting tasks. In addition, we compare our models with two conventional generative networks (a conditional generative adversarial network and a conditional variational autoencoder) for comprehensive analysis. Our results show that the U-Net model performs well in predicting posture similarity [root-mean-square error (RMSE) of key-point coordinates = 5.86 cm; RMSE of joint angle coordinates = 13.67<inline-formula><tex-math>$^{\circ}$</tex-math></inline-formula>], while the MLP model leads to higher posture variability (e.g., standard deviation of joint angles = 4.49<inline-formula><tex-math>$^{\circ}$</tex-math></inline-formula>/4.18<inline-formula><tex-math>$^{\circ}$</tex-math></inline-formula> for upper arm flexion/extension joints). Moreover, both generative models demonstrate reasonable prediction validity (RMSE of segment lengths is within 4.83 cm). Overall, our proposed diffusion models demonstrate good similarity and validity in predicting lifting postures, while also providing insight into the inherent variability of constrained lifting postures.
This novel use of diffusion models shows potential for tailored posture prediction in common occupational environments, representing an advancement in motion synthesis and contributing to workplace design and MSD risk mitigation.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"723-732"},"PeriodicalIF":3.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
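The posture-similarity metric quoted in the record above (RMSE over key-point or joint-angle coordinates) is simple to state precisely. A minimal sketch, assuming predicted and ground-truth landmarks are flattened into equal-length coordinate lists (the example values are invented, not from the paper's dataset):

```python
from math import sqrt

def rmse(predicted, actual):
    """Root-mean-square error over paired scalar coordinates, e.g., flattened
    (x, y, z) positions of anatomical landmarks or joint angles in degrees."""
    if len(predicted) != len(actual):
        raise ValueError("predicted and actual must have the same length")
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Illustrative call on two invented 2-coordinate "postures".
error = rmse([0.0, 3.0], [0.0, -1.0])
```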
{"title":"The Augmented Intelligence Perspective on Human-in-the-Loop Reinforcement Learning: Review, Concept Designs, and Future Directions","authors":"Kok-Lim Alvin Yau;Yasir Saleem;Yung-Wey Chong;Xiumei Fan;Jer Min Eyu;David Chieng","doi":"10.1109/THMS.2024.3467370","DOIUrl":"https://doi.org/10.1109/THMS.2024.3467370","url":null,"abstract":"Augmented intelligence (AuI) is a concept that combines human intelligence (HI) and artificial intelligence (AI) to leverage their respective strengths. While AI typically aims to replace humans, AuI integrates humans into machines, recognizing their irreplaceable role. Meanwhile, human-in-the-loop reinforcement learning (HITL-RL) is a semisupervised algorithm that integrates humans into the traditional reinforcement learning (RL) algorithm, enabling autonomous agents to gather inputs from both humans and environments, learn, and select optimal actions across various environments. Both AuI and HITL-RL are still in their infancy. Based on AuI, we propose and investigate three separate concept designs for HITL-RL: <italic>HI-AI</italic>, <italic>AI-HI</italic>, and <italic>parallel-HI-and-AI</italic> approaches, each differing in the order of HI and AI involvement in decision making. The literature on AuI and HITL-RL offers insights into integrating HI into the existing concept designs. A preliminary study in an Atari game offers insights for future research directions. Simulation results show that human involvement maintains RL convergence and improves system stability, while achieving approximately similar average scores to traditional <inline-formula><tex-math>$Q$</tex-math></inline-formula>-learning in the game.
Future research directions are proposed to encourage further investigation in this area.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"762-777"},"PeriodicalIF":3.5,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
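The HI-AI concept design described in the record above (human decision first, RL agent as fallback) can be caricatured in a few lines. This is a hedged sketch under assumed defaults, not the authors' implementation: the epsilon-greedy fallback and tabular update are standard Q-learning conventions, and the function names are invented for illustration.

```python
import random

def hi_ai_action(q_row, human_advice=None, epsilon=0.1, rng=random):
    """HI-AI ordering: the human decision, when present, overrides the agent;
    otherwise the agent picks epsilon-greedily from its Q-values for this state."""
    if human_advice is not None:
        return human_advice                                   # HI step: human acts first
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))                      # AI step: explore
    return max(range(len(q_row)), key=q_row.__getitem__)      # AI step: exploit

def q_update(q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    """Standard Q-learning temporal-difference update on a table of lists."""
    q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
```

Swapping the order of the two branches in `hi_ai_action` would give the AI-HI design; running both and arbitrating would approximate parallel-HI-and-AI.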
{"title":"Cross-Model Cross-Stream Learning for Self-Supervised Human Action Recognition","authors":"Mengyuan Liu;Hong Liu;Tianyu Guo","doi":"10.1109/THMS.2024.3467334","DOIUrl":"https://doi.org/10.1109/THMS.2024.3467334","url":null,"abstract":"Owing to their instance-level discriminative ability, contrastive learning methods, including MoCo and SimCLR, have been adapted from the original image representation learning task to the self-supervised skeleton-based action recognition task. These methods usually use multiple data streams (i.e., joint, motion, and bone) for ensemble learning; meanwhile, how to construct a discriminative feature space within a single stream and effectively aggregate the information from multiple streams remains an open problem. To this end, this article first applies a new contrastive learning method called bootstrap your own latent (BYOL) to learn from skeleton data and formulates SkeletonBYOL as a simple yet effective baseline for self-supervised skeleton-based action recognition. Inspired by SkeletonBYOL, this article further presents a cross-model and cross-stream (CMCS) framework that combines cross-model adversarial learning (CMAL) and cross-stream collaborative learning (CSCL). Specifically, CMAL learns single-stream representations through a cross-model adversarial loss to obtain more discriminative features. To aggregate and exchange multistream information, CSCL generates similarity pseudolabels from ensemble learning as supervision and guides feature generation for individual streams.
Extensive experiments on three datasets verify the complementary properties of CMAL and CSCL and show that the proposed method achieves better results than state-of-the-art methods under various evaluation protocols.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"743-752"},"PeriodicalIF":3.5,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
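The similarity-pseudolabel idea in the record above (CSCL: use ensemble agreement across streams as supervision for each individual stream) can be sketched loosely. This is an assumed simplification, not the paper's method: embeddings are plain lists, "streams" are two invented feature sets, and the pseudolabel is just the anchor with the highest stream-averaged cosine similarity.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def ensemble_pseudolabel(stream_embeddings, anchor_embeddings):
    """For one sample, average its similarity to each anchor across all streams
    (e.g., joint, motion, bone) and return the index of the most similar anchor,
    which can then supervise each stream individually."""
    n_anchors = len(anchor_embeddings)
    avg_sims = [
        sum(cosine(emb, anchor_embeddings[k]) for emb in stream_embeddings)
        / len(stream_embeddings)
        for k in range(n_anchors)
    ]
    return max(range(n_anchors), key=avg_sims.__getitem__)
```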
{"title":"Optical See-Through Head-Mounted Display With Mitigated Parallax-Related Registration Errors: A User Study Validation","authors":"Nadia Cattari;Fabrizio Cutolo;Vincenzo Ferrari","doi":"10.1109/THMS.2024.3468019","DOIUrl":"https://doi.org/10.1109/THMS.2024.3468019","url":null,"abstract":"For an optical see-through (OST) augmented reality (AR) head-mounted display (HMD) to assist in performing high-precision activities in the peripersonal space, a fundamental requirement is correct spatial registration between the virtual information and the real environment. This registration can be achieved through a calibration procedure that parameterizes the virtual rendering camera via an eye-replacement camera observing a calibration pattern rendered on the OST display. In a previous feasibility study, we demonstrated, with the same eye-replacement camera used for the calibration, that for an OST display with a focal plane close to the user's working distance there is no need for prior-to-use viewpoint-specific calibration refinements obtained through eye-tracking cameras or additional alignment-based calibration steps: the viewpoint parallax-related AR registration error is submillimetric within a reasonable range of depths around the display focal plane. This article confirms, through a user study based on a monocular virtual-to-real alignment task, that this finding is accurate and usable. In addition, we found that performing the alignment-free calibration procedure with a high-resolution camera substantially improves AR registration accuracy compared with other state-of-the-art approaches, with an error lower than 1 mm over a notable range of distances.
These results demonstrate the safe usability of OST HMDs for high-precision task guidance in the peripersonal space.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"668-677"},"PeriodicalIF":3.5,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718696","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning for Human–Machine Systems With Advanced Persistent Threats","authors":"Long Chen;Wei Zhang;Yanqing Song;Jianguo Chen","doi":"10.1109/THMS.2024.3439625","DOIUrl":"https://doi.org/10.1109/THMS.2024.3439625","url":null,"abstract":"This article conducts a thorough exploration of the implications of machine learning (ML) for human–machine systems within the military domain. It scrutinizes the strategic ML development efforts of pertinent institutions, particularly in the context of military applications and advanced persistent threats. Prominent nations have delineated a technical trajectory for the integration of ML into their military frameworks. To bolster the structure and efficacy of their various military branches and units, numerous ML research endeavors have been deployed, encompassing the study of sophisticated ML algorithms and the accelerated adaptation of artificial intelligence technology for intelligence processing, autonomous platforms, command and control infrastructures, and weapons systems. Forces across the globe are actively embedding ML technologies into a range of platforms: terrestrial, naval, aerial, space-faring, and cybernetic. This integration spans weaponry, networks, cognitive operations, and additional systems. Furthermore, this article reviews the incorporation of ML into military human–machine interaction in the Russia–Ukraine conflict, where cyber human–machine interaction has become a pivotal arena of contention, with key levers influencing the conflict's course. In addition, the article examines the adoption of ML in prospective military functions such as operations, intelligence gathering, networking, logistics, identification protocols, healthcare, and data analysis, along with other critical areas marked by current developments and trajectories.
It also offers a series of recommendations for the future integration of ML to inform strategic direction and research.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"753-761"},"PeriodicalIF":3.5,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bioinspired Virtual Reality Toolkit for Robot-Assisted Medical Application: BioVRbot","authors":"Hang Su;Francesco Jamal Sheiban;Wen Qi;Salih Ertug Ovur;Samer Alfayad","doi":"10.1109/THMS.2024.3462416","DOIUrl":"https://doi.org/10.1109/THMS.2024.3462416","url":null,"abstract":"The increasingly pervasive use of robotic surgery not only calls for advances in clinical application but also demands readily available virtual reality tools for preliminary medical education. Virtual reality is currently upgrading medical education by presenting complicated medical information in an immersive and interactive way. A system that allows multiple users to observe and operate simulated surgical platforms through wearable devices has become an efficient teaching solution where a real surgical platform is not available. This article develops a bioinspired virtual reality toolkit (BioVRbot) for education and training in robot-assisted minimally invasive surgery. It allows multiple users to manipulate robots in cooperative virtual surgery using bioinspired control. The virtual reality scenario is implemented in Unity and can be viewed with independent virtual reality headsets. A MATLAB server manages robot motion planning for incremental teleoperation compliant with remote-center-of-motion constraints. Wearable sensorized gloves are adopted for continuous control of the tooltip and the gripper. Finally, the practical use of the developed surgical virtual system is demonstrated with cooperative operation tasks.
It could be further spread into the classroom for preliminary education of robot-assisted surgery for early-stage medical students.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"688-697"},"PeriodicalIF":3.5,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}