{"title":"Situated Interpretation and Data: Explainability to Convey Machine Misalignment","authors":"Dane Anthony Morey;Michael F. Rayo","doi":"10.1109/THMS.2023.3334988","DOIUrl":"https://doi.org/10.1109/THMS.2023.3334988","url":null,"abstract":"Explainable AI must simultaneously help people understand the world, the AI, and when the AI is misaligned to the world. We propose <italic>situated interpretation and data</italic> (SID) as a design technique to satisfy these requirements. We trained two machine learning algorithms, one transparent and one opaque, to predict future patient events that would require an emergency response team (ERT) mobilization. An SID display combined the outputs of the two algorithms with patient data and custom annotations to implicitly convey the alignment of the transparent algorithm to the underlying data. SID displays were shown to 30 nurses with 10 actual patient cases. Nurses reported their concern level (1–10) and intended response (1–4) for each patient. For all cases where the algorithms predicted no ERT (correctly or incorrectly), nurses correctly differentiated ERT from non-ERT in both concern and response. For all cases where the algorithms predicted an ERT, nurses differentiated ERT from non-ERT in response, but not concern. Results also suggest that nurses’ reported urgency was unduly influenced by misleading algorithm guidance in cases where the algorithm overpredicted and underpredicted the future ERT. However, nurses reported concern that was as or more appropriate than the predictions in 8 of 10 cases and differentiated ERT from non-ERT cases <italic>better</italic> than <italic>both</italic> algorithms, even the more accurate opaque algorithm, when the two predictions conflicted. Therefore, SID appears to be a promising design technique to reduce, but not eliminate, the negative impacts of misleading opaque and transparent algorithms.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139654818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Awareness Requirements in Groupware: From Cards to Diagrams","authors":"Crescencio Bravo;Rafael Duque;Ana I. Molina;Jesús Gallardo","doi":"10.1109/THMS.2023.3332592","DOIUrl":"https://doi.org/10.1109/THMS.2023.3332592","url":null,"abstract":"Up to now, groupware has enjoyed a certain stability in terms of users’ technical requirements, with the awareness dimension being one of its key services for providing usability and improving collaboration. Nonetheless, groupware technologies are currently being stressed: on the one hand, the COVID-19 pandemic has greatly driven the massive use of groupware tools to overcome physical distancing; on the other hand, new digital worlds (with disruptive devices, changing paradigms, and growing productive needs) are introducing new collaboration settings. This, together with the fact that software engineering methods do not pay enough attention to awareness, leads us to concentrate on facilitating its design. Thus, we have created a visual modeling technique, based on a conceptual framework, to be used by developers of groupware systems to describe awareness requirements. This visual language, called the awareness description diagrams, has been validated in several experimental activities. The results obtained show that it is a valid technique for modeling awareness support, that it is useful and understandable for groupware engineers, and that the visual representation is preferred to a more textual one in terms of expressiveness.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10348027","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139654594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EEG-Based Familiar and Unfamiliar Face Classification Using Filter-Bank Differential Entropy Features","authors":"Guoyang Liu;Yiming Wen;Janet H. Hsiao;Di Zhang;Lan Tian;Weidong Zhou","doi":"10.1109/THMS.2023.3332209","DOIUrl":"https://doi.org/10.1109/THMS.2023.3332209","url":null,"abstract":"Recognizing familiar and unfamiliar faces is an essential part of our daily lives. However, its neural mechanism and the relevant electroencephalography (EEG) features are still unclear. In this study, a new EEG-based familiar and unfamiliar face classification method is proposed. We record multichannel EEG with three different face-recall paradigms, and these EEG signals are temporally segmented and filtered using a well-designed filter-bank strategy. The filter-bank differential entropy is employed to extract discriminative features. Finally, a support vector machine (SVM) with Gaussian kernels serves as the robust classifier for EEG-based face recognition. In addition, the F-score is employed for feature ranking and selection, which helps to visualize brain activation in the time, frequency, and spatial domains, and contributes to revealing the neural mechanism of face recognition. With feature selection, a highest mean accuracy of 74.10% is achieved in the face-recall paradigms over ten subjects. Meanwhile, the analysis of the results indicates that EEG-based face recognition performance is significantly affected when subjects lie. The time–frequency topographical maps generated according to feature importance suggest that the delta band in the prefrontal region correlates with the face recognition task and that the brain response pattern varies from person to person. The present work demonstrates the feasibility of developing an efficient and interpretable brain–computer interface for EEG-based face recognition.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139654893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classifying Human Manual Control Behavior Using LSTM Recurrent Neural Networks","authors":"Rogier Versteeg;Daan M. Pool;Max Mulder","doi":"10.1109/THMS.2023.3327145","DOIUrl":"https://doi.org/10.1109/THMS.2023.3327145","url":null,"abstract":"This article discusses a long short-term memory (LSTM) recurrent neural network that uses raw time-domain data obtained in compensatory tracking tasks as input features for classifying (the adaptation of) human manual control with single- and double-integrator controlled element dynamics. Data from two different experiments were used to train and validate the LSTM classifier, including an investigation of the effects of several key data preprocessing settings. The model correctly classifies human control behavior (cross-experiment validation accuracy 96%) using short 1.6-s data windows. To achieve this accuracy, it is found crucial to scale/standardize the input feature data and to use a combination of input signals that includes the tracking error and the human control output. A possible online application of the classifier was tested on data from a third experiment with time-varying and slightly different controlled element dynamics. The results show that the LSTM classification is still successful, which makes it a promising online technique to rapidly detect adaptations in human control behavior.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139654929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single-Belt Versus Split-Belt: Intelligent Treadmill Control via Microphase Gait Capture for Poststroke Rehabilitation","authors":"Shengting Cao;Mansoo Ko;Chih-Ying Li;David Brown;Xuefeng Wang;Fei Hu;Yu Gan","doi":"10.1109/THMS.2023.3327661","DOIUrl":"https://doi.org/10.1109/THMS.2023.3327661","url":null,"abstract":"Stroke is the leading cause of long-term disability and imposes a significant financial burden associated with rehabilitation. In poststroke rehabilitation, individuals with hemiparesis have a specialized demand for coordinated movement between the paretic and the nonparetic legs. A split-belt treadmill can effectively facilitate the paretic leg by slowing down the belt speed for that leg while the patient is walking. Although studies have found that split-belt treadmills can produce better gait recovery outcomes than traditional single-belt treadmills, their high cost is a significant barrier to stroke rehabilitation in clinics. In this article, we design an AI-based system for the single-belt treadmill that makes it act like a split-belt treadmill by adjusting the belt speed instantaneously according to the patient's microgait phases. This system requires only a low-cost RGB camera to capture human gait patterns. A novel microgait classification pipeline model is used to detect gait phases in real time. The pipeline is based on self-supervised learning, which can calibrate the anchor video against the real-time video. We then use a ResNet-LSTM module to handle temporal information and increase accuracy. A real-time filtering algorithm is used to smooth the treadmill control. We have tested the developed system with 34 healthy individuals and four stroke patients. The results show that our system detects the gait microphase accurately and requires less human annotation in training than the ResNet50 classifier. Our system “Splicer” is boosted by AI modules and performs comparably to a split-belt system in terms of varying left/right foot speed in a timely manner, creating a hemiparetic gait in healthy individuals, and promoting paretic-side symmetry in force exertion for stroke patients. This innovative design can potentially provide cost-effective rehabilitation treatment for hemiparetic patients.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138633853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human–Robot Interaction Video Sequencing Task (HRIVST) for Robot's Behavior Legibility","authors":"Silvia Rossi;Alessia Coppola;Mariachiara Gaita;Alessandra Rossi","doi":"10.1109/THMS.2023.3327132","DOIUrl":"10.1109/THMS.2023.3327132","url":null,"abstract":"People's acceptance of and trust in robots are a direct consequence of their ability to infer and predict the robot's behavior. However, there is no clear consensus on how the legibility of a robot's behavior and explanations should be assessed. In this work, the construct of the Theory of Mind (i.e., the ability to attribute mental states to others) is taken into account, and a computerized version of the theory of mind picture sequencing task is presented. Our tool, called the human–robot interaction (HRI) video sequencing task (HRIVST), evaluates the legibility of a robot's behavior toward humans by asking them to order short videos to form a logical sequence of the robot's actions. To validate the proposed metrics, we recruited a sample of 86 healthy subjects. Results showed that the HRIVST has good psychometric properties and is a valuable tool for assessing the legibility of robot behaviors. We also evaluated the effects of symbolic explanations, the presence of a person during the interaction, and the humanoid appearance. Results showed that the interaction condition had no effect on the legibility of the robot's behavior. In contrast, the combination of humanoid robots and explanations seems to result in better performance on the task.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10317817","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135661089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mouth Cavity Visual Analysis Based on Deep Learning for Oropharyngeal Swab Robot Sampling","authors":"Qing Gao;Zhaojie Ju;Yongquan Chen;Tianwei Zhang;Yuquan Leng","doi":"10.1109/THMS.2023.3309256","DOIUrl":"10.1109/THMS.2023.3309256","url":null,"abstract":"Visual analysis of the mouth cavity plays a significant role in pathogen specimen sampling and disease diagnosis of the mouth cavity. To address the performance defects of general deep-learning-based detectors in detecting mouth cavity components, this article proposes a mouth cavity analysis network (MCNet), an instance segmentation method with spatial features, and a mouth cavity dataset (MCData), the first available dataset for mouth cavity detection and segmentation. First, given the lack of a mouth cavity image dataset, the MCData for detecting and segmenting key parts of the mouth cavity was developed for model training and testing. Second, the MCNet was designed based on the mask region-based convolutional neural network. To improve feature extraction performance, a parallel multiattention module was designed. In addition, to address the low detection accuracy for small objects, a multiscale region proposal network structure was designed. Then, mouth cavity spatial structure features were introduced, and the detection confidence was refined to increase detection accuracy. The MCNet achieved 81.5% detection accuracy and 78.1% segmentation accuracy (intersection over union = 0.50:0.95) on the MCData. Comparative experiments on the MCData showed that the proposed MCNet outperformed state-of-the-art approaches on the task of mouth cavity instance segmentation. In addition, the MCNet has been used in an oropharyngeal swab robot for COVID-19 oropharyngeal sampling.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135319108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Oropharynx Visual Detection by Using a Multi-Attention Single-Shot Multibox Detector for Human–Robot Collaborative Oropharynx Sampling","authors":"Qing Gao;Yongquan Chen;Zhaojie Ju","doi":"10.1109/THMS.2023.3324664","DOIUrl":"10.1109/THMS.2023.3324664","url":null,"abstract":"The COVID-19 pandemic has increased the demand for oropharynx sampling robots. For automatic oropharynx sampling, detection and localization of the oropharynx objects are essential. First, in response to the small-object and real-time needs of visual oropharynx detection, a lightweight multi-attention single-shot multibox detector (MASSD) method is designed. This method effectively improves the detection accuracy of oropharynx sampling regions, especially small regions, while ensuring sufficient speed by introducing spatial attention, channel attention, and feature fusion mechanisms into the single-shot multibox detector. Second, the proposed MASSD is applied to an oropharyngeal swab (OP-swab) robot system to detect oropharynx sampling regions and conduct autonomous sampling. In the experiments, training and validation on a custom oropharynx dataset verify the effectiveness and efficiency of the proposed MASSD. The detection accuracy reaches 81.3% mean average precision@0.5:0.95 at 104 frames per second, and in the application experiment the OP-swab robot system performs oropharynx sampling with a 100% success rate under a human–robot collaboration strategy.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135319104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Impact of Time-to-Collision Constraint and Head Gaze on Usability for Robot Navigation in a Corridor","authors":"Guilhem Buisan;Nathan Compan;Loïc Caroux;Aurélie Clodic;Ophélie Carreras;Camille Vrignaud;Rachid Alami","doi":"10.1109/THMS.2023.3314894","DOIUrl":"10.1109/THMS.2023.3314894","url":null,"abstract":"Navigation of robots among humans is still an open problem, especially in confined locations (e.g., narrow corridors, doors). This article aims to determine how an anthropomorphic robot, such as a PR2 robot with a height of 1.33 m, should behave when crossing a human in a narrow corridor in order to increase its usability. Two experiments studied how a combination of robot head behavior and navigation strategy can enhance robot legibility. Experiment 1 measured where a pedestrian looks when crossing another pedestrian, comparing whether that other pedestrian was a human or a robot. Based on the results of this experiment and the literature, we then designed a robot behavior exhibiting mutual manifestness, both by modifying the robot's trajectory to be more legible and by using its head to glance at the human. Experiment 2 evaluated this behavior in real situations of pedestrians crossing a robot. The visual behavior and user experience of the pedestrians were assessed. The first experiment revealed that humans primarily look at the robot's head just before crossing. The second experiment showed that, when crossing a human in a narrow corridor, both modifying the robot trajectory and glancing at the human are necessary to significantly increase the usability of the robot. We suggest that mutual manifestness is crucial for an anthropomorphic robot crossing a human in a corridor. It should be conveyed both by altering the trajectory and by showing the robot's awareness of the human's presence through the robot head motion. Small changes in robot trajectory, together with manifesting the robot's perception of the human via a user-identified robot head, can avoid users' hesitation and feelings of threat.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136304138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A State-Space Control Approach for Tracking Isometric Grip Force During BMI Enabled Neuromuscular Stimulation","authors":"Nikunj A. Bhagat;Gerard E. Francisco;Jose L. Contreras-Vidal","doi":"10.1109/THMS.2023.3316185","DOIUrl":"10.1109/THMS.2023.3316185","url":null,"abstract":"Sixty percent of elderly hand movements involve grasping, which is unarguably why grasp restoration is a major component of upper-limb rehabilitation therapy. Neuromuscular electrical stimulation is effective in assisting grasping, but challenges around patient engagement and control, as well as poor movement regulation due to fatigue and muscle nonlinearity, continue to hinder its adoption for clinical applications. In this study, we integrate an electroencephalography-based brain–machine interface (BMI) with closed-loop neuromuscular stimulation to restore grasping and evaluate its performance using an isometric force tracking task. After three sessions, the normalized tracking error during closed-loop stimulation using a state-space feedback controller (25 ± 15%) was significantly smaller than during conventional open-loop stimulation (31 ± 24%) (<italic>F</italic>(748.03, 1) = 23.22, <italic>p</italic> < 0.001). In addition, the impaired study participants achieved a BMI classification accuracy of 65 ± 10%, while able-bodied participants achieved 57 ± 18% accuracy, which suggests the proposed closed-loop system is more capable of engaging patients in rehabilitation. These findings demonstrate the multisession performance of model-based feedback-controlled stimulation without requiring frequent reconfiguration.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136304145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}