{"title":"Design of a Robotic System Featured With High Operation Transparency for Quantifying Arm Impedance During Ultrasound Scanning","authors":"Baoshan Niu, Dapeng Yang, Yangjunjian Zhou, Le Zhang, Qi Huang, Yikun Gu","doi":"10.1109/thms.2024.3442537","DOIUrl":"https://doi.org/10.1109/thms.2024.3442537","url":null,"abstract":"","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a MR Training System for Living Donor Liver Transplantation Using Simulated Liver Phantom and ICP Tracking Technology","authors":"Tsung-Han Yang, Yi-Chun Du, Cheng-Bin Xu, Wei-Siang Ciou","doi":"10.1109/thms.2024.3450689","DOIUrl":"https://doi.org/10.1109/thms.2024.3450689","url":null,"abstract":"","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered Modeling of Affective, Perception, and Visual Properties: Optimizing Structure With Genetic Algorithm","authors":"Shuhei Watanabe;Takahiko Horiuchi","doi":"10.1109/THMS.2024.3434573","DOIUrl":"10.1109/THMS.2024.3434573","url":null,"abstract":"To design the “Kansei value” aspect of a product, it is useful to design multilayered relationships of perceptual and affective responses via the physical or psychophysical properties of the product. However, because they are qualitative and ambiguous, designing a model is time-consuming. Moreover, the design was conducted by hypothesis and trial-and-error by the experimenter. In this article, we developed a method to automatically construct several semioptimal structures by applying a genetic algorithm to model design based on structural equation modeling, using the results of image measurement and subjective evaluation experiments on various material samples. Under set convergence conditions, the method constructed statistically optimized structures that represent the relationships among adjectives describing perception and affective, and the properties. A semantic validation was performed to determine the final model. As a result, the proposed method could be used to construct a model that can be interpreted as semantically and statistically superior compared to methods in related studies. A unique feature of this article was the use of the physical and psychophysical properties obtained by measurements in the construction of a multilayer model. Also, the advantage of this method is that it can be used to construct important structures that may be overlooked.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Err is Automation: Can Trust be Repaired by the Automated Driving System After its Failure?","authors":"Peng Liu;Yueying Chu;Guanqun Wang;Zhigang Xu","doi":"10.1109/THMS.2024.3434680","DOIUrl":"10.1109/THMS.2024.3434680","url":null,"abstract":"Failures of the automated driving system (ADS) in automated vehicles (AVs) can damage driver–ADS cooperation (e.g., causing trust damage) and traffic safety. Researchers suggest infusing a human-like ability, active trust repair, into automated systems, to mitigate broken trust and other negative impacts resulting from their failures. Trust repair is regarded as a key ergonomic design in automated systems. Trust repair strategies (e.g., apology) are examined and supported by some evidence in controlled environments, however, rarely subjected to empirical evaluations in more naturalistic environments. To fill this gap, we conducted a test track study, invited participants (\u0000<italic>N</i>\u0000 = 257) to experience an ADS failure, and tested the influence of the ADS’ trust repair on trust and other psychological responses. Half of participants (\u0000<italic>n</i>\u0000 = 128) received the ADS’ verbal message (consisting of apology, explanation, and promise) by a human voice (\u0000<italic>n</i>\u0000 = 63) or by Apple's Siri (\u0000<italic>n</i>\u0000 = 65) after its failure. We measured seven psychological responses to AVs and ADS [e.g., trust and behavioral intention (BI)]. We found that both strategies cannot repair damaged trust. The human-voice-repair strategy can to some degree mitigate other detrimental influences (e.g., reductions in BI) resulting from the ADS failure, but this effect is only notable among participants without substantial driving experience. It points to the importance of conducting ecologically valid and field studies for validating human-like trust repair strategies in human–automation interaction and of developing trust repair strategies specific to safety-critical situations.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Perceived Mental Workload From Eye-Tracking Data Based on Benign Anisocoria","authors":"Suvodip Chakraborty;Peter Kiefer;Martin Raubal","doi":"10.1109/THMS.2024.3432864","DOIUrl":"10.1109/THMS.2024.3432864","url":null,"abstract":"From the initial phases of human–computer interaction, where the computer was unaware of the users' mental states, we are now progressing toward cognition-aware user interfaces. One crucial cognitive state considered by research on cognition-aware user interfaces is the cognitive load. Eye-tracking has been suggested as one particularly unobtrusive method for estimating cognitive load. Although the accuracy of cognitive load detection has improved in recent work, it is still insufficient for cognition-aware user interfaces, which require high accuracy for getting accepted by the user. This article introduces two new eye-tracking metrics for estimating perceived cognitive load based on benign anisocoria (BA). Unlike previous pupil-based metrics, our metrics are based on pupil size asymmetry between the left and right eye. As a case study, we illustrate the effectiveness of the proposed metrics on a recently published eye-tracking dataset recorded under laboratory conditions. The results show that our proposed features based on BA can improve the performance of classifiers for detecting the perceived mental workload associated with an \u0000<inline-formula><tex-math>$N$</tex-math></inline-formula>\u0000-back test. The best classification accuracy was 84.24% while the classification accuracy in the absence of the proposed features was 81.91% for the light gradient boosting classifier.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10629234","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Responses to External Repeated Perturbations Vary With Time Intervals","authors":"Dongwon Kim;Jong-Moon Hwang","doi":"10.1109/THMS.2024.3426302","DOIUrl":"10.1109/THMS.2024.3426302","url":null,"abstract":"It is generally accepted that stimuli with different time intervals lead to different contributions of relevant learning substrates. We investigated postural responses in men to trains of perturbations with different time intervals. A total of 12 male volunteers with no neurological deficits (age: 33.33±3.12 S.D.) experienced a sequence of perturbations. Two sequences of perturbations by a translational plate to still standing participants in the anterior-posterior direction were designed and administered: the first sequence consisted of 24 repeated perturbations with an interval of 5 s, while the second sequence consisted of ones with an interval of 2.5 s. A perturbation of a smaller magnitude was inserted into each sequence as a catch trial. We found that while a shorter interval and a longer interval both led to the learning effect (P < 0.05), a shorter interval results in a stiff strategy, presumably through muscle cocontraction, minimizing the degree of body sway. The learned motor response continued after experiencing the catch trial (P < 0.05). The results imply that stimulus intervals could lead to a different adaptation mechanism in the neuromotor system in the way to regain postural stability. Also, our results suggest that training for male individuals with repetition of postural perturbations with a shorter time interval leads to a stiff strategy with a greater degree. A stiff strategy could lower the ability to cope with unexpected postural threats, possibly leading to falls.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141886117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARMedicalSketch: Exploring 3D Sketching for Medical Image Using True 2D-3D Interlinked Visualization and Interaction","authors":"Nan Zhang;Tianqi Huang;Hongen Liao","doi":"10.1109/THMS.2024.3432735","DOIUrl":"10.1109/THMS.2024.3432735","url":null,"abstract":"In traditional clinical practice, doctors often have to deal with 3D information based on 2D-displayed medical images. There is a considerable mismatch between the 2D and 3D dimensions in image interaction during clinical diagnosis, making image manipulation challenging and time-consuming. In this study, we explored 3D sketching for medical images using true 2D-3D interlinked visualization and interaction, presenting a novel AR environment named ARMedicalSketch. It supports image display enhancement preprocessing and 3D interaction tasks for original 3D medical images. Our interaction interface, based on 3D autostereoscopic display technology, provides both floating 3D display and 2D tablet display while enabling glasses-free visualization. We presented a method of 2D-3D interlinked visualization and interaction, employing synchronized projection visualization and a virtual synchronized interactive plane to establish an integrated relationship between 2D and 3D displays. Additionally, we utilized gesture sensors and a 2D touch tablet to capture the user's hand information for convenient interaction. We constructed the prototype and conducted a user study involving 23 students and 2 clinical experts. The controlled study compared our proposed system with a 2D display prototype, showing enhanced efficiency in interacting with medical images while maintaining 2D interaction accuracy, particularly in tasks involving strong 3D spatial correlation. In the future, we aim to further enhance the interaction precision and application scenarios of ARMedicalSketch.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141886116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotion Recognition of Playing Musicians From EEG, ECG, and Acoustic Signals","authors":"Luca Turchet;Barry O'Sullivan;Rupert Ortner;Christoph Guger","doi":"10.1109/THMS.2024.3430327","DOIUrl":"10.1109/THMS.2024.3430327","url":null,"abstract":"This article investigated the automatic recognition of felt and musically communicated emotions using electroencephalogram (EEG), electrocardiogram (ECG), and acoustic signals, which were recorded from eleven musicians instructed to perform music in order to communicate happiness, sadness, relaxation, and anger. Musicians' self-reports indicated that the emotions they musically expressed were highly consistent with those they actually felt. Results showed that the best classification performances, in a subject-dependent classification using a KNN classifier were achieved by using features derived from both the EEG and ECG (with an accuracy of 98.11%). Which was significantly more accurate than using ECG features alone, but was not significantly more accurate than using EEG features alone. The use of acoustic features alone or in combination with EEG and/or ECG features did not lead to better performances than those achieved with EEG plus ECG or EEG alone. Our results suggest that emotion detection of playing musicians, both felt and musically communicated, when coherent, can be classified in a more reliable way using physiological features than involving acoustic features. The reported machine learning results are a step toward the development of affective brain–computer interfaces capable of automatically inferring the emotions of a playing musician in real-time.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10620218","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141886137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Characteristics and Their Impact on the Perceived Usable Security of Physical Authentication Devices","authors":"Jongkil Jay Jeong;Syed Wajid Ali Shah;Ashish Nanda;Robin Doss;Mohammad Nosouhi;Jeb Webb","doi":"10.1109/THMS.2024.3421538","DOIUrl":"10.1109/THMS.2024.3421538","url":null,"abstract":"Physical authentication devices (PADs) offer a higher level of security than other authentication technologies commonly used in multifactor authentication (MFA) schemes because they are much less vulnerable to attack. However, PAD uptake remains significantly lower than that for SMS and app-based approaches, accounting for only 10% of all authentication technologies currently being utilized in MFA. Prior studies indicate that the primary reason for this low adoption rate is due to negative users' perceptions and attitudes toward the usability of PADs; many of these studies often skew toward a particular set of users (e.g., young university students, etc.), often creating a bias toward what usable security entails. To address this limitation, we have formulated an original research methodology that segments users into specific groups based on their user characteristics (i.e., age, education, and experience) and examines how each group defines usability and ranks their preferences regarding certain security features. Based on a survey of 410 participants, our results indicate that there are indeed different usable security preferences for each user group, and we, therefore, provide recommendations on how existing PADs might be enhanced to support usability and improve adoption rates.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}