{"title":"基于脑机接口和触觉反馈的人机交互行为序列框架的设计与实现","authors":"Sudip Hazra, Shane Whitaker, P. Shiakolas","doi":"10.1115/1.4062341","DOIUrl":null,"url":null,"abstract":"\n In assistive robotics, research in Brain-Computer-Interface aims to understand human intent with the goal to enhance Human-Robot-Interaction. In this research, a framework to enable a person with an upper limb disability to use an assistive system and maintain self-reliance is introduced and its implementation and evaluation are discussed. The framework interlinks functional components and establishes a behavioral sequence to operate the assistive system in three stages; action classification, verification, and execution. An action is classified based on identified human intent and verified through haptic and/or visual feedback before execution. The human intent is conveyed through facial expressions and verification through head movements. The interlinked functional components are an EEG sensing device, a head movement recorder, a dual-purpose glove, a visual feedback environment, and a robotic arm. The ability of the system to recognize a facial expression, time required to respond using head movements, convey information through vibrotactile feedback effects, and the ability to follow the established behavioral sequence are evaluated. Based on the evaluation, personalized training data set should be used to calibrate facial expression recognition and define the time required to respond during verification. Custom vibrotactile effects were effective in conveying system information to the user. Initial evaluation of the developed framework using three volunteers exhibited a 100% success rate in their ability to follow the behavioral sequence and control the system providing confidence to recruit more volunteers to identify and address improvements and expand the operational capability of the framework.","PeriodicalId":73734,"journal":{"name":"Journal of engineering and science in medical diagnostics and therapy","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Design and Implementation of a Behavioral Sequence Framework for Human-robot Interaction Utilizing Brain-computer Interface and Haptic Feedback\",\"authors\":\"Sudip Hazra, Shane Whitaker, P. Shiakolas\",\"doi\":\"10.1115/1.4062341\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n In assistive robotics, research in Brain-Computer-Interface aims to understand human intent with the goal to enhance Human-Robot-Interaction. In this research, a framework to enable a person with an upper limb disability to use an assistive system and maintain self-reliance is introduced and its implementation and evaluation are discussed. The framework interlinks functional components and establishes a behavioral sequence to operate the assistive system in three stages; action classification, verification, and execution. An action is classified based on identified human intent and verified through haptic and/or visual feedback before execution. The human intent is conveyed through facial expressions and verification through head movements. The interlinked functional components are an EEG sensing device, a head movement recorder, a dual-purpose glove, a visual feedback environment, and a robotic arm. 
The ability of the system to recognize a facial expression, time required to respond using head movements, convey information through vibrotactile feedback effects, and the ability to follow the established behavioral sequence are evaluated. Based on the evaluation, personalized training data set should be used to calibrate facial expression recognition and define the time required to respond during verification. Custom vibrotactile effects were effective in conveying system information to the user. Initial evaluation of the developed framework using three volunteers exhibited a 100% success rate in their ability to follow the behavioral sequence and control the system providing confidence to recruit more volunteers to identify and address improvements and expand the operational capability of the framework.\",\"PeriodicalId\":73734,\"journal\":{\"name\":\"Journal of engineering and science in medical diagnostics and therapy\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of engineering and science in medical diagnostics and therapy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1115/1.4062341\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of engineering and science in medical diagnostics and therapy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/1.4062341","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Design and Implementation of a Behavioral Sequence Framework for Human-robot Interaction Utilizing Brain-computer Interface and Haptic Feedback
In assistive robotics, brain-computer interface (BCI) research aims to understand human intent with the goal of enhancing human-robot interaction. In this research, a framework that enables a person with an upper limb disability to use an assistive system and maintain self-reliance is introduced, and its implementation and evaluation are discussed. The framework interlinks functional components and establishes a behavioral sequence that operates the assistive system in three stages: action classification, verification, and execution. An action is classified based on the identified human intent and verified through haptic and/or visual feedback before execution. Human intent is conveyed through facial expressions, and verification through head movements. The interlinked functional components are an EEG sensing device, a head movement recorder, a dual-purpose glove, a visual feedback environment, and a robotic arm. The evaluation covers the system's ability to recognize facial expressions, the time required to respond using head movements, the ability to convey information through vibrotactile feedback effects, and the user's ability to follow the established behavioral sequence. Based on the evaluation, a personalized training data set should be used to calibrate facial expression recognition and to define the time allowed for responding during verification. Custom vibrotactile effects were effective in conveying system information to the user. An initial evaluation of the developed framework with three volunteers exhibited a 100% success rate in their ability to follow the behavioral sequence and control the system, providing confidence to recruit more volunteers in order to identify and address improvements and to expand the operational capability of the framework.
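To make the three-stage behavioral sequence concrete, the following is a minimal sketch of how such a classify-verify-execute loop could be structured. This is not the authors' implementation; the class and parameter names (BehavioralSequence, classifier, verifier, executor, feedback) are illustrative assumptions, with the classifier standing in for the EEG-based facial-expression recognizer, the verifier for the head-movement confirmation step, and the feedback object for the vibrotactile/visual cues described in the abstract.

```python
# Hypothetical sketch of the classify -> verify -> execute behavioral
# sequence described in the abstract. All names are illustrative; the
# actual framework's interfaces are not specified in the source.

from enum import Enum, auto


class Stage(Enum):
    CLASSIFY = auto()
    VERIFY = auto()
    EXECUTE = auto()


class BehavioralSequence:
    def __init__(self, classifier, verifier, executor, feedback):
        self.classifier = classifier  # maps an EEG facial-expression sample to an action (or None)
        self.verifier = verifier      # maps a head-movement sample to True/False/None (accept/reject/no answer yet)
        self.executor = executor      # commands the robotic arm to perform the action
        self.feedback = feedback      # issues vibrotactile and/or visual cues to the user
        self.stage = Stage.CLASSIFY
        self.action = None

    def step(self, eeg_sample, head_sample):
        """Advance the sequence by one sensing cycle."""
        if self.stage is Stage.CLASSIFY:
            self.action = self.classifier(eeg_sample)
            if self.action is not None:
                # Cue the user (e.g., a vibrotactile effect) to confirm or reject.
                self.feedback.announce(self.action)
                self.stage = Stage.VERIFY
        elif self.stage is Stage.VERIFY:
            decision = self.verifier(head_sample)  # e.g., nod = accept, shake = reject
            if decision is True:
                self.stage = Stage.EXECUTE
            elif decision is False:
                # Rejected action: discard it and return to classification.
                self.action = None
                self.stage = Stage.CLASSIFY
        elif self.stage is Stage.EXECUTE:
            self.executor(self.action)
            self.action = None
            self.stage = Stage.CLASSIFY
```

In this sketch, execution is gated behind an explicit confirmation, matching the abstract's requirement that an action be verified through feedback before the robotic arm moves; a rejected action simply returns the loop to the classification stage rather than executing a misclassified intent.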