Juan B. Gómez, F. Prieto, T. Redarce
2008 International Workshop on Robotic and Sensors Environments, 2008-11-07
DOI: 10.1109/ROSE.2008.4669177
Towards a mouth gesture based laparoscope camera command
In this paper, a method for commanding three degrees of freedom of a robot using mouth gestures is presented. The method uses a normalized version of the a* component of the CIELAB color space to detect the mouth region and to segment the mouth structures. A set of features extracted from the segmented regions models a set of activation signals, which control the transitions between states in a state machine that translates gestures into actions. Results and conclusions on the behavior of the system are presented.
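The CIELAB a* channel separates reddish tones (high a*) from greenish ones (low a*), which is why lip pixels stand out against surrounding skin. A minimal sketch of the idea, assuming standard sRGB-to-CIELAB conversion (D65 white point) and an illustrative min-max normalization with a hypothetical threshold; the paper's exact normalization and decision rule may differ:

```python
def cielab_a(rgb):
    """Return the CIELAB a* component of an 8-bit sRGB pixel (D65 white)."""
    def lin(c):  # sRGB gamma expansion to linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # linear RGB -> CIE XYZ (sRGB primaries); only X and Y are needed for a*
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    def f(t):  # CIELAB companding function
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 500.0 * (f(x / 0.95047) - f(y / 1.0))

def segment_mouth(pixels, threshold=0.6):
    """Binary mask from the a* channel, min-max normalized to [0, 1].

    Reddish lip pixels carry a high a* value, so they survive the threshold.
    The threshold of 0.6 is an assumption for illustration only.
    """
    a_vals = [cielab_a(p) for p in pixels]
    lo, hi = min(a_vals), max(a_vals)
    span = (hi - lo) or 1.0
    return [(a - lo) / span >= threshold for a in a_vals]
```

In practice the same per-pixel computation would run over a full camera frame, with the mask then cleaned up morphologically before extracting region features.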
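The gesture-to-action translation can be pictured as a small finite-state machine whose transitions fire on activation signals. A hedged sketch follows; the state names, gesture labels ("open", "left", "right"), and camera actions are hypothetical stand-ins, not the paper's actual vocabulary:

```python
class GestureStateMachine:
    """Toy state machine mapping mouth-gesture signals to camera commands."""

    def __init__(self):
        self.state = "IDLE"
        # (current state, gesture signal) -> (next state, emitted action or None)
        self.transitions = {
            ("IDLE", "open"): ("ARMED", None),        # arm the controller
            ("ARMED", "left"): ("ARMED", "pan_left"),
            ("ARMED", "right"): ("ARMED", "pan_right"),
            ("ARMED", "open"): ("IDLE", None),        # disarm
        }

    def step(self, gesture):
        key = (self.state, gesture)
        if key not in self.transitions:
            return None  # unrecognized signal: remain in the current state
        self.state, action = self.transitions[key]
        return action
```

Gating motion commands behind an explicit arming gesture is a common safeguard in surgeon-robot interfaces, since it prevents incidental mouth movements from driving the laparoscope.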