Image Based Visual Servoing Using Takagi-Sugeno Fuzzy Neural Network Controller

Miao Hao, Zeng-qi Sun, Masakazu Fujii
2007 IEEE 22nd International Symposium on Intelligent Control, October 2007. DOI: 10.1109/ISIC.2007.4450860
In this paper, an image-based visual servoing (IBVS) method built on a Takagi-Sugeno fuzzy neural network controller (TS-FNNC) is proposed. First, eigenspace-based image compression is explored and adopted as the global feature transformation. The inner structure, performance, and training method of the T-S fuzzy neural network controller are then discussed, and the overall architecture of the TS-FNNC is presented. No artificial markers are required during visual servoing, and no a priori knowledge of the robot kinematics and dynamics or of the camera calibration is needed. The method is implemented and validated on a Motoman UP6 based eye-in-hand platform, and experimental results are reported.
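The eigenspace-based image compression mentioned above is, in essence, a principal-component projection of vectorized images. The paper does not give implementation details, so the following is only a minimal sketch under that assumption (all function names and array sizes are illustrative):

```python
import numpy as np

def learn_eigenspace(images, k):
    """Learn a k-dimensional eigenspace from flattened grayscale images.

    images: (n, h*w) array, one vectorized image per row.
    Returns the mean image and the top-k principal directions.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data matrix; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]            # shapes: (h*w,), (k, h*w)

def project(image, mean, basis):
    """Compress one image to a k-dimensional global feature vector."""
    return basis @ (image - mean)

# Toy demonstration with 20 random 8x8 "images".
rng = np.random.default_rng(0)
train = rng.random((20, 64))
mean, basis = learn_eigenspace(train, k=4)
feat = project(train[0], mean, basis)
print(feat.shape)                  # (4,)
```

Such a projection reduces each camera frame to a small feature vector, which is what allows the controller to work on whole images without artificial markers.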
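For readers unfamiliar with the Takagi-Sugeno model underlying the controller, a first-order T-S fuzzy system computes a firing-strength-weighted average of per-rule linear consequents. The sketch below is a generic illustration of that inference scheme, not a reconstruction of the paper's controller; all rule parameters are made up:

```python
import numpy as np

def ts_infer(x, centers, sigmas, a, b):
    """First-order Takagi-Sugeno inference for one input vector x.

    Each rule i has a Gaussian membership (centers[i], sigmas[i]) and a
    linear consequent y_i = a[i] . x + b[i]; the output is the
    normalized, firing-strength-weighted average of the consequents.
    """
    # Firing strength of each rule: product of per-dimension Gaussians.
    w = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    y = a @ x + b                       # per-rule linear consequents
    return np.sum(w * y) / np.sum(w)    # weighted average

# Two toy rules over a 2-D input.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # rule centers
sigmas  = np.ones((2, 2))                      # membership widths
a = np.array([[1.0, 0.0], [0.0, 1.0]])         # consequent slopes
b = np.array([0.0, 1.0])                       # consequent offsets
out = ts_infer(np.array([0.5, 0.5]), centers, sigmas, a, b)
print(out)                                     # 1.0 (both rules fire equally)
```

In a TS-FNNC, these rule parameters are realized as network weights and tuned by training rather than hand-set as here.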