{"title":"Neuroplanners for hand/eye coordination","authors":"D. H. Graf, W. LaLonde","doi":"10.1109/IJCNN.1989.118296","DOIUrl":null,"url":null,"abstract":"The authors generalize a previously described architecture, which they now call a neuroplanner, and apply it to an extension of the problem it was initially designed to solve-the target-directed control of a robot arm in an obstacle-cluttered workspace. By target directed they mean that the arm can position its end-effector at the point of gaze specified by a pair of stereo targeting cameras. Hence, the system is able to 'touch' the point targeted by its eyes. The new design extends the targeting system to an articulated camera platform-the equivalent of the human eye-head-neck system. This permits the robot to solve the inverse problem: given the current configuration of the arm, the system is able to reorient the camera platform to focus on the end-effector. Because of obstacles, the camera platform will generally have to peer around obstacles that block its view. Hence the new system is able to move the eye-head-neck system to see the hand.","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118296","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 21
Abstract
The authors generalize a previously described architecture, which they now call a neuroplanner, and apply it to an extension of the problem it was initially designed to solve: the target-directed control of a robot arm in an obstacle-cluttered workspace. By target-directed they mean that the arm can position its end-effector at the point of gaze specified by a pair of stereo targeting cameras; the system is thus able to 'touch' the point targeted by its eyes. The new design extends the targeting system to an articulated camera platform, the equivalent of the human eye-head-neck system. This permits the robot to solve the inverse problem: given the current configuration of the arm, the system can reorient the camera platform to focus on the end-effector. In a cluttered workspace the camera platform will generally have to peer around obstacles that block its view, so the new system is able to move the eye-head-neck system to 'see' the hand.
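The two coordination directions described in the abstract can be illustrated geometrically. The sketch below is a hypothetical, minimal planar model (not the paper's neural architecture, which learns these mappings): a two-link arm that reaches the gaze point ("touch what the eyes target") and a pan-only camera head that reorients toward the end-effector (the inverse problem, "see the hand"). Link lengths, the elbow-down solution, and the camera placement are all illustrative assumptions.

```python
import math

# Assumed link lengths of a planar 2-link arm (illustrative only).
L1, L2 = 1.0, 1.0

def arm_ik(x, y):
    """'Touch what the eyes target': joint angles (shoulder, elbow)
    placing the end-effector at the gaze point (x, y); elbow-down
    closed-form solution."""
    r2 = x * x + y * y
    c2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c2 = max(-1.0, min(1.0, c2))          # clamp for numerical safety
    t2 = math.acos(c2)                     # elbow angle
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2),
                                        L1 + L2 * math.cos(t2))
    return t1, t2

def arm_fk(t1, t2):
    """Forward kinematics: end-effector position from joint angles."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def camera_pan(cam_x, cam_y, hand_x, hand_y):
    """The inverse problem, 'see the hand': pan angle that aims a
    camera at (cam_x, cam_y) toward the end-effector."""
    return math.atan2(hand_y - cam_y, hand_x - cam_x)

# The eyes target a point; the arm reaches it; the head turns to watch it.
gx, gy = 1.2, 0.8                          # gaze point from the cameras
t1, t2 = arm_ik(gx, gy)                    # configure the arm
hx, hy = arm_fk(t1, t2)                    # where the hand actually is
pan = camera_pan(-0.5, 1.5, hx, hy)        # reorient the camera head
```

The paper's contribution is learning these mappings with a neuroplanner while also routing around obstacles, which the closed-form geometry above deliberately ignores.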