Learning Biomimetic Perception for Human Sensorimotor Control

Masaki Nakada, Honglin Chen, Demetri Terzopoulos
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2018
DOI: 10.1109/CVPRW.2018.00257
We introduce a biomimetic simulation framework for human perception and sensorimotor control. Our framework features a biomechanically simulated musculoskeletal human model actuated by numerous skeletal muscles, with two human-like eyes whose retinas contain spatially nonuniform distributions of photoreceptors. Its prototype sensorimotor system comprises a set of 20 automatically trained deep neural networks (DNNs): half constitute the neuromuscular motor control subsystem, while the other half are devoted to the visual perception subsystem. Operating directly on the photoreceptor responses, two perception DNNs control eye and head movements, while eight DNNs extract the perceptual information needed to control the arms and legs. Thus, driven exclusively by its egocentric, active visual perception, our virtual human learns efficient, online visuomotor control of its eyes, head, and four limbs, performing a nontrivial task that involves foveating and visually pursuing a moving target object, coupled with visually guided reaching actions to intercept the incoming target.
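The "spatially nonuniform distribution of photoreceptors" mentioned in the abstract can be pictured as a foveated, log-polar-style layout: receptors are densely packed at the retinal center and become exponentially sparser toward the periphery. The following sketch is purely illustrative of that idea; the function name and parameter values are hypothetical and are not taken from the paper's actual retinal model.

```python
import math

def foveated_sample_positions(num_rings=8, samples_per_ring=12,
                              fovea_radius=0.01, growth=1.6):
    """Generate 2D receptor positions on a log-polar-style lattice:
    dense near the fovea (center), exponentially sparser outward.
    Illustrative only; parameters are hypothetical."""
    positions = [(0.0, 0.0)]  # a single central foveal receptor
    for ring in range(num_rings):
        # Eccentricity of each ring grows geometrically with ring index.
        r = fovea_radius * (growth ** ring)
        for k in range(samples_per_ring):
            theta = 2.0 * math.pi * k / samples_per_ring
            positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

pts = foveated_sample_positions()
print(len(pts))  # 1 central receptor + 8 rings * 12 receptors = 97
```

Because the ring radii grow geometrically rather than linearly, angular resolution is concentrated where the simulated gaze is directed, which is why foveation and pursuit of the target (rather than uniform full-field processing) are central to the task described above.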