Authors: Masaki Nakada, Honglin Chen, Demetri Terzopoulos
DOI: 10.1145/3225153.3225161
Published in: Proceedings of the 15th ACM Symposium on Applied Perception, August 10, 2018
Citation count: 9
Deep learning of biomimetic visual perception for virtual humans
Future generations of advanced, autonomous virtual humans will likely require artificial vision systems that more accurately model the human biological vision system. With this in mind, we propose a strongly biomimetic model of visual perception within a novel framework for human sensorimotor control. Our framework features a biomechanically simulated musculoskeletal human model actuated by numerous skeletal muscles, with two human-like eyes whose retinas have spatially nonuniform distributions of photoreceptors, not unlike biological retinas. The retinal photoreceptors capture the scene irradiance that reaches them, which is computed using ray tracing. The sensory subsystem of our model, which continuously operates on the photoreceptor outputs, comprises 10 automatically trained deep neural networks (DNNs). A pair of DNNs drives eye and head movements, while the other 8 DNNs extract the sensory information needed to control the arms and legs. Thus, exclusively by means of its egocentric, active visual perception, our biomechanical virtual human learns, by synthesizing its own training data, efficient online visuomotor control of its eyes, head, and limbs, enabling it to foveate and visually pursue target objects and to perform visually guided reaching actions that intercept the moving targets.
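To make the idea of a "spatially nonuniform distribution of photoreceptors" concrete, the sketch below places sample points on a retinal disc so that density falls off with eccentricity, concentrating receptors near a central fovea. This is a minimal illustrative scheme, not the paper's actual sampling model; the function name, the `r = u**2` radial bias, and all parameters are hypothetical choices for demonstration only.

```python
import math
import random

def foveated_retina(n=3600, radius=1.0, seed=0):
    """Place n photoreceptor positions on a disc of the given radius,
    with density decreasing toward the periphery (a hypothetical
    fovea-like sampling scheme; the paper's exact distribution is
    not specified in the abstract)."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        # Uniform disc sampling would use r = radius * sqrt(u);
        # using r = radius * u**2 instead biases samples toward the
        # centre, mimicking the dense foveal region of a retina.
        u = rng.random()
        r = radius * u * u
        theta = 2.0 * math.pi * rng.random()
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = foveated_retina()
# Fraction of receptors falling within the central 20% of the radius;
# a uniform distribution would put only ~4% of samples there.
central_fraction = sum(1 for x, y in pts if math.hypot(x, y) < 0.2) / len(pts)
```

In a ray-traced pipeline like the one the abstract describes, each such position would determine the direction of a ray cast from the eye into the scene, so the fovea is sampled far more densely than the periphery at the same total ray budget.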