Modeling Brain-like Association Among Focal Visual Objects by a Bipartite Mesh
Jinxin Yang, Xin Hu, Yufei Zhao, Qi Xu, Wen-Chi Yang
2020 IEEE 19th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), September 26, 2020
DOI: 10.1109/ICCICC50026.2020.9450256
Abstract
The difficulty of traditional visual recognition tasks has long lain in segmenting objects in two-dimensional images, whereas this is less of an issue in human visual learning, which is aided by stereo vision and physical touch. In that setting, object classification and landmark matching rest fundamentally on the semantic similarity between inputs and conceptual prototypes in memory. Here we propose a brain-inspired cognition model that handles visual learning tasks after the focal objects have been distinguished from their backgrounds. We designed a bipartite mesh to implement visual cognition on human faces. This mesh resolves facial landmarks into point clouds in a unique semantic space, where facial characteristics can be perceived and classified through comparison with prototypes in the memorized ontology. These face prototypes can be updated online, and landmark matching between nearby prototypes is achieved through a direct mapping between the relative positions within their point clouds. In addition, association between distant prototypes in the semantic space can be realized by a sequence of matching steps through intermediaries in memory. Our findings suggest a concise framework for simulating human visual learning mechanisms that supports one-shot learning, online learning, and analogical reasoning, while remaining subject to certain brain-like constraints such as forgetting and the absence of analogical cues between two dissimilar concepts.
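To make the ideas in the abstract concrete, the sketch below illustrates (in Python, with NumPy) how prototype-based face cognition of this kind could be organized: faces as landmark point clouds, classification by semantic similarity to memorized prototypes with online updates, direct landmark matching only between sufficiently similar prototypes, and association between distant prototypes by chaining matches through intermediaries. This is not the authors' implementation; all class names, thresholds, and the cosine-similarity proxy are illustrative assumptions, and all prototypes are assumed to share the same number of landmarks.

```python
# Illustrative sketch only: a minimal prototype memory for landmark point clouds.
import numpy as np
from collections import deque


class PrototypeMemory:
    def __init__(self, similarity_threshold=0.8):
        self.prototypes = {}                      # label -> (n_landmarks, 2) array
        self.similarity_threshold = similarity_threshold

    @staticmethod
    def _normalize(points):
        # Express landmarks as relative positions: centered and scale-free.
        centered = points - points.mean(axis=0)
        scale = np.linalg.norm(centered)
        return centered / scale if scale > 0 else centered

    def similarity(self, a, b):
        # Semantic-similarity proxy: cosine similarity of the flattened,
        # normalized landmark point clouds (assumes equal landmark counts).
        return float(self._normalize(a).ravel() @ self._normalize(b).ravel())

    def classify(self, landmarks):
        # One-shot / online learning: return the closest prototype, or
        # memorize the input as a new prototype if nothing is similar enough.
        best_label, best_sim = None, -1.0
        for label, proto in self.prototypes.items():
            s = self.similarity(landmarks, proto)
            if s > best_sim:
                best_label, best_sim = label, s
        if best_label is not None and best_sim >= self.similarity_threshold:
            # Online update: nudge the stored prototype toward the new input.
            self.prototypes[best_label] = 0.9 * self.prototypes[best_label] + 0.1 * landmarks
            return best_label, best_sim
        new_label = f"proto_{len(self.prototypes)}"
        self.prototypes[new_label] = landmarks.copy()
        return new_label, best_sim

    def match_landmarks(self, label_a, label_b):
        # Direct landmark matching is only permitted between prototypes that are
        # close in the semantic space, mimicking the brain-like constraint that
        # dissimilar concepts offer no analogical cues.
        a, b = self.prototypes[label_a], self.prototypes[label_b]
        if self.similarity(a, b) < self.similarity_threshold:
            return None
        na, nb = self._normalize(a), self._normalize(b)
        # Map each landmark of a to the nearest relative position in b.
        dist = np.linalg.norm(na[:, None, :] - nb[None, :, :], axis=-1)
        return dist.argmin(axis=1)

    def associate(self, label_a, label_b):
        # Association between distant prototypes: breadth-first search over the
        # "similar enough" graph, composing landmark maps along the chain of
        # intermediaries. Returns None if no chain exists.
        queue = deque([(label_a, list(range(len(self.prototypes[label_a]))))])
        visited = {label_a}
        while queue:
            current, mapping = queue.popleft()
            if current == label_b:
                return mapping
            for nxt in self.prototypes:
                if nxt in visited:
                    continue
                step = self.match_landmarks(current, nxt)
                if step is None:
                    continue
                visited.add(nxt)
                queue.append((nxt, [int(step[j]) for j in mapping]))
        return None


if __name__ == "__main__":
    memory = PrototypeMemory(similarity_threshold=0.8)
    face = np.random.rand(68, 2)            # e.g. 68 facial landmarks
    label, sim = memory.classify(face)      # one-shot: stored as a new prototype
    print(label, sim)
```

Under these assumptions, the bipartite structure shows up as the two-sided relation between input landmark clouds and memorized prototypes, and "oblivion" could be emulated simply by evicting prototypes that are never revisited; both choices are sketches rather than claims about the paper's actual mechanism.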