{"title":"Target Tracking, Navigation, and Decision-Making","authors":"S. Grossberg","doi":"10.1093/oso/9780190070557.003.0009","DOIUrl":null,"url":null,"abstract":"This chapter explains why and how tracking of objects moving relative to an observer, and visual optic flow navigation of an observer relative to the world, are controlled by complementary cortical streams through MT--MSTv and MT+-MSTd, respectively. Target tracking uses subtractive processing of visual signals to extract an object’s bounding contours as they move relative to a background. Navigation by optic flow uses additive processing of an entire scene to derive properties such as an observer’s heading, or self-motion direction, as it moves through the scene. The chapter explains how the aperture problem for computing heading in natural scenes is solved in MT+-MSTd using a hierarchy of processing stages that is homologous to the one that solves the aperture problem for computing motion direction in MT--MSTv. Both use feedback which obeys the ART Matching Rule to select final perceptual representations and choices. Compensation for eye movements using corollary discharge, or efference copy, signals enables an accurate heading direction to be computed. Neurophysiological data about heading direction are quantitatively simulated. Log polar processing by the cortical magnification factor simplifies computation of motion direction. This space-variant processing is maximally position invariant due to the cortical choice of network parameters. How smooth pursuit occurs, and is maintained during accurate tracking, is explained. Goal approach and obstacle avoidance are explained by attractor-repeller networks. Gaussian peak shifts control steering to a goal, as well as peak shift and behavioral contrast during operant conditioning, and vector decomposition during the relative motion of object parts.","PeriodicalId":370230,"journal":{"name":"Conscious Mind, Resonant Brain","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conscious Mind, Resonant Brain","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/oso/9780190070557.003.0009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This chapter explains why and how tracking of objects moving relative to an observer, and visual optic flow navigation of an observer relative to the world, are controlled by complementary cortical streams through MT⁻-MSTv and MT⁺-MSTd, respectively. Target tracking uses subtractive processing of visual signals to extract an object’s bounding contours as the object moves relative to a background. Navigation by optic flow uses additive processing of an entire scene to derive properties such as the observer’s heading, or self-motion direction, as the observer moves through the scene. The chapter explains how the aperture problem for computing heading in natural scenes is solved in MT⁺-MSTd using a hierarchy of processing stages that is homologous to the one that solves the aperture problem for computing motion direction in MT⁻-MSTv. Both hierarchies use feedback that obeys the ART Matching Rule to select final perceptual representations and choices. Compensation for eye movements using corollary discharge (efference copy) signals enables an accurate heading direction to be computed. Neurophysiological data about heading direction are quantitatively simulated. Log polar processing by the cortical magnification factor simplifies computation of motion direction. This space-variant processing is maximally position invariant due to the cortical choice of network parameters. How smooth pursuit occurs, and is maintained during accurate tracking, is also explained. Goal approach and obstacle avoidance are explained by attractor-repeller networks. Gaussian peak shifts control steering toward a goal; they also explain peak shift and behavioral contrast during operant conditioning, and vector decomposition during the relative motion of object parts.
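To make the additive-processing and efference-copy ideas concrete, the sketch below estimates heading from optic flow by template matching: radial flow templates for candidate headings are pooled additively over the whole scene, after the rotational flow predicted by an efference copy of eye velocity has been subtracted. This is an illustrative toy, not the MT⁺-MSTd model described in the chapter; the function names, parameters, and the simplified (uniform) eye-rotation flow are all assumptions.

# A minimal sketch (not the chapter's model) of heading estimation from
# optic flow: additive pooling of translational flow over the whole scene,
# after subtracting the rotational flow predicted by an efference copy of
# eye velocity. Names and parameters here are illustrative.
import numpy as np

def radial_template(points, heading_xy):
    """Unit flow vectors radiating from a candidate focus of expansion."""
    v = points - heading_xy                       # vectors pointing away from the FOE
    norm = np.linalg.norm(v, axis=1, keepdims=True) + 1e-9
    return v / norm

def estimate_heading(points, flow, eye_flow, candidates):
    """Pick the candidate heading whose radial template best matches the flow.

    points:     (N, 2) image positions of flow samples
    flow:       (N, 2) measured optic-flow vectors
    eye_flow:   (2,) rotational flow per sample predicted from the efference copy
    candidates: (M, 2) candidate focus-of-expansion positions
    """
    translational = flow - eye_flow               # corollary-discharge compensation
    scores = []
    for c in candidates:
        template = radial_template(points, c)
        # Additive processing: sum the template match over the entire scene.
        scores.append(np.sum(template * translational))
    return candidates[int(np.argmax(scores))]

# Toy usage: expanding flow centered on (0.2, 0.0) plus a uniform eye-rotation flow.
pts = np.random.uniform(-1, 1, size=(500, 2))
true_heading = np.array([0.2, 0.0])
eye = np.array([0.05, 0.0])
flow = (pts - true_heading) + eye
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 21),
                            np.linspace(-1, 1, 21)), -1).reshape(-1, 2)
print(estimate_heading(pts, flow, eye, grid))     # recovers approximately [0.2, 0.0]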
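The log polar, cortical-magnification mapping mentioned above can be summarized in a few lines: retinal position maps to (log eccentricity, polar angle), so scaling (looming) and rotation about the fovea become approximate shifts along the two cortical axes, which is one way such a mapping simplifies motion-direction computation. The constant a and the specific formula are illustrative assumptions, not the book's parameters.

# A minimal sketch of a log-polar (cortical-magnification) mapping, assuming
# an illustrative compression constant `a`.
import numpy as np

def log_polar(x, y, a=0.5):
    """Map retinal coordinates (x, y) to cortical coordinates (u, v)."""
    r = np.hypot(x, y)
    u = np.log(r + a)        # log compression of eccentricity (magnification)
    v = np.arctan2(y, x)     # polar angle
    return u, v

# Scaling the image by 2x shifts u by about log(2) once r >> a; rotation shifts v.
print(log_polar(1.0, 0.0))   # (log(1.5), 0.0)
print(log_polar(2.0, 0.0))   # (log(2.5), 0.0)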
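Finally, a minimal sketch of an attractor-repeller steering law, in the spirit of the goal-approach and obstacle-avoidance account above: the goal direction attracts the current heading while each obstacle repels it, with obstacle influence falling off in a Gaussian-like way with angular offset and with distance, so the shifted peak of the summed field steers the agent around the obstacle toward the goal. The functional form and constants below are illustrative, not the chapter's equations.

# A minimal sketch of attractor-repeller steering dynamics; all constants
# and the specific functional form are illustrative assumptions.
import numpy as np

def steering_rate(phi, goal_angle, goal_dist, obstacles,
                  k_g=3.0, k_o=6.0, c_ang=2.5, c_dist=0.8, floor=0.3):
    """Angular velocity dphi/dt for the current heading phi (radians)."""
    # Attractor: turn toward the goal; attraction weakens with goal distance
    # but never vanishes (the `floor` term keeps the goal pulling).
    dphi = -k_g * (phi - goal_angle) * (np.exp(-c_dist * goal_dist) + floor)
    # Repellers: each obstacle pushes the heading away from its own direction,
    # with a Gaussian falloff in angular offset and exponential falloff in distance.
    for obs_angle, obs_dist in obstacles:
        offset = phi - obs_angle
        dphi += k_o * offset * np.exp(-c_ang * offset**2) * np.exp(-c_dist * obs_dist)
    return dphi

# Toy usage: goal straight ahead (angle 0), one obstacle slightly to the left.
phi = 0.0
for _ in range(200):                              # simple Euler integration
    phi += 0.01 * steering_rate(phi, goal_angle=0.0, goal_dist=5.0,
                                obstacles=[(-0.2, 2.0)])
print(phi)   # heading settles a little to the right, clearing the obstacle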