Active Segmentation.
Ajay Mishra, Yiannis Aloimonos
International Journal of Humanoid Robotics, 2009. DOI: 10.1142/S0219843609001784
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define the basic segmentation problem as the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach differs from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
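The fixation-centred idea in the abstract can be illustrated with a deliberately simplified sketch. The paper's actual algorithm finds an optimal closed contour of boundary edge fragments around the fixation; the toy code below instead grows a region outward from the fixation point in an edge-strength map, stopping at strong boundary pixels. The function name, the `edge_map` representation (a 2-D grid of boundary probabilities), and the `edge_thresh` parameter are all illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def segment_from_fixation(edge_map, fixation, edge_thresh=0.5):
    """Toy sketch of fixation-based segmentation (not the paper's method).

    Grows a 4-connected region outward from the fixation point, stopping
    at pixels whose boundary probability exceeds edge_thresh, so the
    region returned is the area enclosed by strong edges around the
    fixation.

    edge_map:  2-D list of floats in [0, 1] (boundary probability).
    fixation:  (row, col) tuple inside the region of interest.
    Returns the set of (row, col) pixels in the segmented region.
    """
    h, w = len(edge_map), len(edge_map[0])
    r0, c0 = fixation
    region = {(r0, c0)}
    queue = deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and (nr, nc) not in region
                    and edge_map[nr][nc] < edge_thresh):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# A 7x7 toy edge map: a ring of strong edges (0.9) encloses a 3x3 interior.
E = [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.0],
     [0.0, 0.9, 0.1, 0.1, 0.1, 0.9, 0.0],
     [0.0, 0.9, 0.1, 0.1, 0.1, 0.9, 0.0],
     [0.0, 0.9, 0.1, 0.1, 0.1, 0.9, 0.0],
     [0.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.0],
     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
inside = segment_from_fixation(E, (3, 3))
# inside now contains exactly the 9 interior pixels enclosed by the ring
```

Note that, unlike this flood fill, the paper's contour-based formulation yields a closed depth boundary even when the edge map has small gaps, which is one motivation for searching over contours rather than growing regions.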
Journal description:
The International Journal of Humanoid Robotics (IJHR) covers all subjects on the mind and body of humanoid robots. It is dedicated to advancing new theories, techniques, and implementations contributing to future robots that not only imitate human beings but also serve them. While IJHR encourages the contribution of original papers solidly grounded on proven theories or experimental procedures, it also encourages innovative papers that venture into new frontier areas of robotics. Such papers need not demonstrate, in the early stages of research and development, the full potential of new findings on a physical or virtual robot.
IJHR welcomes original papers in the following categories:
Research papers, which disseminate scientific findings contributing to solving technical issues underlying the development of humanoid robots, or biologically inspired robots, with functionality related to either physical capabilities (i.e. motion) or mental capabilities (i.e. intelligence)
Review articles, which describe, in non-technical terms, the latest in basic theories, principles, and algorithmic solutions
Short articles (e.g. feature articles and dialogues), which discuss the latest significant achievements and the future trends in robotics R&D
Papers on curriculum development in humanoid robot education
Book reviews.