{"title":"An algebraic method for detecting three-dimensional objects in a picture","authors":"G. Marola, A. Vaccarelli","doi":"10.1109/ICPR.1992.201589","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201589","url":null,"abstract":"An algorithm is proposed for the recognition of polyhedral objects. It is effective for objects that are rotated, scaled, or overlapping, and it uses only one orthographic projection. The algorithm is not based on the numerical solution of nonlinear equations and does not require any a priori knowledge. It relies instead on simple algebraic equations, which are easy to solve and do not require time-consuming numerical procedures.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124592503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computing discontinuity-preserved image flow","authors":"S. Raghavan, N. Gupta, L. Kanal","doi":"10.1109/ICPR.1992.201672","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201672","url":null,"abstract":"Relative motion between a camera and the objects in a scene generates a time-varying optical array of changing intensities. Several algorithms have been proposed in the recent past to compute image motion from an image sequence. The primary difficulty with these approaches is that, when multiple objects cross each other in a scene, the accuracy of the velocity estimates they compute near the object boundaries is far from satisfactory. The authors present a robust technique that overcomes this difficulty and gracefully preserves discontinuities in image motion across boundaries. One of the major advantages of this algorithm is its ability to precisely detect points of discontinuity in image velocity.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123501583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Elevation modeling and motion tracking using a binocular camera system with seven degrees of freedom","authors":"W. Pölzleitner, G. Paar","doi":"10.1109/ICPR.1992.201499","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201499","url":null,"abstract":"The autonomous navigation of a spacecraft during descent and landing using a computer vision system has been recognized as a key to the feasibility of interplanetary missions. One possible approach for navigation is to compute an elevation model and a high-level description of the planetary body during orbit, and to use this information jointly with real-time dynamic range estimation and tracking to navigate the spacecraft. The authors describe a vision system that is used to navigate the spacecraft during descent. The final goal is to place a roving vehicle on a selected landing site and guide it along its trajectory. The methods studied should be applicable to the landing phase and, later, to the autonomous motion of the rover on the surface.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116794355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Part segmentation for object recognition","authors":"Liang-Hua Chen, Wei-Chung Lin","doi":"10.1109/ICPR.1992.201556","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201556","url":null,"abstract":"Addresses the problem of segmenting objects into parts using stereo images. The part segmentation process has three components: surface segmentation, region grouping, and volumetric model recovery/segmentation. The surface segmentation process partitions the image into a set of regions such that each region represents a surface smooth in depth. The region grouping process merges the segmented regions into meaningful parts. Finally, the volumetric model recovery/segmentation phase recovers the part model and, if necessary, segments that part into smaller parts. Since multi-shape models drive the part segmentation process, more geometric properties of the object can be captured, and the application domain of the approach is broader than that of previous approaches.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116862936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual observation under uncertainty as a discrete event process","authors":"T. Sobh, R. Bajcsy","doi":"10.1109/ICPR.1992.201592","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201592","url":null,"abstract":"The problem of the development and implementation of a discrete event dynamic system observer for a moving agent is considered. The authors present a modeling approach for the visual system and its observer, where the 'events' are defined as ranges on parameter subsets. In particular, the proposed system is used for observing a manipulation process in which a robot hand manipulates an object. The hand/object interaction is recognized over time and a stabilizing observer is constructed. The resulting robot arm behavior is constructed as a hybrid intelligent mechanism. The work examines closely the possibilities for errors, mistakes, and uncertainties in the manipulation system, the observer construction process, and the event identification mechanisms. Some results from a sequence of a peg-in-hole operation are documented.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128974322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structure from motion using an active vision paradigm","authors":"F. Chaumette, S. Boukir","doi":"10.1109/ICPR.1992.201503","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201503","url":null,"abstract":"A method for the reconstruction and localization of geometrical primitives using active dynamic vision is presented. The approach is based on the use of the interaction matrix related to the visual data describing a primitive. Active vision is then introduced by computing adequate camera motions with a control law in closed loop with respect to the visual data. Simulation results on the localization of a sphere are presented and show that active vision can substantially improve the accuracy of the structure estimation.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130202264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape and source from shading using zero crossings","authors":"S. Kondo, Tan Xu, H. Tanaka, M. Sangwarasil, K. Atsuta","doi":"10.1109/ICPR.1992.201617","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201617","url":null,"abstract":"Proposes a new method for estimating the 3D shape of an object and the illuminant direction from a single image using zero crossings, under the assumption that the illuminant direction and strength and the object's surface albedo are unknown. In most practical cases the illuminant direction is unknown. For these cases, a theory was proposed by M.J. Brooks and B.K.P. Horn (1985) in which the illuminant direction and 3D shape are simultaneously estimated from a single image. That theory, however, cannot be applied to an object whose surface is not convex. The authors propose a new theory which has no such drawback.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123730029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Randomized Hough transform applied to translational and rotational motion analysis","authors":"H. Kälviäinen, E. Oja, Lei Xu","doi":"10.1109/ICPR.1992.201650","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201650","url":null,"abstract":"A method has been developed to calculate 2-D motion in a sequence of time-varying images. The method, called motion detection using the randomized Hough transform (MDRHT), is based on the randomized Hough transform (RHT), which considerably decreases the time consumption and memory requirements of the Hough transform. The idea of the MDRHT is to randomly pick point pairs from two images and calculate the translation from them. The points can be, e.g., edge points of the original images. This approach avoids difficulties of standard segmentation methods, such as overlapping and covering, and retains the advantages provided by the RHT. The method can be generalized by picking more than two points. After a brief review of the RHT applied to motion detection, the extended algorithm for calculating both translation and rotation is presented in this paper.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130528086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust disparity estimation in terrain modeling for spacecraft navigation","authors":"G. Paar, W. Pölzleitner","doi":"10.1109/ICPR.1992.201666","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201666","url":null,"abstract":"Navigation and imagery in the orbit, descent, and landing phases of an interplanetary mission require methods that can derive the elevation map of a planetary body using remote sensing tools. The authors propose stereovision techniques for this task. An algorithm for correspondence matching, one of the crucial steps in automatic terrain modeling, is introduced. It uses well-known pyramid-based data structures, but is novel in its direct application of methods from statistical pattern recognition. Feature vectors for correspondence matching and feature selection techniques are used to find optimal features. These include grey-level statistics (mean, variance) as well as more sophisticated features derived from operators such as local frequency and edge gradient or, as an extension, Moravec, Gabor, or Fourier features. The applicability of the algorithm in the remote sensing scenario of interplanetary missions is verified using a mockup simulation of the Martian surface.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128789266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient image understanding based on the Markov random field model and error backpropagation network","authors":"Il Y. Kim, H. Yang","doi":"10.1109/ICPR.1992.201595","DOIUrl":"https://doi.org/10.1109/ICPR.1992.201595","url":null,"abstract":"Image labeling is the process of recognizing each segmented region, properly exploiting the properties of the regions and the spatial relationships between them. In some sense, image labeling is an optimization process that indexes regions using constraints derived from scene knowledge. This paper further investigates a method for efficiently labeling images using the Markov random field (MRF). The MRF model is defined on the region adjacency graph, and the labeling is then optimally determined using simulated annealing. The MRF model parameters are estimated automatically using an error backpropagation network. The authors analyze the proposed method through experiments on real natural scene images.","PeriodicalId":410961,"journal":{"name":"[1992] Proceedings. 11th IAPR International Conference on Pattern Recognition","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126426881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}