Latest Articles in CVGIP: Image Understanding

S+-Trees: An Efficient Structure for the Representation of Large Pictures
CVGIP: Image Understanding. Pub Date: 1994-05-01. DOI: 10.1006/ciun.1994.1018
Authors: Dejonge W., Scheuermann P., Schijf A.
Abstract: We are concerned in this paper with the efficient encoding and manipulation of pixel trees that are resident on secondary devices. We introduce a new structure, the S+-tree, which consists of a paged linear treecode representation of the picture (data) and an index whose entries represent separators among some of the leafcodes implicitly embedded in the linear representation. Our scheme combines the advantages of treecode and leafcode representations by offering the space efficiency of DF-expressions and the indexing capabilities of B+-trees, thus permitting easy sequential and random access to a compact representation of pictorial data. We describe an algorithm which encodes our structure from an ordered list of black leafcodes. The paged structure of the S+-tree, whereby each data page is a self-contained tree, makes it possible to design an efficient random-access search algorithm that finds the color of a given region corresponding to a quadrant or semi-quadrant. The search algorithm is non-recursive and can be optimized to work bytewise instead of bitwise. We also present an efficient method for performing translation operations on large pictures stored on secondary devices and illustrate its efficiency with the S+-tree structure.
Volume 59, Issue 3, Pages 265-280.
Citations: 35
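To make the treecode idea concrete, here is a minimal Python sketch (not the authors' S+-tree code) that encodes a 2^n × 2^n binary image as a linear treecode in the style of a DF-expression: '(' marks a gray (internal) node and 'B'/'W' mark uniform black/white leaves. The S+-tree then pages such a string and indexes the pages with B+-tree-style separators.

    import numpy as np

    def treecode(img):
        # Encode a 2^n x 2^n binary array as a linear treecode:
        # '(' = gray node, 'B'/'W' = uniform black/white leaf,
        # children emitted in NW, NE, SW, SE order.
        if img.min() == img.max():
            return 'B' if img.min() == 1 else 'W'
        h2, w2 = img.shape[0] // 2, img.shape[1] // 2
        quads = (img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:])
        return '(' + ''.join(treecode(q) for q in quads)

    picture = np.zeros((4, 4), dtype=int)
    picture[2:, 2:] = 1            # one black quadrant
    print(treecode(picture))       # -> "(WWWB"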
On Topology Preservation in 3D Thinning
CVGIP: Image Understanding. Pub Date: 1994-05-01. DOI: 10.1006/ciun.1994.1023
Authors: Ma C.M.
Abstract: Topology preservation is a major concern of parallel thinning algorithms for 2D and 3D binary images. To prove that a parallel thinning algorithm preserves topology, one must show that it preserves topology for all possible images. But it would be difficult to check all images, since there are too many possible images. Efficient sufficient conditions which can simplify such proofs for the 2D case were proposed by Ronse [Discrete Appl. Math. 21, 1988, 69-79]. By Ronse's results, a 2D parallel thinning algorithm can be proved to be topology preserving by checking a rather small number of configurations. This paper establishes sufficient conditions for 3D parallel thinning algorithms to preserve topology.
Volume 59, Issue 3, Pages 328-339.
Citations: 138
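The paper's conditions are stated over 3D neighbourhood configurations; the underlying 2D notion they generalize is the "simple point" test, sketched below in Python. This is an illustrative 2D check only (assuming 8-connected foreground and 4-connected background), not the paper's 3D conditions.

    import numpy as np
    from scipy.ndimage import label

    EIGHT = np.ones((3, 3), dtype=int)   # 8-connectivity structuring element

    def is_simple_2d(patch):
        # patch: 3x3 binary array whose centre pixel is foreground (1).
        # The centre is 'simple' (deletable without changing local topology)
        # iff its neighbourhood contains exactly one foreground 8-component
        # and exactly one background 4-component touching a 4-neighbour.
        nb = patch.copy()
        nb[1, 1] = 0
        fg_lab, _ = label(nb, structure=EIGHT)
        bg_lab, _ = label(1 - patch)      # default structure = 4-connectivity
        n8 = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]
        n4 = [(0, 1), (1, 0), (1, 2), (2, 1)]
        fg_ids = {fg_lab[r, c] for r, c in n8 if fg_lab[r, c]}
        bg_ids = {bg_lab[r, c] for r, c in n4 if bg_lab[r, c]}
        return len(fg_ids) == 1 and len(bg_ids) == 1

    patch = np.array([[0, 1, 1],
                      [0, 1, 1],
                      [0, 0, 0]])
    print(is_simple_2d(patch))            # -> True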
Model-Based Multiresolution Motion Estimation in Noisy Images
CVGIP: Image Understanding. Pub Date: 1994-05-01. DOI: 10.1006/ciun.1994.1021
Authors: Goh W.B., Martin G.R.
Abstract: It is argued that accurate optical flow can only be determined if problems such as local motion ambiguity, motion segmentation, and occlusion detection are simultaneously addressed. To meet this requirement, a new multiresolution region-growing algorithm is proposed. The algorithm segments the flow field of an image into homogeneous regions that are consistent with a linear affine flow model. To ensure stability and robustness in the presence of noise, the region-growing process is implemented within the hierarchical framework of a spatial lowpass pyramid. Results of applying the algorithm to both natural and synthetic image sequences are presented.
Volume 59, Issue 3, Pages 307-319.
Citations: 9
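A core ingredient is fitting a linear affine flow model to the motion vectors inside a candidate region and growing the region while the fit stays good. Below is a minimal least-squares sketch; the example data and the accept/reject residual are assumptions, not the paper's procedure.

    import numpy as np

    def fit_affine_flow(xs, ys, us, vs):
        # Fit u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y to flow samples
        # (xs, ys) -> (us, vs) by linear least squares; return the six
        # parameters and the mean residual used to judge region homogeneity.
        A = np.column_stack([np.ones_like(xs), xs, ys])
        pu = np.linalg.lstsq(A, us, rcond=None)[0]
        pv = np.linalg.lstsq(A, vs, rcond=None)[0]
        residual = np.hypot(A @ pu - us, A @ pv - vs).mean()
        return np.concatenate([pu, pv]), residual

    xs = np.array([0., 1., 2., 0., 1., 2.])
    ys = np.array([0., 0., 0., 1., 1., 1.])
    params, err = fit_affine_flow(xs, ys, 0.5 + 0.1 * xs, -0.2 * ys)
    print(params, err)    # recovers (0.5, 0.1, 0, 0, 0, -0.2) with ~0 residual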
Shape from Shading with Perspective Projection
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1013
Authors: Lee K.M., Kuo C.C.J.
Abstract: Most conventional SFS (shape from shading) algorithms have been developed under the assumption of orthographic projection. However, the assumption is not valid when an object is not far from the camera, and it therefore causes severe reconstruction error in many real applications. In this research, we develop a new iterative algorithm for recovering surface heights from shaded images obtained with perspective projection. By dividing an image into a set of nonoverlapping triangular domains and approximating a smooth surface by the union of triangular surface patches, we can relate image brightness in the image plane directly to surface nodal heights in the world space via a linearized reflectance map based on the perspective projection model. To determine the surface height, we minimize a cost functional defined as the sum of squares of the brightness error by solving a system of equations parameterized by nodal heights. Furthermore, we apply a successive linearization scheme in which the reflectance map is linearized with respect to the surface nodal heights obtained from the previous iteration, so that the approximation error of the reflectance map is reduced and the accuracy of the reconstructed surface improves iteratively. The proposed method reconstructs surface heights directly and does not require any additional integrability constraint. Simulation results for synthetic and real images demonstrate the performance and efficiency of the new method.
Volume 59, Issue 2, Pages 202-212.
Citations: 63
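In generic terms (a standard reading of the abstract, not the paper's exact discretization), the reconstruction minimizes a sum-of-squares brightness error over the nodal heights z, relinearizing the reflectance map at each iteration:

    E(z) = \sum_i \big[ I_i - R_i(z) \big]^2,
    R_i(z) \approx R_i(z^{(k)}) + \nabla_z R_i(z^{(k)}) \cdot (z - z^{(k)}),

so each iteration solves a linear least-squares problem for z^{(k+1)}, and the linearization point is updated until the brightness error converges.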
Human Face Recognition and the Face Image Set's Topology
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1017
Authors: Bichsel M., Pentland A.P.
Abstract: If we consider an n × n image as an n²-dimensional vector, then images of faces can be considered as points in this n²-dimensional image space. Our previous studies of physical transformations of the face, including translation, small rotations, and illumination changes, showed that the set of face images consists of relatively simple connected subregions in image space. Consequently, linear matching techniques can be used to obtain reliable face recognition. However, for more general transformations, such as large rotations or scale changes, the face subregions become highly non-convex. We have therefore developed a scale-space matching technique that allows us to take advantage of knowledge about important geometrical transformations and about the topology of the face subregion in image space. While recognition of faces is the focus of this paper, the algorithm is sufficiently general to be applicable to a large variety of object recognition tasks.
Volume 59, Issue 2, Pages 254-261.
Citations: 135
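Viewing each n × n image as a point in R^(n²), the simplest matcher consistent with the first part of the abstract is normalized-correlation nearest neighbour on the vectorized images. A minimal sketch follows; the gallery and names are hypothetical, and the paper's scale-space matching goes well beyond this.

    import numpy as np

    def unit_vector(img):
        # Flatten an image into a zero-mean, unit-norm vector in R^(n^2).
        v = img.ravel().astype(float)
        v -= v.mean()
        return v / (np.linalg.norm(v) + 1e-12)

    def best_match(probe, gallery):
        # Nearest neighbour by normalized correlation (dot product of unit vectors).
        scores = {name: float(unit_vector(probe) @ unit_vector(img))
                  for name, img in gallery.items()}
        return max(scores, key=scores.get)

    gallery = {"alice": np.random.rand(16, 16), "bob": np.random.rand(16, 16)}
    probe = gallery["alice"] + 0.01 * np.random.rand(16, 16)
    print(best_match(probe, gallery))    # -> "alice"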
Calibration of a Computer Controlled Robotic Vision Sensor with a Zoom Lens
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1015
Authors: Tarabanis K., Tsai R.Y., Goodman D.S.
Abstract: Active vision sensors are increasingly being employed in vision systems for their greater flexibility. For example, vision sensors in hand-eye configurations with computer-controllable lenses (e.g., zoom lenses) can be set to values which satisfy the sensing situation at hand. For such applications, it is essential to determine the mapping between the parameters that can actually be controlled in a reconfigurable vision system (e.g., the robot arm pose, the zoom setting of the lens) and the higher-level viewpoint parameters that must be set to desired values (e.g., the viewpoint location, focal length). In this paper we present calibration techniques to determine this mapping. In addition, we discuss how to use these relationships in order to achieve the desired values of the viewpoint parameters by setting the controllable parameters to the appropriate values. The sensor setup considered consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus, and aperture control. The calibration techniques are applied to the H6 × 12.5R Fujinon zoom lens and experimental results are shown.
Volume 59, Issue 2, Pages 226-241.
Citations: 35
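One way to realize such a mapping is to fit a smooth curve from a controllable setting (e.g., the zoom motor count) to a viewpoint parameter (e.g., focal length) using calibration samples, then invert it numerically to choose the setting for a desired value. The sample values below are hypothetical, not measurements from the Fujinon lens used in the paper.

    import numpy as np

    # Hypothetical calibration pairs: zoom motor setting -> measured focal length (mm).
    settings = np.array([0, 200, 400, 600, 800, 1000], dtype=float)
    focal_mm = np.array([12.5, 20.0, 31.0, 45.0, 60.0, 75.0])

    coeffs = np.polyfit(settings, focal_mm, deg=3)     # forward mapping f(setting)

    def setting_for_focal_length(target_mm):
        # Invert the fitted mapping on a dense grid: return the setting whose
        # predicted focal length is closest to the requested value.
        grid = np.linspace(settings.min(), settings.max(), 2001)
        return float(grid[np.argmin(np.abs(np.polyval(coeffs, grid) - target_mm))])

    print(setting_for_focal_length(40.0))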
Inverting an Illumination Model from Range and Intensity Maps
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1012
Authors: Kay G., Caelli T.
Abstract: We propose a solution to the problem of determining surface material properties from range and intensity data using a simplified version of the Torrance-Sparrow illumination model. The solution uses the photometric stereo method and regularization to invert the model equations at each point on a surface. Assuming a convex surface, one range map, and four or more intensity maps obtained using point light sources, we classify the surface into nonhighlight regions, specular highlight regions, and rank-deficient regions. This classification allows the appropriate solution method to be applied to each region: in nonhighlight regions we use linear least squares; in highlight regions, nonlinear separable least squares with regularization; and in rank-deficient regions, interpolation. The solution consists of the values of the three parameters of the illumination model at each point on the surface. We believe this technique to be a useful adjunct to recently reported noncontact modeling systems. These systems have been designed to build computer graphics models automatically from real objects by determining surface geometry, surface relief texture, and material properties. Our technique greatly enhances the modeling of material properties. The paper concludes with a number of examples of the method applied to synthetic and real images, and a discussion of possibilities for future systems.
Volume 59, Issue 2, Pages 183-201.
Citations: 60
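In non-highlight regions the problem essentially reduces to a diffuse term, so with the surface normal known from the range map and four or more light directions the albedo follows from a one-parameter linear least-squares fit. The sketch below uses that Lambertian simplification; it is an illustration, not the full Torrance-Sparrow inversion described in the paper.

    import numpy as np

    def fit_diffuse_albedo(normal, light_dirs, intensities):
        # Model I_k = rho * max(n . l_k, 0) at one surface point and solve
        # for rho in the least-squares sense over K >= 4 light sources.
        shading = np.maximum(light_dirs @ normal, 0.0)
        return float(shading @ intensities) / float(shading @ shading + 1e-12)

    n = np.array([0.0, 0.0, 1.0])
    L = np.array([[0, 0, 1], [0.5, 0, 0.866], [0, 0.5, 0.866], [-0.5, 0, 0.866]], dtype=float)
    I = 0.8 * np.maximum(L @ n, 0.0)          # synthetic intensities with rho = 0.8
    print(fit_diffuse_albedo(n, L, I))        # -> ~0.8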
Extracting Topographic Terrain Features from Elevation Maps
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1011
Authors: Kweon I.S., Kanade T.
Abstract: Applications such as autonomous navigation in natural terrain and the automation of the map-making process require high-level scene descriptions as well as geometrical representations of natural terrain environments. In this paper, we present methods for building high-level terrain descriptions, referred to as topographic maps, by extracting terrain features such as "peaks," "pits," "ridges," and "ravines" from the contour map. The resulting topographic map contains the location and type of terrain features as well as the ground topography. We present new algorithms for extracting topographic maps consisting of topographic features (peaks, pits, ravines, and ridges) and contour maps, and we develop new definitions for these features based on the contour map. We build a contour map from an elevation map and generate the connectivity tree of all regions separated by the contours. We use this connectivity tree, called a topographic change tree, to extract the topographic features. Experimental results on a digital elevation model (DEM) support our definitions of topographic features and the approach.
Volume 59, Issue 2, Pages 171-182.
Citations: 176
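The paper defines peaks, pits, ridges, and ravines through the contour map and its connectivity tree. A much cruder but self-contained way to see the point features is a local-extremum test on the DEM, sketched below (an illustration only, not the paper's contour-tree method).

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def peaks_and_pits(dem):
        # A cell is a 'peak' if it is strictly higher than all 8 neighbours
        # and a 'pit' if it is strictly lower; boundary cells are compared
        # against reflected values, which is adequate for a sketch.
        footprint = np.ones((3, 3), dtype=bool)
        footprint[1, 1] = False                       # exclude the centre cell
        nb_max = maximum_filter(dem, footprint=footprint, mode="reflect")
        nb_min = minimum_filter(dem, footprint=footprint, mode="reflect")
        return dem > nb_max, dem < nb_min

    dem = np.array([[1, 1, 1, 1],
                    [1, 5, 1, 1],
                    [1, 1, 0, 1],
                    [1, 1, 1, 1]], dtype=float)
    peaks, pits = peaks_and_pits(dem)
    print(np.argwhere(peaks), np.argwhere(pits))      # peak at (1,1), pit at (2,2)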
Shadow Segmentation and Classification in a Constrained Environment
CVGIP: Image Understanding. Pub Date: 1994-03-01. DOI: 10.1006/ciun.1994.1014
Authors: Jiang C.X., Ward M.O.
Abstract: A shadow identification and classification method for real images is developed in this paper. The method is based on an extensive analysis of shadow intensity and shadow geometry in an environment with simple objects and a single area light source. The procedure for identifying shadows is divided into three processes: low level, middle level, and high level. The low-level process extracts dark regions from images; dark regions contain both shadows and surfaces with low reflectance. The middle-level process performs feature analysis on dark regions, including detecting vertices on the outlines of dark regions, identifying penumbrae in dark regions, classifying the subregions in dark regions as self-shadows or cast shadows, and finding object regions adjacent to dark regions. The high-level process integrates the information derived from the previous processes and confirms shadows among the dark regions.
Volume 59, Issue 2, Pages 213-225.
Citations: 67
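The low-level stage amounts to thresholding and connected-component labelling. A minimal sketch follows; the threshold value is an assumption, and the middle- and high-level analyses of penumbrae and geometry are not reproduced here.

    import numpy as np
    from scipy.ndimage import label

    def extract_dark_regions(gray, threshold=60):
        # Mark pixels darker than the threshold and label 8-connected
        # components; each component is a candidate dark region that the
        # later stages would confirm as shadow or reject as a dark surface.
        mask = gray < threshold
        labels, count = label(mask, structure=np.ones((3, 3)))
        return labels, count

    gray = np.full((8, 8), 200, dtype=np.uint8)
    gray[2:5, 2:5] = 30                        # a synthetic dark patch
    labels, count = extract_dark_regions(gray)
    print(count)                               # -> 1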