4th IEEE Southwest Symposium on Image Analysis and Interpretation: Latest Publications

MPEG-1 super-resolution decoding for the analysis of video still images
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839563
K. J. Erickson, R. Schultz
Abstract: A digital image sequence coded at a low bitrate using a motion-compensated video compression standard should contain little data redundancy. However, the success of a particular super-resolution enhancement algorithm is predicated on super-resolution overlap (i.e., redundancy) of moving objects from frame to frame. If an MPEG-1 bitstream is coded at a relatively high bitrate (e.g., a compression ratio of 15:1), enough data redundancy exists within the bitstream to successfully perform super-resolution enhancement within the decoder. Empirical results are presented in which decoded pictures from MPEG-1 bitstreams containing both global scene transformations and independent object motion are integrated to generate Bayesian high-resolution video still (HRVS) images. It is shown that additional spatial detail can be extracted by integrating several motion-compensated coded pictures, provided that a large number of subpixel-resolution overlaps (such as those captured by a reconnaissance airplane or surveillance satellite) are present among the original digitized video frames.
Cited by: 7
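To make the redundancy-exploiting idea concrete, here is a minimal shift-and-add fusion sketch, not the authors' Bayesian HRVS estimator: it assumes the per-frame subpixel shifts (which in their setting would come from MPEG-1 motion vectors) are already known, and the function name `shift_and_add`, the rounding, and the border clipping are choices of this sketch. Averaging wherever frames overlap is the simplest stand-in for the Bayesian integration the paper performs.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of HxW arrays; shifts: per-frame (dy, dx) subpixel
    offsets, in low-resolution pixels, assumed known."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each LR pixel at its subpixel position on the HR grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale),
                     0, h * scale - 1).astype(int)
        xs = np.clip(np.round((np.arange(w) + dx) * scale),
                     0, w * scale - 1).astype(int)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # average wherever frames overlap
```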
Pairwise Markov random fields and its application in textured images segmentation
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839581
W. Pieczynski, A. Tebbache
Abstract: The use of random fields, which allows one to take into account the spatial interaction among random variables in complex systems, is a frequent tool in numerous problems of statistical image processing, such as segmentation or edge detection. In statistical image segmentation, the model is generally defined by the probability distribution of the class field, which is assumed to be a Markov field, and the probability distributions of the observation field conditional on the class field. In such models the segmentation of textured images is difficult to perform, and one has to resort to some model approximations. The originality of our contribution is to consider the Markovianity of the pair (class field, observation field). We obtain a different model; in particular, the class field is not necessarily a Markov field. The proposed model makes possible the use of Bayesian methods such as MPM or MAP to segment textured images with no model approximations. In addition, the textured images can be corrupted with correlated noise. First simulations validating the proposed model are also presented.
Cited by: 26
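As a hedged illustration of MPM segmentation with pairwise interactions, the sketch below Gibbs-samples a label field under a simple Potts prior with Gaussian class likelihoods and takes the per-pixel mode. The energy form, `beta`, and per-class means are assumptions of this sketch, not the paper's pairwise-Markov model (whose point is precisely that the class field alone need not be Markov).

```python
import numpy as np

def mpm_segment(y, means, sigma=1.0, beta=1.0, n_iter=30, burn_in=10):
    """y: HxW observed image; means: per-class means -> HxW label field."""
    rng = np.random.default_rng(0)
    k, (h, w) = len(means), y.shape
    x = rng.integers(k, size=(h, w))
    votes = np.zeros((k, h, w))
    for it in range(n_iter):
        for i in range(h):
            for j in range(w):
                # Local energy: Gaussian data term + Potts prior (4-neighbours).
                nbrs = [x[a, b] for a, b in ((i-1,j),(i+1,j),(i,j-1),(i,j+1))
                        if 0 <= a < h and 0 <= b < w]
                e = np.array([(y[i, j] - means[c])**2 / (2 * sigma**2)
                              - beta * sum(n == c for n in nbrs)
                              for c in range(k)])
                p = np.exp(-(e - e.min()))
                x[i, j] = rng.choice(k, p=p / p.sum())
        if it >= burn_in:  # accumulate posterior marginal votes
            votes[x, np.arange(h)[:, None], np.arange(w)] += 1
    return votes.argmax(axis=0)  # MPM: most frequent label per pixel
```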
A integrated color-spatial image representation and the similar image retrieval
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839617
Yuehu Liu, S. Ozawa
Abstract: In this paper, we focus on similar-image retrieval based on color content. A two-phase matching strategy is adopted to achieve both speed and accuracy in the proposed retrieval: the first phase of matching produces a list of candidate images similar to the sample image, using the global color histogram; the second phase acts as a filter on the list obtained in the first phase, deleting dissimilar images using the spatial distribution of colors in the image. It is common knowledge that most images are dominated by a small number of colors, called dominant colors. We propose dominant color matrices for representing the spatial distribution of dominant colors in an image, which can serve as a basis for measuring the spatial similarity between images. To calculate the spatial similarity on dominant color matrices, we use a two-dimensional DP matching technique extended from conventional DP matching. In addition, we have applied the proposed approach to a range of color images and obtained positive results.
Cited by: 4
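A minimal sketch of the first, global matching phase under common assumptions: coarse joint RGB histograms compared by histogram intersection. The bin count and `top_k` are illustrative choices; the second, spatial phase (dominant color matrices with 2D DP matching) would then re-rank the returned candidate list.

```python
import numpy as np

def color_hist(img, bins=8):
    """img: HxWx3 uint8 array -> normalized joint RGB histogram."""
    h, _ = np.histogramdd(img.reshape(-1, 3),
                          bins=(bins,) * 3, range=[(0, 256)] * 3)
    return h / h.sum()

def candidates(query, database, top_k=20):
    """Phase 1: rank database images by histogram intersection."""
    hq = color_hist(query)
    scores = [np.minimum(hq, color_hist(im)).sum() for im in database]
    return np.argsort(scores)[::-1][:top_k]  # best matches first
```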
Viewpoint selection - a classifier independent learning approach
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839601
F. Deinzer, Joachim Denzler, H. Niemann
Abstract: This paper deals with an aspect of active object recognition: improving classification and localization results by choosing optimal next views of an object. Knowledge of "good" next views of an object is learned automatically, without supervision, from the results of the classifier used. For that purpose, methods of reinforcement learning are used in combination with numerical optimization. The major advantages of the presented approach are its classifier independence and that it requires no a priori assumptions about the objects. The presented results for synthetically generated images show that our approach is well suited for choosing optimal views of objects.
Cited by: 9
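A toy, hedged sketch of the reinforcement-learning idea: tabular Q-learning over discretized viewpoints, with a caller-supplied `classifier_confidence` callback standing in for the classifier's output as the reward. The paper works with continuous views and couples learning with numerical optimization; the discretization, episode structure, and hyperparameters below are assumptions of this sketch.

```python
import numpy as np

def learn_next_view(classifier_confidence, n_views=36, episodes=500,
                    alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """classifier_confidence(v) -> reward in [0, 1] for viewing angle bin v."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_views, n_views))       # Q[current view, next view]
    for _ in range(episodes):
        s = rng.integers(n_views)
        for _ in range(5):                 # short viewing sequences
            a = (rng.integers(n_views) if rng.random() < eps
                 else int(q[s].argmax())) # epsilon-greedy action choice
            r = classifier_confidence(a)   # reward: classifier certainty
            q[s, a] += alpha * (r + gamma * q[a].max() - q[s, a])
            s = a
    return q                               # argmax of row s = best next view
```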
Quantitative measurements in geometrically correct representations of coronary vessels in 3-D and 4-D
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839611
M. Olszewski, R. M. Long, S. C. Mitchell, A. Wahle, M. Sonka
Abstract: Two major imaging modalities are frequently used in current cardiovascular practice: biplane X-ray angiography and intravascular ultrasound (IVUS). We have previously developed a methodology for three-dimensional reconstruction of coronary vessels via a fusion of two-dimensional data from the two imaging modalities. Our data fusion technique provides a tool for accurate three- and four-dimensional measurements in coronary arteries in vivo. This paper documents recent additions that enable accurate three-dimensional volumetric and four-dimensional velocity measurements. Accurate three-dimensional volumetric measurements are of great use in the study of vessel diseases such as atherosclerosis, while vessel velocities are of great importance when determining adequate sampling rates in the design of new imaging hardware. Both methods have been validated in computer simulations, yielding minimal errors. The quantification of vessel velocity has been tested on routine patient data and provided results that were both consistent and in accordance with physiology.
Cited by: 7
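One such volumetric measurement might be sketched as follows, under stated assumptions: IVUS lumen cross-sectional areas integrated along the 3-D centerline reconstructed from biplane angiography, using the trapezoidal rule. The function and its inputs are illustrative, not the authors' validated pipeline.

```python
import numpy as np

def vessel_volume(centerline, areas):
    """centerline: Nx3 points (mm); areas: N lumen areas (mm^2) -> mm^3."""
    centerline = np.asarray(centerline, float)
    areas = np.asarray(areas, float)
    # Arc length between consecutive IVUS frames along the 3-D centerline.
    seg_len = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    # Trapezoidal rule: mean area of adjacent slices times segment length.
    return float(np.sum(0.5 * (areas[:-1] + areas[1:]) * seg_len))
```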
Characterization of skin lesion texture in diffuse reflectance spectroscopic images
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839589
M. Mehrübeoglu, N. Kehtarnavaz, G. Marquez, Lihong V. Wang
Abstract: This paper examines various texture features extracted from skin lesion images obtained by using diffuse reflectance spectroscopic imaging. Different image texture features have been applied to such images to separate precancerous from benign cases. These features are extracted based on the co-occurrence matrix, wavelet decomposition, fractal signature, and granulometric approaches. The results so far indicate that fractal and wavelet-based features are effective in distinguishing precancerous from benign cases.
Cited by: 9
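For concreteness, here is a self-contained sketch of one of the listed feature families: grey-level co-occurrence matrix (GLCM) statistics for horizontally adjacent pixel pairs. The quantization level and displacement are illustrative choices, not values from the paper.

```python
import numpy as np

def glcm_features(patch, levels=16):
    """patch: 2-D uint8 array -> (contrast, energy, homogeneity)."""
    q = (patch.astype(np.float64) * levels / 256).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of grey levels at displacement (0, 1).
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```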
Speckle noise filtering for restoration of coherent shear beam images
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839593
Mark P. Wilson, S. Mitra, T. Krile
Abstract: Shear beam imaging is a coherent reflective imaging technique which inherently contains signal-dependent speckle noise with characteristic glints. The removal of the speckle using a minimal number of frames while preserving the glints is the major concern of this paper. In the past, several methods have been used to eliminate this noise, with complexity ranging from a simple nonlinear median filter to complex linear and nonlinear models. Here, fast morphological and wavelet filters are proposed and shown to remove speckle better than the previous methods. The morphological filters are nonlinear in nature and computationally efficient, making them quite attractive. The glint is preserved by segmenting it from each frame prior to speckle removal. This paper describes the morphological, wavelet, and segmentation techniques used and discusses the current results.
Cited by: 0
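A hedged sketch of the glint-preserving scheme described: segment bright glints first, despeckle the remainder with a morphological open-close, then restore the glints. The threshold and structuring-element size below are assumptions; the paper additionally evaluates wavelet filters.

```python
import numpy as np
from scipy import ndimage

def despeckle_preserving_glints(img, glint_thresh=0.9):
    """img: 2-D float array -> despeckled image with glints restored."""
    glints = img > glint_thresh * img.max()        # crude glint segmentation
    opened = ndimage.grey_opening(img, size=(3, 3))   # suppress bright speckle
    smoothed = ndimage.grey_closing(opened, size=(3, 3))  # fill dark speckle
    return np.where(glints, img, smoothed)         # paste original glints back
```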
Head segmentation and head orientation in 3D space for pose estimation of multiple people
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839598
Sangho Park, J. Aggarwal
Abstract: We present an algorithm for establishing head orientations of multiple persons in 3D space. Using multiple features from grayscale images (i.e., binary blobs, silhouette contours, and intensity distributions), our algorithm achieves foreground separation, head segmentation, and head-orientation classification, respectively. The information is then combined to form an integrated representation of how the heads of multiple persons are configured in 3D space in order to describe their relative position. The algorithm classifies each head orientation, ranging from 0 to 360 degrees of rotation on a horizontal plane, into eight classes by using a moment-based method. The algorithm can be easily extended to video sequences of image frames for describing how head poses change over time in relation to each person involved in a scene. Experimental results are presented and illustrated.
Cited by: 18
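The moment computation and 45-degree binning might be illustrated as below, with a caveat: a principal-axis angle from second-order central moments is only defined up to 180 degrees, whereas the paper resolves the full 0 to 360 range using intensity distributions. This sketch shows only the moment-based angle estimate and its eight-way binning.

```python
import numpy as np

def head_orientation_class(mask):
    """mask: 2-D boolean head blob -> class index 0..7 (45-degree bins)."""
    ys, xs = np.nonzero(mask)
    y0, x0 = ys.mean(), xs.mean()
    # Second-order central moments of the blob.
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    # Principal-axis angle; note the 180-degree ambiguity of this estimate.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    angle = np.degrees(theta) % 360
    return int(angle // 45)
```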
Tubular objects network detection from 3D images
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839579
Nicolas Flasque, M. Desvignes, M. Revenu, J. Constans
Abstract: We present an approach to the tree representation of a tubular object network. The full 3D tracking algorithm for a single tubular structure is detailed. Detection of bifurcations by a connectivity approach is then presented. We show subvoxel accuracy and reliable orientation estimation for the tracking process on synthetic images. Bifurcations are also well detected on a complex synthetic image. Finally, applications of this method to real 3D medical images are shown. The method is particularly suited to processing magnetic resonance angiography of the brain and neck.
Cited by: 2
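A single tracking step for one tubular structure might look like the hedged sketch below: advance along the current direction, then re-center on the intensity centroid of a small window to obtain subvoxel accuracy. The recentring heuristic and window radius are assumptions of this sketch, not the authors' exact scheme.

```python
import numpy as np

def track_step(vol, p, d, step=1.0, r=2):
    """vol: 3-D intensity array; p: current point (z, y, x); d: unit
    direction. Assumes the window stays inside the volume."""
    guess = np.asarray(p, float) + step * np.asarray(d, float)
    zi, yi, xi = np.round(guess).astype(int)
    win = vol[zi-r:zi+r+1, yi-r:yi+r+1, xi-r:xi+r+1].astype(float)
    zz, yy, xx = np.mgrid[-r:r+1, -r:r+1, -r:r+1]
    # Subvoxel recentring: intensity centroid of the local window.
    off = np.array([(zz * win).sum(), (yy * win).sum(),
                    (xx * win).sum()]) / win.sum()
    centre = np.array([zi, yi, xi], float) + off
    d_next = centre - np.asarray(p, float)
    return centre, d_next / np.linalg.norm(d_next)
```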
A method for calculation of visual information on image
4th IEEE Southwest Symposium on Image Analysis and Interpretation. Pub Date: 2000-04-02. DOI: 10.1109/IAI.2000.839595
F. Kobayashi, M. Tomita, T. Ozeki
Abstract: In this paper, a method to calculate the value of visual information in an image is proposed. The number of perceptible brightness steps and the number of minimum perceptible area divisions at photopic vision are obtained from the properties of human vision. These values correspond to the length and the kind of sign in information theory. First, the capacity of visual information in an image is calculated. Then, the amount of visual information in the image is calculated. We applied the method to an actual image and calculated the value of visual information that an observer receives from a visual object. Consequently, the capacity of visual information in an image can be obtained quantitatively using the proposed method.
Cited by: 0
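The capacity idea can be made concrete with a small worked sketch: if an image subtends n minimum perceptible areas and each can take one of m perceptible brightness steps, its visual-information capacity is n * log2(m) bits. All numeric values below are placeholders, not figures from the paper.

```python
import math

min_area_deg2 = (1.0 / 60) ** 2   # assumed 1-arcmin minimum perceptible area
image_area_deg2 = 10.0 * 10.0     # assumed image subtending 10x10 degrees
m_levels = 64                     # assumed perceptible brightness steps

n_areas = image_area_deg2 / min_area_deg2
capacity_bits = n_areas * math.log2(m_levels)   # capacity = n * log2(m)
print(f"visual information capacity: {capacity_bits:.3e} bits")
```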