2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP): Latest Publications

Breast cancer detection using spectral probable feature on thermography images
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779961
Rozita Rastghalam, H. Pourghassem
Abstract: Thermography is a noninvasive, non-radiating, fast, and painless imaging technique that can detect breast tumors much earlier than traditional mammography. In this paper, a novel breast cancer detection algorithm based on spectral probable features is proposed to separate healthy and pathological cases during breast cancer screening. A gray-level co-occurrence matrix is built from the image spectrum to obtain a spectral co-occurrence feature. Since this feature alone is not sufficient, the matrix is optimized to extract directional and probable features from the image spectrum, which are defined as a feature vector. In an asymmetry analysis, the left and right breast feature vectors are compared; greater similarity between the two vectors indicates healthy breasts. The method is applied to breast thermograms generated by several thermography centers and evaluated with different similarity measures, including Euclidean distance, correlation, and chi-square. The obtained results show the effectiveness of the proposed algorithm.
Citations: 4
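The asymmetry analysis above compares left- and right-breast feature vectors with Euclidean distance, correlation, and chi-square. A minimal sketch of these three similarity measures (the feature vectors here are invented toy data, not the paper's spectral features):

```python
import math

def euclidean(u, v):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def correlation(u, v):
    # Pearson correlation coefficient: 1.0 means perfectly similar shape.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = math.sqrt(sum((a - mu) ** 2 for a in u))
    dv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv)

def chi_square(u, v):
    # Symmetric chi-square distance between two non-negative histograms.
    return sum((a - b) ** 2 / (a + b) for a, b in zip(u, v) if a + b > 0)

left = [0.30, 0.25, 0.25, 0.20]   # hypothetical left-breast feature vector
right = [0.28, 0.27, 0.24, 0.21]  # hypothetical right-breast feature vector
print(euclidean(left, right))     # small distance suggests symmetry
print(correlation(left, right))
print(chi_square(left, right))
```

Under the paper's asymmetry assumption, small distances (or correlation near 1) across these measures indicate symmetric, likely healthy breasts.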
A new feature extraction method from dental X-ray images for human identification
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780018
Faranak Shamsafar
Abstract: Dental radiography is an alternative approach to identifying a deceased person, especially in cases where other biometric traits are unavailable. This paper proposes a new method for extracting features from dental radiography images to identify people. First, dental works are segmented in the X-ray images using image processing techniques. Then, a radius vector function and a support function are extracted for each segmented region. These functions are independent of image translation, and the presented algorithm modifies both to be invariant under image rotation as well. Normalizing the functions also resolves the problems caused by image scale variations. Translation, rotation, and scale variations are the basic challenges when dental features are compared in the spatial domain. Experiments show suitable recognition accuracy for the proposed approach, which does not require teeth alignment at the matching stage.
Citations: 11
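The abstract describes a radius vector function made invariant to translation, rotation, and scale. A minimal sketch of one common way to achieve this (the normalization conventions here are illustrative assumptions, not necessarily the paper's):

```python
import math

def radius_vector_function(boundary):
    # boundary: list of (x, y) points along a segmented region's contour.
    cx = sum(p[0] for p in boundary) / len(boundary)
    cy = sum(p[1] for p in boundary) / len(boundary)
    # Distance from the centroid to each boundary point: translation invariant.
    r = [math.hypot(x - cx, y - cy) for x, y in boundary]
    # Scale invariance: normalize by the maximum radius.
    m = max(r)
    r = [v / m for v in r]
    # Rotation invariance (one simple convention): cyclically shift so the
    # largest radius comes first.
    k = r.index(max(r))
    return r[k:] + r[:k]

square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(radius_vector_function(square))  # all radii equal after normalization
```

Translating the same contour anywhere in the image leaves this descriptor unchanged, which is the property the paper relies on to avoid teeth alignment at matching time.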
Robust watershed segmentation of moving shadows using wavelets
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780015
E. Shabaninia, A. Naghsh-Nilchi
Abstract: Segmentation of moving objects in a video sequence is a primary task in many computer vision applications. However, shadows extracted along with the objects can cause large errors in object localization and recognition. We propose a novel method of moving shadow detection using wavelets and the watershed segmentation algorithm, which can effectively separate the cast shadows of moving objects in a scene obtained from a video sequence. The wavelet transform is used to de-noise and enhance the edges of the foreground image and to obtain an enhanced gradient image. The watershed transform is then applied to the gradient image to segment the different parts of the object, including shadows. Finally, a post-processing step marks segmented parts whose chromaticity is close to the background reference as shadows. Experimental results on two datasets demonstrate the efficiency and robustness of the proposed approach.
Citations: 3
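The final post-processing step marks segments whose chromaticity is close to the background reference as shadows. A minimal per-pixel sketch of that idea (the tolerance and darkness-ratio thresholds are invented for illustration):

```python
def is_shadow(pixel, background, chroma_tol=0.05, dark_ratio=(0.4, 0.95)):
    # pixel, background: (R, G, B) tuples for a foreground pixel and the
    # background reference at the same location.
    ps, bs = sum(pixel), sum(background)
    if ps == 0 or bs == 0:
        return False
    # Normalized chromaticity (r, g): a cast shadow keeps the background's
    # chromaticity while darkening the overall intensity.
    pr, pg = pixel[0] / ps, pixel[1] / ps
    br, bg = background[0] / bs, background[1] / bs
    chroma_close = abs(pr - br) < chroma_tol and abs(pg - bg) < chroma_tol
    lo, hi = dark_ratio
    darker = lo <= ps / bs <= hi
    return chroma_close and darker

bg = (120, 110, 100)
print(is_shadow((84, 77, 70), bg))   # darker, same chromaticity: shadow
print(is_shadow((30, 150, 40), bg))  # different chromaticity: object
```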
Texture classification using dominant gradient descriptor
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779958
Maryam Mokhtari, Parvin Razzaghi, S. Samavi
Abstract: Texture classification is an important part of many object recognition algorithms. In this paper, a new approach to texture classification is proposed. Recently, the local binary pattern (LBP) has been widely used in texture classification, but conventional LBP considers neither directional statistical features nor color information. To extract the color information of textures, we use color LBP; to capture directional statistics, we propose the histogram of dominant gradients (HoDG). In HoDG, the image is divided into blocks and the dominant gradient orientation of each block is extracted; the histogram of these dominant orientations describes the edges and orientations of the texture image. By coupling color LBP with HoDG, a new rotation-invariant texture classification method is presented. Experimental results on the CUReT database show that the proposed method is superior to comparable algorithms.
Citations: 5
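The HoDG idea — divide the image into blocks, find each block's dominant gradient orientation, and histogram those orientations — can be sketched as follows (block size, bin count, and the simple forward-difference gradient are illustrative choices, not the paper's exact parameters):

```python
import math

def dominant_gradient_histogram(image, block=2, n_bins=8):
    # image: 2-D list of grayscale values. For each block, accumulate
    # gradient magnitude into orientation bins, take the strongest bin as
    # the block's dominant orientation, then histogram these over blocks.
    h, w = len(image), len(image[0])
    hist = [0] * n_bins
    for by in range(0, h - 1, block):
        for bx in range(0, w - 1, block):
            bins = [0.0] * n_bins
            for y in range(by, min(by + block, h - 1)):
                for x in range(bx, min(bx + block, w - 1)):
                    gx = image[y][x + 1] - image[y][x]
                    gy = image[y + 1][x] - image[y][x]
                    mag = math.hypot(gx, gy)
                    ang = math.atan2(gy, gx) % math.pi  # orientation in [0, pi)
                    bins[min(int(ang / math.pi * n_bins), n_bins - 1)] += mag
            hist[bins.index(max(bins))] += 1
    return hist

# A vertical edge: every block's dominant gradient is horizontal (bin 0).
img = [[0, 0, 9, 9]] * 4
print(dominant_gradient_histogram(img))
```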
Nonrigid registration of breast MR images using residual complexity similarity measure
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779987
Azam Hamidi Nekoo, A. Ghaffari, E. Fatemizadeh
Abstract: Eliminating motion artifacts in breast MR images is an important pre-processing step before the images are used for diagnosis. Breast MR images are affected by slowly varying intensity distortions caused by contrast agent enhancement, so a nonrigid registration algorithm that accounts for this effect is needed. Traditional similarity measures such as the sum of squared differences (SSD) and cross correlation ignore this distortion, so efficient registration is not obtained. Residual complexity (RC) is a similarity measure that handles spatially varying intensity distortions by maximizing the sparseness of the residual image. In this work, nonrigid registration based on the RC, SSD, and cross-correlation similarity measures is compared; the results show that RC is more robust and accurate than the other similarity measures for breast MR images.
Citations: 0
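A toy 1-D sketch of why RC behaves differently from SSD. The form used here — transform the residual with a DCT and sum a log penalty over the coefficients, so a smooth (sparse-in-DCT) residual is cheap — follows the usual formulation of residual complexity; the exact functional form and `alpha` value are assumptions, not taken from this paper:

```python
import math

def dct2_coeffs(r):
    # Naive 1-D DCT-II of the residual vector r (illustrative, O(n^2)).
    n = len(r)
    return [sum(r[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def residual_complexity(fixed, moving, alpha=0.05):
    # Penalize residuals with many significant DCT coefficients; a smooth,
    # slowly varying residual (e.g. contrast-agent enhancement) costs little.
    residual = [f - m for f, m in zip(fixed, moving)]
    return sum(math.log(c * c / alpha + 1) for c in dct2_coeffs(residual))

def ssd(fixed, moving):
    return sum((f - m) ** 2 for f, m in zip(fixed, moving))

fixed = [1.0, 2.0, 3.0, 4.0]
smooth = [f + 0.5 for f in fixed]   # constant additive distortion
noisy = [1.5, 1.5, 3.5, 3.5]        # same SSD, less coherent residual
print(ssd(fixed, smooth), ssd(fixed, noisy))                              # equal
print(residual_complexity(fixed, smooth), residual_complexity(fixed, noisy))
```

SSD cannot tell the two cases apart, while RC scores the smooth distortion lower — the property the abstract credits for RC's robustness on contrast-enhanced breast MR.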
Fusion of SPECT and MRI images using back and fore ground information
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779984
Behzad Nobariyan, S. Daneshvar, M. Hosseinzadeh
Abstract: Perceiving and diagnosing disorders from a single photon emission computed tomography (SPECT) image alone is difficult because the image contains no anatomical information. Studies have therefore sought to enhance SPECT images with magnetic resonance imaging (MRI) and image fusion methods, so that the fused image contains both functional and anatomical information. An MRI image shows brain tissue anatomy with high spatial resolution but carries no functional information; SPECT shows brain function but has low spatial resolution. Fusing SPECT and MRI images therefore yields a high-spatial-resolution image, but the fusion can introduce spatial and spectral distortions. Substitution methods such as IHS preserve spatial information, while multi-resolution fusion methods such as the wavelet transform preserve spectral information. This article presents a method that preserves both spatial and spectral information well and minimizes the distortions of the fused image relative to other methods.
Citations: 3
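The substitution family the abstract mentions (IHS) can be sketched per pixel as: keep the SPECT pixel's chromatic ratios (its spectral content) but substitute the high-resolution MRI value as the new intensity. This simplified ratio form of intensity substitution is an illustration of the idea, not the paper's proposed method:

```python
def ihs_substitution(spect_rgb, mri_intensity):
    # spect_rgb: list of (R, G, B) SPECT pixels (functional, low resolution).
    # mri_intensity: list of MRI grayscale values (anatomical, high resolution).
    fused = []
    for (r, g, b), i_new in zip(spect_rgb, mri_intensity):
        i_old = (r + g + b) / 3.0          # simple intensity component
        s = i_new / i_old if i_old else 0.0
        # Scaling all channels equally preserves the chromatic ratios while
        # adopting the MRI intensity.
        fused.append((r * s, g * s, b * s))
    return fused

spect = [(30, 60, 90)]   # one low-resolution functional pixel
mri = [120]              # the co-located high-resolution anatomical value
print(ihs_substitution(spect, mri))
```

The known weakness of pure substitution — and the motivation for the paper's hybrid approach — is that forcing the MRI intensity can distort the spectral (color) content when the two modalities disagree.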
Geometric modeling of the wavelet coefficients for image watermarking
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779944
Mohammad Hamghalam, S. Mirzakuchaki, M. Akhaee
Abstract: In this paper, a robust image watermarking method based on geometric modeling is presented. Eight samples of wavelet approximation coefficients in each image block are used to construct two line segments in 2-D space, and the angle formed between these line segments is changed for data embedding. Geometrical tools are used to resolve the tradeoff between the transparency and robustness of the watermark data. Because the data is embedded in the angle between two line segments, the proposed scheme has high robustness against gain attacks. In addition, by using the low-frequency components of the image blocks for data embedding, high robustness against noise and compression attacks is achieved. Experimental results confirm the validity of the theoretical analyses given in the paper and show the superiority of the proposed method against common attacks such as Gaussian filtering, median filtering, and scaling.
Citations: 3
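Embedding a bit in an angle can be sketched with quantization index modulation: snap the angle to a quantizer lattice chosen by the bit, and detect by finding the nearer lattice. This is a generic illustration of angle-based embedding and its gain invariance, not the paper's specific scheme (the step size is an invented parameter):

```python
def embed_bit(angle, bit, step=0.1):
    # Quantize the angle (radians) to the lattice for `bit`: bit 0 uses
    # multiples of `step`, bit 1 the lattice shifted by step/2.
    offset = 0.0 if bit == 0 else step / 2.0
    return round((angle - offset) / step) * step + offset

def extract_bit(angle, step=0.1):
    # Decide which lattice the received angle is closer to.
    d0 = abs(angle - embed_bit(angle, 0, step))
    d1 = abs(angle - embed_bit(angle, 1, step))
    return 0 if d0 <= d1 else 1

a = 0.73  # angle between the two line segments of a block (toy value)
for bit in (0, 1):
    watermarked = embed_bit(a, bit)
    print(bit, watermarked, extract_bit(watermarked))
```

A pure gain attack scales both line segments' coefficients equally, leaving the angle between them unchanged — which is exactly why the abstract claims robustness to gain attacks.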
Facial expression recognition using sparse coding
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779968
Maryam Abdolali, M. Rahmati
Abstract: In this paper, a sparse coding approach to facial expression recognition is proposed. Because the frequency and orientation representations of Gabor filters resemble those of the human visual system, Gabor filters are used in the dictionary-creation step. It is shown that not all Gabor filters in a typical Gabor bank are necessary or efficient for facial expression recognition. We also propose a voting system in the test phase of the algorithm to find the best-matching expression. The well-known JAFFE database is used to evaluate the proposed method, and our experimental results on this database are encouraging.
Citations: 7
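A Gabor bank like the one used for the dictionary is built from kernels of the standard form — a sinusoidal carrier windowed by a Gaussian envelope. A minimal sketch of one real-valued kernel (the parameter values are arbitrary examples, not the paper's bank):

```python
import math

def gabor_kernel(size, theta, wavelength, sigma=2.0, gamma=0.5):
    # Real part of a Gabor filter: a cosine carrier at orientation `theta`
    # and the given wavelength, windowed by an elliptical Gaussian envelope.
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, theta=0.0, wavelength=4.0)
print(len(k), len(k[0]))  # 7 7
print(k[3][3])            # 1.0 at the centre (envelope and carrier both peak)
```

A typical bank varies `theta` and `wavelength` over a grid; the paper's point is that only a subset of such a bank is actually useful for expression recognition.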
An intelligent and real-time system for plate recognition under complicated conditions
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779988
Mohammad Salahshoor, A. Broumandnia, M. Rastgarpour
Abstract: A vehicle plate recognition (VPR) algorithm for images and videos usually consists of three steps: 1) extraction of the plate region (plate localization), 2) segmentation of the plate characters, and 3) recognition of each character. This paper presents new real-time methods for each step. We use a Detector for the Blue Area (DBA) to locate the plate, Averaging of White Pixels in Objects (AWPO) for character segmentation, and, after training, Euclidean distance with template matching for character recognition. The system was tested on 250 vehicle images with different backgrounds and non-uniform conditions. The proposed system is robust against challenges such as illumination and distance changes, different angles between camera and vehicle, and the presence of shadow, scratches, and dirt on the plates. The accuracy rates for the three stages are 91.6%, 89%, and 95.09%, respectively, and recognition takes 2.3 seconds per plate.
Citations: 5
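The first stage, detecting the blue area of the plate, amounts to a per-pixel color test. A minimal sketch of such a detector (the dominance margin is an invented threshold; the paper's DBA is not specified in this abstract):

```python
def blue_mask(rgb_image, margin=30):
    # Mark pixels whose blue channel clearly dominates both red and green --
    # a crude detector for the blue band on a licence plate.
    return [[1 if b > r + margin and b > g + margin else 0
             for (r, g, b) in row]
            for row in rgb_image]

img = [[(200, 200, 210), (20, 40, 160)],
       [(90, 90, 90), (10, 30, 200)]]
print(blue_mask(img))  # only the strongly blue pixels are marked
```

In a full pipeline, connected regions of this mask would then be filtered by size and aspect ratio to localize the plate before segmentation.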
HaFT: A handwritten Farsi text database
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP) Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779956
Reza Safabaksh, A. Ghanbarian, Golnaz Ghiasi
Abstract: Standard databases allow different researchers to evaluate and compare pattern recognition techniques, and are thus essential for the advance of research. Handwritten databases exist for various languages, but there has been no large standard database of handwritten text for evaluating writer identification and verification algorithms in Farsi. This paper introduces a large handwritten Farsi text database called HaFT. The database contains 1800 grayscale images of unconstrained text written by 600 writers. Each participant provided three separate eight-line samples of his or her handwriting, each written at a different time on a separate sheet. HaFT is presented in several versions, each including different lengths of text and using identical or different writing instruments. A new measure, called CVM, is defined, which effectively reflects the size of the handwriting and thus the content volume of a given text image. The database is designed for training and testing Farsi writer identification and verification using handwritten text, and can also be used to train and test handwritten Farsi text segmentation and recognition algorithms. HaFT is available for research use.
Citations: 10