2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA): latest publications

Texture features based on the use of the Hough transform and income inequality metrics
H. Elsaid, G. Thomas, Dexter Williams
DOI: 10.1109/IPTA.2017.8310094
Abstract: Texture analysis of digital images has potential applications in image segmentation and classification, and the quality of the texture features can significantly determine the outcome of both. For images that consist of textures defined as patterns of straight lines, features extracted from the Gray Level Co-occurrence Matrix (GLCM) are a popular choice; however, for each line with a particular slope, one has to define a different predicate so that the matrix can capture that part of the texture. The Hough transform, on the other hand, is a popular technique for detecting lines that appear at different angles. We propose a novel way to extract texture information from the Hough accumulator using four income inequality metrics for patterns consisting of lines at different angles. We show that, compared to four common texture metrics extracted from the GLCM, these new features can offer better quality. We use a feature selection algorithm and a classification example to illustrate the results obtained with these new income inequality texture metrics.
Citations: 2
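The abstract does not name the four income inequality metrics. The Gini coefficient is the most common such metric, so as an illustrative sketch (not the authors' implementation), here is how it could be computed over the vote counts of a Hough accumulator, where a strongly line-structured texture concentrates votes in a few angle/distance bins:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values.
    0 means a perfectly even distribution; values near 1 mean a few
    bins hold most of the mass."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    if v.sum() == 0:
        return 0.0
    # Standard formula over the sorted values' rank-weighted sum.
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * v) / (n * v.sum())) - (n + 1.0) / n

# Toy Hough accumulators (votes per angle bin): a texture of strong
# lines concentrates votes in few bins; a flat texture spreads them.
peaked = np.zeros(180)
peaked[45] = 100.0
peaked[135] = 80.0
flat = np.ones(180)

print(gini(peaked) > gini(flat))  # → True
```

A line-dominated accumulator yields a Gini near 1 while a flat one yields a Gini near 0, which is exactly the kind of distributional summary such a feature exploits.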
Comparison of CNN and MLP classifiers for algae detection in underwater pipelines
E. Medina, M. R. Petraglia, J. Gomes, A. Petraglia
DOI: 10.1109/IPTA.2017.8310098
Abstract: Artificial neural networks, such as the multilayer perceptron (MLP), have been increasingly employed in various applications. Recently, deep neural networks, especially convolutional neural networks (CNNs), have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. This article describes a visual inspection system based on deep learning and computer vision algorithms for the detection of algae in underwater pipelines. The proposed algorithm comprises a CNN or an MLP network, followed by a post-processing stage operating in the spatial and temporal domains, employing clustering of neighboring detection positions and a region-interception framebuffer. The performances of MLP classifiers employing different descriptors and of CNN classifiers are compared in real-world scenarios. It is shown that the post-processing stage considerably decreases the number of false positives, resulting in an accuracy rate of 99.39%.
Citations: 18
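The post-processing idea of clustering neighboring detection positions to suppress isolated false positives can be illustrated with a minimal greedy sketch; the radius and minimum cluster size are assumed values, not the paper's parameters:

```python
import numpy as np

def cluster_detections(points, radius=10.0, min_size=2):
    """Greedy spatial clustering of detection positions: a point closer
    than `radius` to an existing cluster centre joins it, otherwise it
    starts a new cluster. Clusters with fewer than `min_size` members
    are discarded as likely false positives."""
    clusters = []  # each cluster is a list of member points
    for p in np.asarray(points, dtype=float):
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters if len(c) >= min_size]

dets = [(10, 10), (12, 11), (11, 9), (80, 80)]  # one isolated outlier
print(len(cluster_detections(dets)))  # → 1 (the lone detection is dropped)
```

The same filtering principle extends to the temporal domain by requiring a detection to persist across several frames before it is accepted.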
Towards light-compensated saliency prediction for omnidirectional images
Sourodeep Biswas, Sid Ahmed Fezza, M. Larabi
DOI: 10.1109/IPTA.2017.8310119
Abstract: Omnidirectional (360-degree) images are becoming very popular in many applications, and the nature and representation of the data raise several challenges. Saliency prediction for such content runs into problems linked to geometric distortions, lighting variation, etc. In this paper, we propose a saliency model that takes advantage of the large literature on 2D saliency and adds three major adjustments related to the nature of 360-degree images: 1) illumination normalization to account for the variability of lighting over the scene, 2) distortion compensation to handle the conversion from the sphere to the equirectangular representation, and 3) an equator bias to incorporate the perceptual property that the human gaze is biased towards the equator line. The obtained results show an improvement in 2D saliency performance when these adjustments are applied to omnidirectional images.
Citations: 7
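Adjustment 3), the equator bias, can be sketched as weighting each row of the equirectangular saliency map by a vertical Gaussian centered on the equator row. The Gaussian width used below is an assumed parameter, not taken from the paper:

```python
import numpy as np

def equator_bias(saliency, sigma_frac=0.2):
    """Weight an equirectangular saliency map by a vertical Gaussian
    centred on the equator row. `sigma_frac` is the Gaussian sigma as
    a fraction of the image height (an illustrative choice)."""
    h, w = saliency.shape
    rows = np.arange(h) - (h - 1) / 2.0  # 0 at the equator row
    weights = np.exp(-(rows ** 2) / (2.0 * (sigma_frac * h) ** 2))
    return saliency * weights[:, None]

sal = np.ones((90, 180))      # a uniform toy saliency map
biased = equator_bias(sal)
# Rows near the equator keep their saliency; the poles are attenuated.
print(biased[45].mean() > biased[0].mean())  # → True
```

In a full model this weighting would be applied after illumination normalization and distortion compensation, multiplying rather than replacing the content-driven saliency.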
Correlation-based 2D registration method for single particle cryo-EM images
N. A. Anoshina, A. Krylov, D. Sorokin
DOI: 10.1109/IPTA.2017.8310125
Abstract: The amount of image data generated in single-particle cryo-electron microscopy (cryo-EM) is huge. This technique reconstructs the 3D model of a particle from its 2D projections, and the most common way to reduce the noise in particle projection images is averaging, which requires aligning the projections first. In this work, we propose a fast 2D rigid registration approach for the alignment of particle projections in single-particle cryo-EM. We use cross-correlation in the Fourier domain combined with a polar transform to find the rotation angle, invariant to the shift between the images. For translation vector estimation we use a fast version of upsampled image correlation. Our approach was evaluated on a specifically created synthetic dataset and compared experimentally with an iterative method widely used in existing software. In addition, it was successfully applied to a real dataset from the Electron Microscopy Data Bank (EMDB).
Citations: 1
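The translation-estimation half of such a method rests on the fact that circular cross-correlation can be computed quickly in the Fourier domain. A minimal integer-shift sketch (omitting the paper's polar transform for rotation and the upsampling for sub-pixel accuracy) could look like:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer translation between two equally sized
    images via cross-correlation in the Fourier domain. Returns
    (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1)) best
    aligns b with a."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = int(dy), int(dx)
    h, w = a.shape
    # Map circular peak positions to signed offsets.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, 5), axis=(0, 1))   # b is a shifted copy of a
print(estimate_shift(a, b))           # → (-3, -5)
```

The single FFT-based correlation replaces an exhaustive sliding-window search, which is what makes this family of registration methods fast.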
Single parameter post-processing method for image deblurring
A. Krylov, A. Nasonov, Yakov Pchelintsev
DOI: 10.1109/IPTA.2017.8310093
Abstract: Numerous algorithms exist for the deconvolution of blurred images, but due to the ill-posed nature of deconvolution, many images remain blurry after deblurring. An edge-sharpening algorithm is proposed in this paper to further improve the quality of blurry images in edge areas. The method is based on pixel grid warping; its main idea is to move pixels in the direction of the nearest image edges. Warping makes edges sharper while keeping textured areas almost intact. An experimental analysis for different optical blur models is performed to optimize the parameters of the proposed method and to show its effectiveness.
Citations: 5
Image matching using GPT correlation associated with simplified HOG patterns
Shizhi Zhang, T. Wakahara, Yukihiko Yamashita
DOI: 10.1109/IPTA.2017.8310122
Abstract: GAT (Global Affine Transformation) and GPT (Global Projection Transformation) matching, proposed by Wakahara and Yamashita, calculate the optimal affine transformation (AT) and 2D projection transformation (PT), respectively. These image matching criteria realize deformation-tolerant matching by maximizing the normalized cross-correlation between a template and a GAT/GPT-superimposed image. To shorten the computation time, Wakahara and Yamashita also proposed acceleration algorithms for GAT/GPT matching. Later, Wakahara et al. proposed the enhanced GPT matching, which calculates the optimal PT parameters simultaneously and overcomes incompatibilities in the matching process. Zhang et al. observed that these matching techniques do not account for conservation of the L2 norm, and introduced norm normalization factors that realize accurate and stable matching. All these correlation-based matching techniques are well suited to "whole-to-whole" image matching, but are weak at "whole-to-part" image matching, being hampered by complex backgrounds and noise. This research first proposes simplified HOG patterns for the enhanced GPT matching with norm normalization, to obtain robustness against noise and background. Second, it proposes an acceleration algorithm for the proposed matching criterion based on several reference tables. Experiments on the Graffiti dataset show that the proposed method exhibits outstanding matching ability compared with the original GPT correlation matching and the well-known combination of the SURF feature descriptor and the RANSAC algorithm. Furthermore, the computational complexity of the proposed method is significantly reduced, to below double figures, via the acceleration algorithm.
Citations: 4
An automatic detection of helmeted and non-helmeted motorcyclist with license plate extraction using convolutional neural network
J. Mistry, Aashish Kumar Misraa, Meenu Agarwal, Ayushi Vyas, Vishal M. Chudasama, Kishor P. Upla
DOI: 10.1109/IPTA.2017.8310092
Abstract: Detection of helmeted and non-helmeted motorcyclists is mandatory nowadays in order to ensure the safety of riders on the road. However, due to many constraints such as poor video quality, occlusion, illumination, and other varying factors, it is very difficult to detect them accurately. In this paper, we introduce an approach for the automatic detection of helmeted and non-helmeted motorcyclists using a convolutional neural network (CNN). During the past several years, advancements in deep learning models have drastically improved the performance of object detection. One such model is YOLOv2 [1], which combines classification and object detection in a single architecture. Here, we use YOLOv2 at two different stages, one after another, to improve helmet detection accuracy. At the first stage, the YOLOv2 model is used to detect different objects in the test image; since this model is trained on the COCO dataset, it can detect all classes of that dataset. In the proposed approach, we detect the person class instead of the motorcycle class in order to increase the accuracy of helmet detection in the input image. The cropped images of detected persons are used as input to the second YOLOv2 stage, which was trained on our dataset of helmeted images. The non-helmeted images are processed further to extract the license plate using OpenALPR. The proposed approach thus uses two different datasets, i.e., the COCO and helmet datasets. We tested the potential of our approach on different helmeted and non-helmeted images. Experimental results show that the proposed method performs better than other existing approaches, with 94.70% helmet detection accuracy.
Citations: 32
Unsupervised data analysis for virus detection with a surface plasmon resonance sensor
Dominic Siedhoff, M. Strauch, V. Shpacovitch, D. Merhof
DOI: 10.1109/IPTA.2017.8310145
Abstract: We propose an unsupervised approach for virus detection with a biosensor based on surface plasmon resonance. A column-based non-negative matrix factorisation (NNCX) serves to select virus-candidate time series from the spatio-temporal data. The candidates are then separated into true virus adhesions and false-positive NNCX responses by fitting a constrained virus model function. In an evaluation on ground-truth data, our unsupervised approach compares favourably to a previously published supervised approach that requires more parameters.
Citations: 4
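A column-based factorisation selects a few informative columns (here, candidate time series) directly from the data matrix. As a rough stand-in for NNCX, not the paper's algorithm, a greedy selection that repeatedly picks the column with the largest residual norm and projects it out conveys the idea:

```python
import numpy as np

def select_columns(X, k):
    """Greedy column subset selection: pick the column with the
    largest residual norm, then remove its direction from the
    residual, k times. A simplified illustration of column-based
    factorisation, not the NNCX algorithm itself."""
    R = X.astype(float).copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        c = R[:, j:j + 1]
        denom = float(c.T @ c)
        if denom == 0.0:
            break
        R = R - c @ (c.T @ R) / denom  # project out the chosen column
    return chosen

# Toy data: columns 2 and 5 carry signal, the rest are zero.
X = np.zeros((10, 8))
X[:, 2] = 5.0
X[0, 5] = 3.0
print(select_columns(X, 2))  # → [2, 5]
```

In the paper's pipeline the selected candidates are then vetted by fitting a constrained virus model function, which this sketch does not cover.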
Improving a deep learning based RGB-D object recognition model by ensemble learning
Andreas Aakerberg, Kamal Nasrollahi, Thomas Heder
DOI: 10.1109/IPTA.2017.8310101
Abstract: Augmenting RGB images with depth information is a well-known way to significantly improve the accuracy of object recognition models. Another way to improve visual recognition models is ensemble learning, but this method has not been widely explored in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models and show that this can push recognition performance beyond existing limits. Experiments on the Washington RGB-D Object Dataset show that our best-performing ensemble improves recognition performance by 0.7% compared to the baseline model alone.
Citations: 13
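One standard way to combine such complementary models is soft voting over their predicted class probabilities; the paper's exact fusion rule is not given in the abstract, so this is a generic sketch:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting ensemble: average the per-model class-probability
    matrices (one row per sample, one column per class) and take the
    argmax class per sample."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Two toy "models" disagree on sample 0; averaging resolves it in
# favour of the more confident prediction.
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.1, 0.9], [0.3, 0.7]])
print(ensemble_predict([m1, m2]))  # → [1 1]
```

Soft voting tends to outperform hard (majority) voting when the member models are well calibrated, because confidently correct models can outvote weakly wrong ones.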
Sample-based regularization for support vector machine classification
D. Tran, Muhammad-Adeel Waris, M. Gabbouj, Alexandros Iosifidis
DOI: 10.1109/IPTA.2017.8310103
Abstract: In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates at the training-sample level. The proposed approach is motivated by the fact that maximum-margin classification defines decision functions as a linear combination of the selected training data, and thus variations in training sample selection directly affect generalization performance. We show that the proposed regularization scheme is well motivated and intuitive. Experimental results show that it outperforms the standard SVM on human action recognition tasks as well as classical recognition problems.
Citations: 0