2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA): Latest Publications

An automated method for realistic face simulation and facial landmark annotation and its application to active appearance models
M. Kopaczka, C. Hensel, D. Merhof
DOI: https://doi.org/10.1109/IPTA.2016.7820979
Abstract: Algorithms for facial landmark detection in real-world images require manually annotated training databases. However, selecting or creating the images and annotating the data is extremely time-consuming, leaving researchers with the choice of either investing significant amounts of time in creating annotated images optimized for the given task or forgoing such hand-labeled databases and using one of the few publicly available annotated datasets, with potentially limited applicability to the problem at hand. As an alternative, we introduce a method for automatically generating realistic synthetic face images and accompanying facial landmark annotations. The proposed approach extends the automation capabilities of a commercial face modeling tool and allows large-scale generation of faces that fulfill user-defined requirements. In addition, full facial landmark annotations can be computed during the generation procedure, reducing the manual work required to generate a full training set to a few interactions in a graphical user interface. We describe the generation procedure in detail and demonstrate that the simulated images can be used for advanced computer vision tasks, namely training an active appearance model that detects facial landmarks in real-world photographs.
Citations: 0
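The key to avoiding manual labelling is that landmark positions are already known in the 3D model space of the face generator, so 2D annotations can be computed by projecting them with the same camera used for rendering. A minimal sketch of that idea (the pinhole model and the function names are illustrative assumptions, not the paper's actual pipeline):

```python
def project(point3d, focal=1.0):
    # Pinhole projection of a 3D model-space point onto the image plane.
    # Assumes the camera looks down +z, so point3d = (x, y, z) with z > 0.
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def annotate_landmarks(landmarks3d, focal=1.0):
    # Project every 3D landmark vertex; the result is a ready-made
    # 2D annotation for the rendered image, with no hand labelling.
    return [project(p, focal) for p in landmarks3d]

landmarks2d = annotate_landmarks([(2.0, 4.0, 2.0), (-1.0, 0.5, 1.0)])
```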
Prototype-based class-specific nonlinear subspace learning for large-scale face verification
Alexandros Iosifidis, M. Gabbouj
DOI: https://doi.org/10.1109/IPTA.2016.7820988
Abstract: In this paper, we describe a face verification method based on non-linear class-specific discriminant subspace learning. We follow the Kernel Spectral Regression approach and employ a prototype-based approximate kernel regression scheme in order to scale the method to large-scale nonlinear discriminant learning. Experiments on two publicly available facial image databases show the effectiveness of the proposed approach: it scales well with the data size and outperforms related approaches.
Citations: 2
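The prototype-based approximation replaces the full n x n kernel matrix of standard kernel methods with an n x m matrix computed against m << n prototype vectors, which is what makes the scheme scale to large datasets. A hedged pure-Python sketch of that reduced kernel computation (the RBF kernel choice and all names are illustrative, not the paper's exact formulation):

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel between two equal-length vectors.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def prototype_kernel_matrix(samples, prototypes, gamma=1.0):
    # n x m kernel matrix: each sample is compared only against the m
    # prototypes, reducing cost from O(n^2) to O(n*m) kernel evaluations.
    return [[rbf(x, p, gamma) for p in prototypes] for x in samples]

K = prototype_kernel_matrix([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]], [[0.0, 0.0]])
```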
Stain separation in digital bright field histopathology
L. Astola
DOI: https://doi.org/10.1109/IPTA.2016.7820956
Abstract: Digital pathology employs images acquired by imaging thin tissue samples through a microscope. The preparation of a sample, from biopsy to the glass slide entering the imaging device, is done manually, introducing large variability in the samples to be imaged. For visible contrast it is necessary to stain the samples prior to imaging; different stains attach to different compounds, elucidating the different cellular structures. For automatic analysis and visual comparability, the images need to be standardized to obtain consistent appearances regardless of differences in sample preparation. A standard approach is to computationally unmix the various stains, normalize each separate stain image, and recombine them. This paper describes a modification to a standard blind method for stain normalization. The performance is quantified against annotated expert data, and a theoretical analysis is presented to rationalize the new approach.
Citations: 8
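Stain unmixing in bright-field images conventionally works in optical-density space, where Beer-Lambert absorption makes stain contributions additive, so unmixing becomes a linear solve against a stain matrix. A toy two-stain, two-channel sketch of that standard step (the stain vectors below are made-up numbers for illustration, not calibrated values):

```python
import math

def optical_density(intensity, i0=255.0):
    # Beer-Lambert: OD = -log10(I / I0); clamp to avoid log(0) on dark pixels.
    return -math.log10(max(intensity, 1.0) / i0)

def unmix_2stain(od, m):
    # Solve od = M @ c for stain concentrations c with a 2x2 stain matrix M,
    # whose columns are the per-channel OD signatures of the two stains.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    c0 = ( m[1][1] * od[0] - m[0][1] * od[1]) / det
    c1 = (-m[1][0] * od[0] + m[0][0] * od[1]) / det
    return [c0, c1]

M = [[0.65, 0.07],   # channel 1 OD of stain A and stain B (illustrative)
     [0.70, 0.99]]   # channel 2 OD of stain A and stain B (illustrative)
```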
Characterization of hematologic malignancies based on discrete orthogonal moments
R. Nava, Germán González, J. Kybic, B. Escalante-Ramírez
DOI: https://doi.org/10.1109/IPTA.2016.7821039
Abstract: During the last decade leukemia and lymphomas have been a hot topic in the biomedical area. Their diagnosis is a time-consuming task that, in many cases, delays treatment. Discrete orthogonal moments (DOMs), on the other hand, are a tool recently introduced in biomedical image analysis. Here, we propose a combination of DOMs to aid the diagnosis of leukemia and lymphomas. We classify the IICBU2008-lymphoma dataset, which includes three hematologic malignancies: chronic lymphocytic leukemia, follicular lymphoma, and mantle cell lymphoma. Our methodology analyzes these diseases in the hematoxylin and eosin color space. We also include feature analysis to preserve the most discriminating characteristics of the malignant tissues. Finally, the samples are classified with kernel Fisher discriminant analysis, reaching an accuracy of 93.85%. The results show the proposal could be useful in different biomedical applications.
Citations: 2
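Discrete orthogonal moments project an image onto polynomials that are orthogonal directly on the pixel grid, avoiding the discretisation error of continuous moment families. A minimal sketch using the first two (unnormalised) discrete Tchebichef polynomials; this truncated form is an assumption for illustration only, since the paper's actual DOM combination is not specified here:

```python
def tchebichef(p, x, n):
    # Unnormalised discrete Tchebichef polynomials of order 0 and 1 on {0..n-1}.
    if p == 0:
        return 1.0
    if p == 1:
        return 2.0 * x + 1.0 - n
    raise NotImplementedError("higher orders follow a three-term recurrence")

def moment(img, p, q):
    # 2D moment T_pq = sum over (x, y) of t_p(y) * t_q(x) * f(x, y).
    n, m = len(img), len(img[0])
    return sum(tchebichef(p, y, n) * tchebichef(q, x, m) * img[y][x]
               for y in range(n) for x in range(m))
```

Orthogonality means the order-1 moment of a uniform image vanishes, so even low-order moments separate flat regions from structured tissue.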
Performance evaluation of a statistical and a neural network model for nonrigid shape-based registration
A. Psarrou, A. Angelopoulou, M. Mentzelopoulos, J. G. Rodríguez
DOI: https://doi.org/10.1109/IPTA.2016.7820990
Abstract: Shape-based registration methods are frequently encountered in computer vision, image processing and medical imaging. The registration problem is to find an optimal transformation/mapping between sets of rigid or non-rigid objects and to automatically solve for correspondences. In this paper we present a comparison of two different probabilistic methods, entropy-based modelling and the growing neural gas network (GNG), as general feature-based registration algorithms. With entropy, shape modelling is performed by connecting the point sets with the highest probability of curvature information, while with GNG the point sets are connected using nearest-neighbour relationships derived from competitive Hebbian learning. To compare performance we use different levels of shape deformation, starting with a simple shape (2D MRI brain ventricles) and moving to more complicated shapes such as hands. Quantitative and qualitative results are given for both sets.
Citations: 0
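In the GNG model, topology emerges from competitive Hebbian learning: for each input sample, the nearest and second-nearest nodes are found, an edge is created between them, and the winner moves toward the input. A pure-Python sketch of that single adaptation step (the learning rate and names are illustrative assumptions):

```python
def dist2(a, b):
    # Squared Euclidean distance between two points.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def gng_step(nodes, edges, sample, eps_winner=0.2):
    # Competitive Hebbian learning step: connect the two nodes nearest to
    # the input sample and move the winner a fraction eps_winner toward it.
    order = sorted(range(len(nodes)), key=lambda i: dist2(nodes[i], sample))
    s1, s2 = order[0], order[1]
    edges.add(frozenset((s1, s2)))
    nodes[s1] = [w + eps_winner * (x - w) for w, x in zip(nodes[s1], sample)]
    return s1, s2
```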
An accurate eye localization approach for smart embedded system
Zhaoqiang Xia, Wenhao Zhang, Fang Tan, Xiaoyi Feng, A. Hadid
DOI: https://doi.org/10.1109/IPTA.2016.7821006
Abstract: Eye localization is a vital procedure in many applications, such as face recognition and gaze tracking, and can further facilitate related procedures. Although many works have been devoted to localizing eyes in frontal facial images, most approaches cannot work effectively and efficiently in smart embedded systems (e.g., vehicle systems). In this paper, we propose an accurate eye localization approach for smart embedded systems. An illumination normalization procedure based on a perception model is used to remove illumination effects from facial images. The integral projection method is then employed to localize candidate eye positions. Support vector machine (SVM) classifiers are trained with spatial and intensity information to verify these candidates rapidly using compact 3-dimensional features. Based on the SVM outputs, the two candidates with the top scores are selected as the final eye positions. Extensive experiments on the extended Yale B, AR and ORL face datasets demonstrate that the proposed approach localizes eyes accurately and with fast computation.
Citations: 7
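The integral projection step reduces the 2D search to 1D: summing intensities along rows and columns yields profiles whose minima mark dark regions such as eyes. A minimal sketch of that candidate-localization step (grayscale image as a list of rows; the function names are illustrative):

```python
def integral_projections(img):
    # Row-wise and column-wise intensity sums of a grayscale image.
    horizontal = [sum(row) for row in img]
    vertical = [sum(col) for col in zip(*img)]
    return horizontal, vertical

def darkest_row(img):
    # Eye candidates lie near the minimum of the horizontal projection,
    # since eye regions are darker than the surrounding skin.
    horizontal, _ = integral_projections(img)
    return min(range(len(horizontal)), key=horizontal.__getitem__)
```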
Microaneurysm detection in retinal images using an ensemble classifier
M. Habib, R. Welikala, A. Hoppe, C. Owen, A. Rudnicka, S. Barman
DOI: https://doi.org/10.1109/IPTA.2016.7820998
Abstract: Diabetic Retinopathy (DR) is one of the leading causes of blindness amongst the working-age population. The presence of microaneurysms (MA) in retinal images is a pathognomonic sign of DR. In this work we present a novel combination of algorithms, applied to a public dataset, for automated detection of MA in colour fundus images of the retina. The proposed technique first detects an initial set of candidates using a Gaussian matched filter and then classifies these candidates to reduce the number of false positives. A Random Forest ensemble classifier using a set of 79 features (the most common features used in the literature) performs the classification. The algorithm was evaluated on a subset of 20 images from the MESSIDOR dataset. We show that the Random Forest classifier with the 79 features improves detection sensitivity compared to the K-Nearest Neighbours classifier proposed in other techniques. In addition, the Random Forest can rank features according to their importance, and we have ranked the 79 features accordingly. This ranking provides insight into the features most important for discriminating true MA candidates from spurious objects. Eccentricity, aspect ratio and moments are found to be among the important features.
Citations: 17
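The candidate stage correlates the image with a zero-mean Gaussian kernel, which responds strongly to the small blob-like intensity profile of a microaneurysm while scoring flat background as zero. A 1D pure-Python sketch of that matched filtering (the parameters are illustrative assumptions):

```python
import math

def gaussian_matched_kernel(sigma=1.0, radius=3):
    # Gaussian profile shifted to zero mean so uniform regions score 0.
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def matched_filter_1d(signal, kernel):
    # Correlation of the signal with the kernel, clipped at the borders;
    # the response peaks where the signal matches the kernel's blob shape.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += kv * signal[idx]
        out.append(acc)
    return out
```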
On semantic image segmentation using deep convolutional neural network with shortcuts and easy class extension
Chunlai Wang, Lukas Mauch, Ze Guo, Bin Yang
DOI: https://doi.org/10.1109/IPTA.2016.7821005
Abstract: In this paper we examine the use of deep convolutional neural networks for semantic image segmentation, which separates an input image into multiple regions corresponding to predefined object classes. We use an encoder-decoder structure and improve its convergence speed and segmentation accuracy by adding shortcuts between network layers. We also investigate how to extend an already trained model to new object classes, proposing a new strategy for class extension that requires only little training data and few class labels. In experiments on two street scene datasets we demonstrate the strength of shortcuts, study the contextual information encoded in the learned model, and show the effectiveness of our class extension method.
Citations: 13
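The shortcut idea: a layer's input bypasses the transformation and is added back to its output, so the block only has to learn a residual correction, and with near-zero initial weights the block starts close to the identity, which aids convergence. A minimal element-wise sketch of that connection pattern (not the paper's actual architecture):

```python
def shortcut_block(x, transform):
    # Output = transform(x) + x: the skip path carries x around the layer.
    fx = transform(x)
    return [f + xi for f, xi in zip(fx, x)]

# With a transform that initially outputs zeros, the block is the identity,
# so the signal (and gradients) pass through unchanged early in training.
identity_like = shortcut_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
```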
Fast feature matching for detailed point cloud generation
Daniel Berjón, R. Pagés, F. Morán
DOI: https://doi.org/10.1109/IPTA.2016.7820978
Abstract: Structure from motion is a very popular technique for obtaining three-dimensional point-cloud reconstructions of objects from unorganised sets of images by analysing the correspondences between feature points detected in those images. However, the point clouds stemming from common feature point extractors such as SIFT are frequently too sparse for reliable surface recovery. In this paper we show that alternative feature descriptors such as A-KAZE, which provide denser coverage of the images, yield better results and more detailed point clouds. However, the dramatically increased number of points per image poses a computational challenge. We propose a technique based on epipolar geometry restrictions to significantly cut down processing time, along with an efficient GPU implementation.
Citations: 8
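The speed-up rests on the epipolar constraint: a correct match (p1, p2) must satisfy p2^T F p1 = 0 for the fundamental matrix F, so candidate pairs far from the epipolar line can be discarded before expensive descriptor comparison. A sketch of that filter with homogeneous 2D points (the matrix F_TRANS below is a toy fundamental matrix for a pure horizontal translation, assumed for illustration):

```python
def epipolar_error(F, p1, p2):
    # |p2^T F p1| for homogeneous points p = (x, y, 1); 0 for a perfect match.
    Fp1 = [sum(F[i][j] * p1[j] for j in range(3)) for i in range(3)]
    return abs(sum(p2[i] * Fp1[i] for i in range(3)))

def filter_matches(F, matches, eps=1e-6):
    # Keep only candidate pairs consistent with the epipolar geometry.
    return [(p1, p2) for p1, p2 in matches if epipolar_error(F, p1, p2) < eps]

# Fundamental matrix of a pure translation along x: epipolar lines are
# horizontal, so matching points must share the same y coordinate.
F_TRANS = [[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]]
```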
CNN transfer learning for the automated diagnosis of celiac disease
Georg Wimmer, A. Vécsei, A. Uhl
DOI: https://doi.org/10.1109/IPTA.2016.7821020
Abstract: In this work, four well-known convolutional neural networks (CNNs) pretrained on the ImageNet database are applied to the computer-assisted diagnosis of celiac disease based on endoscopic images of the duodenum. The images are classified using three different transfer learning strategies and an experimental setup specifically adapted to the classification of endoscopic imagery. The CNNs are either used as fixed feature extractors, without any fine-tuning on our endoscopic celiac disease image database, or they are fine-tuned by training all layers of the CNN or only the fully connected layers. Classification is performed by the CNN SoftMax classifier as well as linear support vector machines. The CNN results are compared with those of four state-of-the-art image representations. We show that fine-tuning all layers of the networks achieves the best results and outperforms the comparison approaches.
Citations: 41
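The three transfer strategies differ only in which layer groups receive gradient updates. A framework-agnostic sketch of that freezing logic (the layer-naming convention "fc*" for fully connected layers and the strategy names are assumptions for illustration):

```python
def apply_strategy(layers, strategy):
    # layers: list of dicts with a "name" key; sets a "trainable" flag
    # per layer according to the chosen transfer learning strategy.
    for layer in layers:
        if strategy == "feature_extractor":
            layer["trainable"] = False          # frozen CNN, external classifier
        elif strategy == "fine_tune_fc":
            layer["trainable"] = layer["name"].startswith("fc")
        elif strategy == "fine_tune_all":
            layer["trainable"] = True           # best-performing setup in the paper
        else:
            raise ValueError("unknown strategy: " + strategy)
    return layers

net = [{"name": "conv1"}, {"name": "conv5"}, {"name": "fc6"}, {"name": "fc8"}]
tuned = apply_strategy(net, "fine_tune_fc")
```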