2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA) - Latest Publications

3D Segmentation of Residual Thyroid Tissue Using Constrained Region Growing and Voting Strategies
Guoqing Bao, Chaojie Zheng, Panli Li, Hui Cui, Xiuying Wang, Shaoli Song, Gang Huang, D. Feng
{"title":"3D Segmentation of Residual Thyroid Tissue Using Constrained Region Growing and Voting Strategies","authors":"Guoqing Bao, Chaojie Zheng, Panli Li, Hui Cui, Xiuying Wang, Shaoli Song, Gang Huang, D. Feng","doi":"10.1109/DICTA.2017.8227384","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227384","url":null,"abstract":"The measurement of residual thyroid tissue after thyroidectomy is crucial for the precise quantification of thyroid cancer treatment. Accurate residual thyroid tissue segmentation from CT images is challenging due to the indistinct tissue boundary. We propose a vote-in & vote-out region propagation model for residual thyroid tissue segmentation which incorporates global and local constraints and two voting strategies. The constraints were initially estimated from the given seeds and adaptively adjusted during the propagation process. The voting strategies were developed to decrease the opportunities of merging unexpected voxels around the uncertain boundaries. The experiment results over clinical patient studies demonstrated that the proposed method significantly improved the segmentation accuracy in terms of spatial overlap and shape similarity. Our method achieved an average Volume Overlap Error of 14.44±7.55 %, Relative Volume Difference of 9.42±20.31 %, Average Surface Distance of 0.12±0.05 mm and Maximum Surface Distance of 1.34±0.62 mm, with an average computation time of 2.68 seconds.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"2007 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125615968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
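
To make the vote-based propagation idea concrete, here is a minimal Python sketch that grows a region from a seed under a fixed intensity interval and then applies a simple vote-out pass removing weakly supported voxels. The intensity bounds, 6-connectivity and vote threshold are illustrative assumptions; the paper's adaptive global/local constraints and exact voting rules are not reproduced here.

from collections import deque
import numpy as np
from scipy.ndimage import convolve

OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_region(volume, seed, low, high):
    """Plain 6-connected region growing constrained by an intensity interval (assumed constraint)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in OFFSETS:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

def vote_out(mask, min_votes=3):
    """Drop voxels supported by fewer than min_votes segmented 6-neighbours (illustrative vote-out)."""
    kernel = np.zeros((3, 3, 3), dtype=int)
    for dz, dy, dx in OFFSETS:
        kernel[1 + dz, 1 + dy, 1 + dx] = 1
    support = convolve(mask.astype(int), kernel, mode="constant")
    return mask & (support >= min_votes)

# toy usage: a bright cube inside a dark volume
vol = np.zeros((20, 20, 20))
vol[5:12, 5:12, 5:12] = 100.0
seg = vote_out(grow_region(vol, (8, 8, 8), low=50.0, high=150.0))
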
An Automatic Student Verification System Utilising Off-Line Thai Name Components
Hemmaphan Suwanwiwat, Abhijit Das, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein
{"title":"An Automatic Student Verification System Utilising Off-Line Thai Name Components","authors":"Hemmaphan Suwanwiwat, Abhijit Das, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein","doi":"10.1109/DICTA.2017.8227406","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227406","url":null,"abstract":"This research proposed an automatic student identification and verification system utilising off-line Thai name components. The Thai name components consist of first and last names. Dense texture-based feature descriptors were able to yield encouraging results when applied to different handwritten text recognition scenarios. As a result, the authors employed such features in investigating their performance on Thai name component verification system. In this research, Dense-Local Binary Pattern, Dense-Local Directional Pattern, and Local Binary Pattern combined with Local Directional Pattern were employed. A base-line shape/feature i.e. Hidden Markov Model (HMM) was also utilised in this study. As there is no dataset on Thai name verification in the literature, a dataset is proposed for a Thai name verification system. The name component samples were collected from high school students. It consists of 8,400 name components (first and last names) from 100 students. Each student provided 60 genuine name components, and each of the name components was forged by 12 other students. An encouraging result was found employing the above-mentioned features on the proposed dataset.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"41 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125750465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
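
Since the entry above relies on dense Local Binary Pattern texture descriptors, the following Python sketch shows one common way to build such a feature: a uniform-LBP histogram per grid cell, concatenated into one vector. The grid size, radius, number of sampling points and the "uniform" mapping are illustrative assumptions, not the authors' settings.

import numpy as np
from skimage.feature import local_binary_pattern

def dense_lbp_descriptor(image, points=8, radius=1, grid=(4, 4)):
    """Uniform-LBP histogram per grid cell, concatenated into one feature vector."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                      # P + 2 codes under the "uniform" mapping
    features = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            features.append(hist)
    return np.concatenate(features)

# toy usage: descriptor for a random 64x128 grayscale "name component" image
img = (np.random.rand(64, 128) * 255).astype(np.uint8)
descriptor = dense_lbp_descriptor(img)       # length 4*4*(8+2) = 160
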
Light-Field Camera Calibration from Raw Images
Charles-Antoine Noury, Céline Teulière, M. Dhome
{"title":"Light-Field Camera Calibration from Raw Images","authors":"Charles-Antoine Noury, Céline Teulière, M. Dhome","doi":"10.1109/DICTA.2017.8227459","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227459","url":null,"abstract":"This paper presents a new calibration method for lenslet-based plenoptic cameras. While most existing approaches require the computation of sub-aperture images or depth maps which quality depends on some calibration parameters, the proposed process uses the raw image directly. We detect micro-images containing checkerboard corners and use a pattern registration method to estimate their positions with subpixelic accuracy. We present a more complete geometrical model than previous work composed of 16 intrinsic parameters. This model relates 3D points to their corresponding image projections. We introduce a new cost function based on reprojection errors of both checkerboard corners and micro-lenses centers in the raw image space. After the initialization process, all intrinsic and extrinsic parameters are refined with a non-linear optimization. The proposed method is validated in simulation as well as on real images.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114268449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
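
As a rough illustration of the refinement step described above, the sketch below minimises reprojection residuals with a non-linear least-squares solver. A plain pinhole model with a single pose stands in for the paper's 16-parameter plenoptic model, purely as an assumption for illustration.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, points_3d, observed_2d):
    """params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz] (assumed pinhole intrinsics + pose)."""
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    cam = points_3d @ R.T + t                              # world -> camera coordinates
    proj = np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))
    return (proj - observed_2d).ravel()                    # reprojection residuals

# toy usage: recover parameters from synthetic, noise-free observations
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (30, 3)) + np.array([0, 0, 5])
true = np.array([800, 800, 320, 240, 0.1, -0.05, 0.02, 0.1, -0.2, 0.3])
obs = residuals(true, pts, np.zeros((30, 2))).reshape(-1, 2)
fit = least_squares(residuals, x0=true + 0.05, args=(pts, obs))
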
Material Based Boundary Detection in Hyperspectral Images
S. Al-khafaji, Ali Zia, J. Zhou, Alan Wee-Chung Liew
{"title":"Material Based Boundary Detection in Hyperspectral Images","authors":"S. Al-khafaji, Ali Zia, J. Zhou, Alan Wee-Chung Liew","doi":"10.1109/DICTA.2017.8227462","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227462","url":null,"abstract":"Boundary detection in hyperspectral image (HSI) is a challenging task due to high data dimensionality and the that is distributed over the spectral bands. For this reason, there is a dearth of research on boundary detection in HSI. In this paper, we propose a spectral-spatial feature based statistical co-occurrence method for this task. We adopt probability density function (PDF) to estimate the co-occurrence of features at neighboring pixel pairs. Such cooccurrence is rare at the boundary and repeated within a region. To fully explore the material information embedded in HSI, joint spectral-spatial features are extracted at each pixel. The PDF values are then used to construct an affinity matrix for all pixels. After that, a spectral clustering algorithm is applied on the affinity matrix to produce boundaries. Our algorithm is evaluated on a dataset of real-world HSIs and compared with two alternative approaches. The results show that the proposed method is very effective in exploring object boundaries from HSI images.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125173049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
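
The following Python sketch illustrates the clustering stage described above: an affinity matrix built from joint spectral-spatial features is fed to spectral clustering, and boundaries are read off the label map. The RBF affinity, the simple position features and the toy image size are assumptions, not the authors' exact pipeline.

import numpy as np
from sklearn.cluster import SpectralClustering

def segment_and_find_boundaries(hsi_cube, n_regions=4, gamma=1.0):
    h, w, bands = hsi_cube.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # joint spectral-spatial feature: spectrum concatenated with normalised (row, col)
    feats = np.concatenate([hsi_cube.reshape(-1, bands),
                            np.stack([yy.ravel(), xx.ravel()], axis=1) / max(h, w)],
                           axis=1)
    d2 = np.square(feats[:, None, :] - feats[None, :, :]).sum(-1)
    affinity = np.exp(-gamma * d2)                          # assumed RBF affinity
    labels = SpectralClustering(n_clusters=n_regions, affinity="precomputed",
                                random_state=0).fit_predict(affinity).reshape(h, w)
    # boundary = pixels whose right or bottom neighbour has a different label
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return labels, boundary

labels, boundary = segment_and_find_boundaries(np.random.rand(16, 16, 10))
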
Character Recognition via a Compact Convolutional Neural Network
Haifeng Zhao, Yong Hu, Jinxia Zhang
{"title":"Character Recognition via a Compact Convolutional Neural Network","authors":"Haifeng Zhao, Yong Hu, Jinxia Zhang","doi":"10.1109/DICTA.2017.8227414","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227414","url":null,"abstract":"Optical Character Recognition (OCR) in the scanned documents has been a well-studied problem in the past. However, when these characters come from the natural scenes, it becomes a much more challenging problem, as there exist many difficulties in these images, e.g., illumination variance, cluttered backgrounds, geometry distortion. In this paper, we propose to use a deep learning method that based on the convolutional neural networks to recognize this kind of characters and word in the scene images. Based on the original VGG-Net, we focus on how to make a compact architecture on this net, and get both the character and word recognition results under the same framework. We conducted several experiments on the benchmark datasets of the natural scene images. The experiments has shown that our method can achieve the state-of-art performance and at the same time has a more compact representation.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126970584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
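
As an illustration of a compact VGG-style classifier of the kind discussed above, the sketch below defines a small network for 32x32 grayscale character crops. The depth, channel widths and 36-class output (digits plus letters) are illustrative assumptions, not the paper's actual compact architecture.

import torch
import torch.nn as nn

def make_compact_vgg(num_classes=36):
    """Small VGG-style stack: 3x3 convolutions, 2x2 pooling, two fully connected layers."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                              # 32x32 -> 16x16
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                              # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 256), nn.ReLU(inplace=True),
        nn.Linear(256, num_classes),
    )

model = make_compact_vgg()
logits = model(torch.randn(4, 1, 32, 32))             # batch of 4 crops -> (4, 36)
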
A Pixel-Based Skin Segmentation in Psoriasis Images Using Committee of Machine Learning Classifiers
Y. George, M. Aldeen, R. Garnavi
{"title":"A Pixel-Based Skin Segmentation in Psoriasis Images Using Committee of Machine Learning Classifiers","authors":"Y. George, M. Aldeen, R. Garnavi","doi":"10.1109/DICTA.2017.8227398","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227398","url":null,"abstract":"Skin segmentation, which involves detecting human skin areas in an image, is an important process for skin disease analysis. The aim of this paper is to identify the skin regions in a newly collected set of psoriasis images. For this purpose, we present a committee of machine learning (ML) classifiers. A psoriasis training set is first collected by using pixel values in five different color spaces. Experiments are then performed to investigate the impact of both the size of the training set and the number of features per pixel, on the performance of each skin classifier. A committee of classifiers is constructed by combining the classification results obtained from seven distinct skin classifiers using majority voting. Also, we propose a refinement method using morphological operations to improve the resulted skin map. We use a set of 100 psoriasis images for training and testing. For comparative evaluation, we consider 3283 face skin images. Finally, F-measure and accuracy are used to evaluate the performance of the classifiers. The experimental results show that the size of the training set does not greatly influence the overall performance. The results also indicate that the feature vector using pixel values in the five color spaces has higher performance than any subset of these spaces. Comparative study suggests that the proposed method performs reasonably with both psoriasis and faces skin images, with accuracy of 97.4% and 80.41% respectively.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115199548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
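
A majority-voting committee of pixel classifiers, as described above, can be assembled along the following lines in Python. The three member classifiers and the six-value colour feature are stand-ins for the paper's seven classifiers and five colour spaces; the morphological refinement step is not shown.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

committee = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("lr", LogisticRegression(max_iter=500)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard")                                   # majority vote over hard labels

# toy data: 6 features per pixel (e.g. RGB + HSV values), binary skin / non-skin label
X = np.random.rand(1000, 6)
y = (X[:, 0] > 0.5).astype(int)
committee.fit(X[:800], y[:800])
accuracy = committee.score(X[800:], y[800:])
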
Learning Geometric Equivalence between Patterns Using Embedding Neural Networks
Olga Moskvyak, F. Maire
{"title":"Learning Geometric Equivalence between Patterns Using Embedding Neural Networks","authors":"Olga Moskvyak, F. Maire","doi":"10.1109/DICTA.2017.8227457","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227457","url":null,"abstract":"Despite impressive results in object classification, verification and recognition, most deep neural network based recognition systems become brittle when the view point of the camera changes dramatically. Robustness to geometric transformations is highly desirable for applications like wild life monitoring where there is no control on the pose of the objects of interest. The images of different objects viewed from various observation points define equivalence classes where by definition two images are said to be equivalent if they are views from the same object. These equivalence classes can be learned via embeddings that map the input images to vectors of real numbers. During training, equivalent images are mapped to vectors that get pulled closer together, whereas if the images are not equivalent their associated vectors get pulled apart. In this work, we present an effective deep neural network model for learning the homographic equivalence between patterns. The long term aim of this research is to develop more robust manta ray recognizers. Manta rays bear unique natural spot patterns on their bellies. Visual identification based on these patterns from underwater images enables a better understanding of habitat use by monitoring individuals within populations. We test our model on a dataset of artificially generated patterns that resemble natural patterning. Our experiments demonstrate that the proposed architecture is able to discriminate between patterns subjected to large homographic transformations.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124011938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
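
The embedding idea above can be sketched as follows: a small CNN maps pattern images to vectors, and a triplet loss pulls views of the same pattern together while pushing different patterns apart. The tiny network, 64-dimensional embedding and margin value are assumptions, and the triplet loss is used here as one common choice; the paper may use a different pairwise formulation.

import torch
import torch.nn as nn

embedder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(), nn.Linear(32 * 16 * 16, 64))                    # 64-D embedding

loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-3)

# one toy training step: anchor and positive are views of the same pattern,
# negative is a different pattern (random tensors stand in for real images)
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = loss_fn(embedder(anchor), embedder(positive), embedder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
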
Spectral-Spatial Hyperspectral Image Classification via Boundary-Adaptive Deep Learning
Atif Mughees, L. Tao
{"title":"Spectral-Spatial Hyperspectral Image Classification via Boundary-Adaptive Deep Learning","authors":"Atif Mughees, L. Tao","doi":"10.1109/DICTA.2017.8227490","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227490","url":null,"abstract":"Deep learning based hyperspectral image (HSI) classification have recently shown promising performance. However, complex network architecture, tedious training process and effective utilization of spatial/contextual information in deep network limits the application and performance of deep learning. In this paper, for an effective spectral-spatial feature extraction , an improved deep network, spatial adaptive network (SANet) approach is proposed which exploits spatial contextual information and spectral characteristics to construct a more simplified deep network which leads to more powerful feature representation for effective HSI classification. SANet is established from the simple structure of a principal component analysis network. First spatial structural information is extracted and combined with informative spectral channels followed by an object-level classification using SANet based decision fusion approach. It integrates spatial-contextual outcome and spectral characteristics into a SANet framework for robust spectral-spatial HSI classification. Integration of local structural regularity and spectral similarity into simplified deep SANet has significant effect on the classification performance. Experimental results on popular standard HSI datasets reveal that proposed SANet technique produce better classification results than existing well known techniques.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123685011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
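
The "principal component analysis network" structure mentioned above can be illustrated by learning a small filter bank as the leading principal components of image patches and convolving with it, as in the sketch below. The patch size, filter count and single-band input are illustrative assumptions; the paper's full SANet and decision fusion stages are not shown.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal import correlate2d

def learn_pca_filters(image, patch=7, n_filters=4):
    """Learn filters as the leading principal components of mean-removed patches."""
    patches = sliding_window_view(image, (patch, patch)).reshape(-1, patch * patch)
    patches = patches - patches.mean(axis=1, keepdims=True)    # remove patch mean
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

def pca_feature_maps(image, filters):
    """Produce one feature map per learned filter by cross-correlation."""
    return np.stack([correlate2d(image, f, mode="same") for f in filters])

band = np.random.rand(64, 64)                # e.g. one informative spectral channel
maps = pca_feature_maps(band, learn_pca_filters(band))
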
Action Parsing Using Context Features
N. Mehrseresht
{"title":"Action Parsing Using Context Features","authors":"N. Mehrseresht","doi":"10.1109/DICTA.2017.8227399","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227399","url":null,"abstract":"We propose an action parsing algorithm to parse a video sequence containing an unknown number of actions into its action segments. We argue that context information, particularly the temporal information about other actions in the video sequence, is valuable for action segmentation. The proposed parsing algorithm temporally segments the video sequence into action segments. The optimal temporal segmentation is found using a dynamic programming search algorithm that optimizes the overall classification confidence score. The classification score of each segment is determined using local features calculated from that segment as well as context features calculated from other candidate action segments of the sequence. Experimental results on the Breakfast activity data-set showed improved segmentation accuracy compared to existing state-of-the-art parsing techniques.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123725915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
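
The dynamic-programming search described above can be sketched as follows: choose temporal cut points that maximise the total confidence of the resulting segments. The dummy segment scorer and the minimum/maximum segment lengths are assumptions; the paper scores segments with local plus context features from trained classifiers.

import numpy as np

def parse_actions(n_frames, segment_score, min_len=5, max_len=50):
    """best[t] = best total score of any segmentation of frames [0, t)."""
    best = np.full(n_frames + 1, -np.inf)
    best[0] = 0.0
    back = np.zeros(n_frames + 1, dtype=int)
    for t in range(1, n_frames + 1):
        for s in range(max(0, t - max_len), t - min_len + 1):
            score = best[s] + segment_score(s, t)
            if score > best[t]:
                best[t], back[t] = score, s
    cuts, t = [], n_frames
    while t > 0:                              # recover segment boundaries
        cuts.append((back[t], t))
        t = back[t]
    return list(reversed(cuts))

# toy usage: a scorer that favours segments of roughly 20 frames
segments = parse_actions(100, lambda s, t: -abs((t - s) - 20))
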
Color Mapping for 3D Geometric Models without Reference Image Locations
Fei Li, Yunfan Du, Hu Tian, Rujie Liu
{"title":"Color Mapping for 3D Geometric Models without Reference Image Locations","authors":"Fei Li, Yunfan Du, Hu Tian, Rujie Liu","doi":"10.1109/DICTA.2017.8227418","DOIUrl":"https://doi.org/10.1109/DICTA.2017.8227418","url":null,"abstract":"Color mapping for 3D models with captured images is a classical problem in computer vision. Typically, registration between 3D model and images is assumed to be provided, otherwise corresponding points need to be labeled. For many applications, 3D model and images are acquired from different devices, since registration cannot be directly obtained, manual labeling has to be adopted. In this paper, a novel color mapping approach is proposed to deal with the case when only a 3D model and a set of multi-view color images are given. A new 3D model is reconstructed by all the images. By aligning the reconstructed model with the input model, the reference image locations are calculated. After camera pose refinement, the color value of each point in the input model is finally determined. In the whole process, we only need to select several point pairs between the two models, and all the other processes are conducted automatically. Experimental results demonstrate the effectiveness of our proposal.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":" 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113952867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
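
The final per-point colouring step can be illustrated as below: given a refined camera pose, each model point is projected into a reference image and its colour is sampled. The pinhole intrinsics and nearest-pixel sampling are assumptions, and the reconstruction, alignment and pose-refinement stages described in the abstract are not shown.

import numpy as np

def colour_points(points, image, K, R, t):
    """Project 3D points with pose (R, t) and intrinsics K, then sample pixel colours."""
    cam = points @ R.T + t                        # world -> camera coordinates
    uv = cam @ K.T                                # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]
    colours = np.zeros((len(points), 3), dtype=image.dtype)
    h, w = image.shape[:2]
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    valid = (cam[:, 2] > 0) & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    colours[valid] = image[rows[valid], cols[valid]]
    return colours

# toy usage: 100 points in front of an identity-pose camera, random RGB image
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.random.rand(100, 3) + np.array([0, 0, 3])
rgb = colour_points(pts, np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8),
                    K, np.eye(3), np.zeros(3))
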