Title: Pollen Grain Classification using Geometrical and Textural Features
Authors: Georgios C. Manikis, K. Marias, E. Alissandrakis, L. Perrotto, E. Savvidaki, N. Vidakis
DOI: https://doi.org/10.1109/IST48021.2019.9010563
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: This study presents an image analysis framework coupled with machine learning algorithms for the classification of microscopy pollen grain images. Pollen grain classification has received notable attention in a wide range of applications such as paleontology, honey certification, forecasting of allergies caused by airborne pollen, and food technology. It requires an extensive qualitative process that is mostly performed manually by an expert. Although manual classification shows satisfactory performance, it may suffer from intra- and inter-observer variability and is time-consuming. This study benefits from advances in image processing and machine learning and proposes a fully automated analysis pipeline aiming to: (a) calculate morphological characteristics from images acquired with a cost-effective microscope, and (b) classify the images into 6 pollen classes. A private dataset of 564 images from the Department of Agriculture of the Hellenic Mediterranean University in Crete was used in this study. A Random Forest (RF) classifier was used to classify the images. A repeated nested cross-validation (nested-CV) scheme was used to estimate the generalization performance and prevent overfitting. Image preprocessing, extraction of geometric and textural characteristics, and feature selection were implemented prior to the assessment of the classification performance, and a mean accuracy of 88.24% was reported.

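For readers who want a concrete picture of the evaluation scheme described above, the following is a minimal sketch of repeated nested cross-validation around a Random Forest classifier using scikit-learn. The feature matrix, labels, and parameter grid are placeholders, not the authors' actual features or settings.

```python
# Minimal sketch: repeated nested cross-validation with a Random Forest.
# X (n_samples x n_features) would hold geometric/textural features per grain
# image and y the 6 pollen class labels; both are placeholders here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(564, 20))          # placeholder feature matrix
y = rng.integers(0, 6, size=564)        # placeholder labels for 6 classes

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
scores = []
for repeat in range(5):                  # repeated nested CV
    inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=repeat)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=repeat)
    model = GridSearchCV(RandomForestClassifier(random_state=repeat),
                         param_grid, cv=inner)             # tuning in the inner loop
    scores.extend(cross_val_score(model, X, y, cv=outer))  # generalization estimate
print(f"mean accuracy: {np.mean(scores):.4f}")
```
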
Title: An Accurate System for Prostate Cancer Localization from Diffusion-Weighted MRI
Authors: Islam R. Abdelmaksoud, M. Ghazal, A. Shalaby, M. Elmogy, A. Aboulfotouh, M. El-Ghar, R. Keynton, A. El-Baz
DOI: https://doi.org/10.1109/IST48021.2019.9010552
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: This paper proposes a computer-aided diagnosis (CAD) system for localizing prostate cancer from diffusion-weighted magnetic resonance imaging (DW-MRI). The system uses DW-MRI data sets acquired at four b-values: 100, 200, 300, and 400 s/mm². The first step in the proposed system is prostate segmentation using a level set method. The evolution of this level set is guided not only by the intensity of the prostate voxels but also by the shape prior of the prostate and the voxels' spatial relationships. The second step calculates the apparent diffusion coefficient (ADC) maps of the prostate regions as a discriminating feature between malignant and healthy cases. These ADC maps are used in the last step of the CAD system to fine-tune a pretrained convolutional neural network (CNN) to identify the ADC maps with malignant tumors. The accuracy of the proposed system was evaluated using 40% of the ADC maps, while the other 60% were used to fine-tune the pretrained CNN model. The proposed CAD system achieved an average area under the curve (AUC) of 0.95 across the four b-values.

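The ADC maps mentioned in the second step are conventionally obtained from the monoexponential diffusion model S(b) = S0·exp(-b·ADC). A hedged sketch of a voxel-wise log-linear fit over the four b-values is shown below; the array shapes and variable names are illustrative and not taken from the paper.

```python
# Sketch: voxel-wise ADC map from DW-MRI volumes acquired at several b-values,
# using a log-linear least-squares fit of S(b) = S0 * exp(-b * ADC).
import numpy as np

def adc_map(volumes, b_values):
    """volumes: array of shape (n_b, X, Y, Z); b_values: length-n_b sequence."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.maximum(volumes, 1e-6))        # avoid log(0)
    flat = log_s.reshape(len(b), -1)
    # Fit log S = log S0 - b * ADC for every voxel at once.
    slope, _ = np.polyfit(b, flat, deg=1)            # slope = -ADC per voxel
    return (-slope).reshape(volumes.shape[1:])

b_vals = [100, 200, 300, 400]                        # s/mm^2, as in the paper
dwi = np.random.rand(4, 64, 64, 20) + 0.1            # placeholder DW-MRI data
adc = adc_map(dwi, b_vals)
```
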
Title: Effect of stimulation patterns on bladder volume measurement based on fringe effect of EIT sensors
Authors: Xiaofeng Liang, Lijun Xu, W. Tian, Yuedong Xie, Jiangtao Sun
DOI: https://doi.org/10.1109/IST48021.2019.9010111
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: Real-time monitoring of bladder volume is necessary for patients with bladder dysfunction. Electrical impedance tomography (EIT) has the potential to be used for bladder volume measurement due to its advantages of noninvasive and real-time sensing. To overcome the sensitivity of conventional EIT measurement methods to urine conductivity, the fringe effect of EIT sensors is explored for bladder volume measurement. To find the best stimulation pattern, this paper simulates seven stimulation patterns (indexed by an integer value A) with a 16-electrode EIT sensor. It is also investigated how the stimulation patterns interact with two typical electrode arrangements, i.e. ring and semicircle. The sensitivity distribution and characteristic values related to the fringe effect of the EIT sensors are used as evaluation criteria. The results show that when A=2 and the electrode arrangement is the semicircle, the bladder region has the highest mean sensitivity. Using the semicircle arrangement ensures that all stimulation patterns except A=6 and A=7 have satisfactory measurement sensitivities and smaller errors under changes in urine conductivity.

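The integer A above appears to index the stimulation pattern; a common convention is to let it denote the spacing between the two driven electrodes. The sketch below enumerates such stimulation pairs for a 16-electrode sensor under that assumption; it is a generic construction, not a confirmed reproduction of the paper's patterns.

```python
# Sketch: enumerate stimulation electrode pairs for a 16-electrode EIT sensor,
# assuming the pattern indicator A is the spacing between the driven electrodes
# (A = 1 would be the usual adjacent pattern, A = 8 the opposite pattern).
N_ELECTRODES = 16

def stimulation_pairs(spacing_a, n=N_ELECTRODES):
    return [(i, (i + spacing_a) % n) for i in range(n)]

for a in range(1, 8):                       # the seven simulated patterns
    pairs = stimulation_pairs(a)
    print(f"A={a}: first pairs {pairs[:3]} ... ({len(pairs)} stimulations)")
```
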
Title: Comparison of machine learning methods for multiphase flowrate prediction
Authors: Zhenyu Jiang, Haokun Wang, Yunjie Yang, Yi Li
DOI: https://doi.org/10.1109/IST48021.2019.9010450
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: In this paper, three prevailing machine learning methods, i.e. Deep Neural Network (DNN), Support Vector Machine (SVM) and Gradient Boosting Decision Tree (GBDT) models, were investigated and compared for estimating the flowrate of oil/gas/water three-phase flow. Time-series differential pressure signals collected from a Venturi tube, together with pressure and temperature measurements, were used as input. Multiphase flow experiments were conducted on a laboratory-scale multiphase flow facility. The experimental results suggest that the DNN- and SVM-based methods were able to achieve accurate and reliable estimation of multiphase flowrate, whilst GBDT failed to fit the estimation process well. Another finding that emerged from this study is that the volumetric gas-phase flowrate can also be accurately predicted with the SVM model.

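A hedged sketch of how such a model comparison could be set up with scikit-learn regressors is given below; the input features (Venturi differential pressure statistics, pressure, temperature) and flowrate targets are synthetic placeholders, and the models are not tuned as in the paper.

```python
# Sketch: comparing an MLP (stand-in for the DNN), SVM and gradient-boosting
# regressors for flowrate estimation; inputs and targets are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))      # e.g. Venturi dP features + pressure + temperature
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)   # placeholder flowrate

models = {
    "DNN (MLP)": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "GBDT": GradientBoostingRegressor(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```
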
Title: GPU-accelerated Simulator for Optical Tomography applied to Two-Phase Flows
Authors: R. S. Bernardelli, E. N. Santos, R. Morales, D. Pipa, M. J. Silva
DOI: https://doi.org/10.1109/IST48021.2019.9010472
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: Optical tomography (OT) is a modality of tomographic imaging that can provide cross-sectional imaging of phase distributions in two-phase pipe flows, and thus has potential application in process monitoring. Due to the strong effect of refraction and reflection at phase boundaries (e.g. gas-liquid interfaces), OT measurements of two-phase flows cannot be modeled using hard-field assumptions, which renders traditional tomographic reconstruction techniques unsuitable for this problem. In this paper, we present a GPU-accelerated system capable of simulating light transport in an OT imaging system. Sinograms of real OT measurements of a phantom are visually compared with simulations, yielding positive results. Quantitative validation of the simulator is left for future work.

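The refraction that breaks the hard-field assumption can be modeled ray by ray with the vector form of Snell's law. The helper below is a generic geometry sketch of that step, not the authors' GPU implementation.

```python
# Sketch: vector form of Snell's law for refracting a ray at a gas-liquid
# interface; returns None on total internal reflection.
import numpy as np

def refract(direction, normal, n1, n2):
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                      # make sure the normal opposes the ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                    # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Example: ray hitting a water surface (n ~ 1.33) from air at 45 degrees.
ray = np.array([1.0, -1.0, 0.0])
surface_normal = np.array([0.0, 1.0, 0.0])
print(refract(ray, surface_normal, 1.0, 1.33))
```
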
Title: Learning Adversarially Enhanced Heatmaps for Aorta Segmentation in CTA
Authors: Wenji Wang, Haogang Zhu
DOI: https://doi.org/10.1109/IST48021.2019.9010225
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: In this work, we propose a method that combines ADversarially enhanced HeatMaps (AD-HM) to segment the aorta from CTA (Computed Tomography Angiography). The intuition behind AD-HM is that heatmaps encompass rich information on the locations of the targets. The positions of the aorta are relatively regular in CTA, so training with heatmaps exploits this positional information to boost the segmentation results. The quality of the heatmaps can be further enhanced with adversarial learning to refine the performance. AD-HM can be embedded into almost any state-of-the-art deep segmentation network off the shelf. We collected 111 CTA volumes totaling 79,082 slices to verify the effectiveness of our method. The training set consists of 104 volumes (74,000 slices) drawn from the dataset. The remaining 5,082 slices from 7 CTA samples are reserved for validating the algorithm, and the results are reported on this validation set. Our experiments with 7 state-of-the-art deep segmentation networks demonstrate the effectiveness of our method. The absolute improvement in IoU (Intersection over Union) of the aorta across the 7 models is 1.77% on average, with a minimum improvement of 0.8% (UNet: 86.5% → 87.3%) and a maximum improvement of 3.4% (SegNet: 83.8% → 87.2%).

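The IoU figures quoted above follow the standard Intersection-over-Union definition for binary masks; a reference computation is sketched below (generic metric code, independent of the paper's implementation).

```python
# Sketch: Intersection-over-Union (IoU) between a predicted and a ground-truth
# binary aorta mask, the metric used to report the improvements above.
import numpy as np

def iou(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

pred = np.zeros((128, 128), dtype=np.uint8); pred[40:90, 40:90] = 1
gt = np.zeros((128, 128), dtype=np.uint8); gt[50:100, 50:100] = 1
print(f"IoU = {iou(pred, gt):.3f}")
```
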
Title: Classification System with Capability to Reject Unknowns
Authors: Soma Shiraishi, Katsumi Kikuchi, K. Iwamoto
DOI: https://doi.org/10.1109/IST48021.2019.9010169
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: In this paper, we propose a novel method for object classification with the capability to reject unknown inputs. In real-world applications such as image-recognition-based checkout systems, it is crucial to reject unknown inputs while correctly classifying registered objects. Conventional deep-learning-based classification systems with softmax outputs suffer from overconfident scores on unknown objects. We tackle this problem with two approaches. First, we incorporate a metric-learning-based method proposed for face verification into object classification. Second, we utilize available unregistered objects (known unknowns) in the training phase through a novel "Margined Unknown Loss". In the experiments, we show the effectiveness of the proposed method by confirming that it outperforms conventional softmax-based approaches that also use the known unknowns, on two datasets, the MNIST dataset and a retail product dataset, in terms of recall at a low false-positive rate.

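The exact form of the Margined Unknown Loss is not given in the abstract, so the sketch below only illustrates the general rejection idea in a metric-learning setting: compare an embedding against per-class prototypes and reject when the best similarity falls below a threshold. All names and thresholds are hypothetical.

```python
# Sketch: open-set rejection by thresholding cosine similarity to class
# prototypes in an embedding space. Illustrative only; the paper's metric
# learning model and Margined Unknown Loss are not reproduced here.
import numpy as np

def classify_or_reject(embedding, prototypes, threshold=0.7):
    """prototypes: (n_classes, dim) unit vectors; embedding: (dim,) vector."""
    e = embedding / np.linalg.norm(embedding)
    sims = prototypes @ e                      # cosine similarities
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None                            # reject as unknown
    return best

rng = np.random.default_rng(0)
protos = rng.normal(size=(10, 64))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)
print(classify_or_reject(rng.normal(size=64), protos))
```
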
Title: Design and Optimization of Liquid Level Sensor based on Electrical Tomography
Authors: Jingdai Cheng, Shui Liu, Yi Li
DOI: https://doi.org/10.1109/IST48021.2019.9010148
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: The three-phase separator, containing three layers of dynamically changing oil-water emulsion, is an indispensable piece of equipment in the petroleum industry. However, the current mainstream liquid level measurement methods have many shortcomings: despite their high cost, they provide no visualization and cannot detect the liquid levels of multi-layer media. In this paper, a more reliable liquid level detection method is obtained by combining ECT (electrical capacitance tomography) and ERT (electrical resistance tomography) technology, optimizing the structure and size of the liquid level sensor, and using an image reconstruction method.

Title: Resolution Improvement in Ground-Mapping Car-Borne Radar Imaging Systems
Authors: D. Valuyskiy, S. Vityazev, V. Vityazev
DOI: https://doi.org/10.1109/IST48021.2019.9010537
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: The problem of radar imaging is considered in this paper. An automobile is used as the platform on which the radar system is mounted, which imposes special requirements on data acquisition and processing, including the need for motion compensation. A motion compensation technique is proposed in the paper and applied to real-life data. The results demonstrate the efficiency of the suggested processing technique.

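The abstract does not detail the proposed motion compensation technique; as a generic illustration of the kind of processing involved, the sketch below removes the two-way phase error caused by a measured deviation of the platform from its nominal track. This is a standard first-order correction, not the authors' specific algorithm.

```python
# Sketch: first-order motion compensation for radar imaging by removing the
# two-way phase error 4*pi/lambda * delta_r caused by the measured deviation
# delta_r of the platform from its nominal track. Generic technique only.
import numpy as np

def motion_compensate(raw_data, delta_r, wavelength):
    """raw_data: (n_pulses, n_range_bins) complex samples;
    delta_r: (n_pulses,) line-of-sight track deviation in metres."""
    phase_error = 4.0 * np.pi * delta_r / wavelength
    correction = np.exp(-1j * phase_error)[:, None]    # one factor per pulse
    return raw_data * correction

pulses = np.ones((128, 256), dtype=complex)             # placeholder raw data
deviation = 0.01 * np.sin(np.linspace(0, 2 * np.pi, 128))
compensated = motion_compensate(pulses, deviation, wavelength=0.0039)  # example: ~77 GHz
```
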
Title: Accurate motion plan of ultrasonic linear array transducer for non-destructive 3D endoscopy of iconic cultural heritage assets
Authors: G. Karagiannis, S. Amanatiadis, Evdoxios Mimis
DOI: https://doi.org/10.1109/IST48021.2019.9010447
Published in: 2019 IEEE International Conference on Imaging Systems and Techniques (IST), December 2019
Abstract: In the present work, an instrument that combines ultrasonic tomography with accurate motion control is presented. The acquisition of high-response tomographic images is achieved using transducers in a linear array through efficient control of their phase characteristics in both transmit and receive modes. Moreover, an accurate mechanical adaptation is designed to move the ultrasonic probe at a constant velocity, and a sequence of tomographic images is recorded. The 3D endoscopic characteristics of the measured object are then extracted via reconstruction of the 3D volume. Finally, the functionality of the proposed device is validated in a test case that simulates a realistic mosaic covered by mortar.

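Given the constant-velocity scan described above, consecutive tomographic frames can be stacked into a 3D volume with an out-of-plane slice spacing of velocity divided by frame rate. The sketch below illustrates that reconstruction step; the function, velocity, and frame rate are illustrative, not the authors' software.

```python
# Sketch: stacking 2D ultrasonic tomographic frames acquired while the linear
# array moves at constant velocity into a 3D volume; the out-of-plane voxel
# pitch is velocity / frame_rate. Illustrative only.
import numpy as np

def stack_to_volume(frames, velocity_mm_s, frame_rate_hz):
    """frames: (n_frames, H, W) array of tomographic slices."""
    slice_pitch_mm = velocity_mm_s / frame_rate_hz    # spacing along the scan axis
    volume = np.asarray(frames)                       # (n_frames, H, W)
    return volume, slice_pitch_mm

frames = np.random.rand(200, 128, 128)                # placeholder slice stack
volume, pitch = stack_to_volume(frames, velocity_mm_s=2.0, frame_rate_hz=20.0)
print(volume.shape, f"slice pitch = {pitch:.2f} mm")
```
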