{"title":"Ultrasound Nerve Segmentation of Brachial Plexus Based on Optimized ResU-Net","authors":"Rui Wang, Hui Shen, Meng Zhou","doi":"10.1109/IST48021.2019.9010317","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010317","url":null,"abstract":"Accurate ultrasound nerve segmentation has attracted wide attention, as it helps ensure the efficacy of regional anesthesia, reduce surgical injury, and speed up postoperative recovery. However, because of the high noise and low contrast of ultrasound images, accurate nerve segmentation is difficult even with U-Net, one of the mainstream networks in medical image segmentation, which has achieved remarkable results in Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Optical Coherence Tomography (OCT). To address this problem, an optimized and effective ResU-Net variant is proposed to segment the ultrasound nerve of the brachial plexus. In the proposed method, median filtering is first employed to reduce speckle noise, the spatially correlated multiplicative noise inherent in ultrasound images. The Dense Atrous Convolution (DAC) and Residual Multi-kernel Pooling (RMP) modules are then integrated into the ResU-Net architecture to reduce the loss of spatial information and improve robustness to different scales, thus boosting segmentation accuracy. 
Our full pipeline improves segmentation performance on the public NSD dataset, achieving a Dice coefficient of 0.7093, about 3% higher than that of state-of-the-art models.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127643602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
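The Dice coefficient reported above measures the overlap between a predicted segmentation mask and the ground truth. A minimal pure-Python sketch (the masks below are illustrative toy data, not from the NSD dataset):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for flattened binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 1-D masks: 2 overlapping foreground pixels out of 3 in each mask.
pred   = [0, 1, 1, 1, 0, 0]
target = [0, 1, 1, 0, 0, 1]
print(round(dice_coefficient(pred, target), 4))  # 0.6667
```

A Dice of 0.7093 thus means the predicted nerve region and the annotated region share roughly 71% of their combined area.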
{"title":"Detecting stabbing by a deep learning method from surveillance videos","authors":"Chunguang Liu, Peng Liu, Chuanxin Xiao","doi":"10.1109/IST48021.2019.9010206","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010206","url":null,"abstract":"Stabbing is one of the crimes threatening public safety. Once it happens, it can cause immeasurable consequences in a short period of time. To strengthen public safety supervision and prevent stabbing incidents, tool detection technology can play a vital role. The existing methods for tool detection, metal detectors and X-ray scanners, are applicable to stations, airports and other specific areas, but are not feasible in crowded public areas. This paper proposes using a deep learning method with high precision and speed for tool detection and, after comparison, chooses the YOLOv3 method for tool detection in public areas. To validate the performance of the YOLOv3 method, a total of 1,738 images of different tools were acquired by simulating real scenes and by web crawling. The samples were then amplified with image enhancement techniques, and a dataset of 21,000 images was filtered. To improve tool detection accuracy, this paper proposes a method that combines hand features and tool features into new joint features. 
Experiments show that the detection accuracy is improved by 2.57% with these new features.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132263224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
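The sample-amplification step described above can be illustrated with two common image enhancement operations. This is a hedged sketch using nested lists as grayscale images, not the authors' actual augmentation pipeline:

```python
def hflip(img):
    """Horizontal flip: mirror each pixel row."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Shift pixel intensities by delta, clamped to [0, 255]."""
    return [[max(0, min(255, px + delta)) for px in row] for row in img]

img = [[10, 20], [30, 40]]
print(hflip(img))                   # [[20, 10], [40, 30]]
print(adjust_brightness(img, 250))  # [[255, 255], [255, 255]]
```

Applying several such transforms to each of the 1,738 source images is how a dataset on the order of 21,000 images can be produced before filtering.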
{"title":"High-Speed Three-Dimensional Glioma Morphology Imaging and Grade Discrimination using Micro-Optical Coherence Tomography","authors":"Xiaojun Yu, Xingduo Wang, Chi Hu, Shiqi Fan, Yong Guo, Linbo Liu","doi":"10.1109/IST48021.2019.9010090","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010090","url":null,"abstract":"Glioma is one of the most common types of central nervous system (CNS) tumor, with an average survival of 1.5 to 2 years. One way to improve patient survival is to identify and excise the glioma precisely and completely before subsequent treatment. Due to the system complexity and limited performance of existing diagnostic tools, however, identifying glioma is difficult; it is therefore imperative to develop new diagnostic imaging tools that can identify glioma rapidly and reliably. In this study, we construct a free-space micro-optical coherence tomography (μOCT) system that achieves a spatial resolution of ~2.0 μm for glioma imaging, and evaluate its capability for identifying the cellular/sub-cellular structures of glioma lesions. Imaging results demonstrate that the μOCT system is not only able to acquire cellular/sub-cellular glioma microstructure images, but also to differentiate between low-grade and high-grade glioma lesions based on three-dimensional (3D) tissue morphology. The low system complexity enables μOCT to be integrated into a surgical pick tip and utilized as an intraoperative diagnostic tool, while its high-resolution imaging capability could help neurosurgeons identify the interfaces between glioma lesions and non-cancerous tissues quickly and reliably, and thus make appropriate treatment decisions. 
Such results convincingly demonstrate the potential of μOCT for neurosurgery in clinical practice.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":" 24","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132012245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
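For context, the axial resolution of an OCT system with a Gaussian source spectrum is set by the center wavelength and spectral bandwidth. The formula below is the standard free-space relation; the numerical values are illustrative assumptions showing how a resolution near 2 μm can arise, not parameters reported for this μOCT system:

```latex
\delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda},
\qquad\text{e.g. }\lambda_0 = 800\,\mathrm{nm},\ \Delta\lambda = 125\,\mathrm{nm}
\;\Rightarrow\; \delta z \approx 2.3\,\mu\mathrm{m}.
```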
{"title":"Observed Phenomenon in Grouting Duct Detection Progress using Capacitive Sensing Technique","authors":"Nan Li, Hangben Du, Mingchen Cao, Lina Wang, Xiaojun Yu","doi":"10.1109/ist48021.2019.9010594","DOIUrl":"https://doi.org/10.1109/ist48021.2019.9010594","url":null,"abstract":"Capacitive sensing has been applied to multiphase-flow measurement, grouting duct inspection and many other fields. Some interesting phenomena were observed during practical detection on grouting ducts, and the aim of this paper is to explain these phenomena and provide guidance on the underlying issues. Eight experimental models were designed to determine the influencing factors, related to the materials inside the grouting duct, that caused the observed phenomena. The experimental results show that three models, water, steel bar-water, and steel bar-water-cement, affect the stability of the measured results to different degrees. A grounded wire on the experimental specimens can also lead to unstable measured values. In addition, an excessive angle between the capacitive plates affects the measured results. These results identify possible causes of the observed phenomena and point out directions for further research.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131987001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
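The strong effect of water observed above is consistent with an idealized parallel-plate model of capacitive sensing: capacitance scales with the relative permittivity of the material between the plates. The geometry below is hypothetical and the permittivities (air ≈ 1, water ≈ 80) are textbook values used for illustration, not measurements from this study:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """C = eps_r * eps0 * A / d for an ideal parallel-plate capacitor."""
    return eps_r * EPS0 * area_m2 / gap_m

c_air   = parallel_plate_capacitance(1.0,  1e-4, 1e-3)   # air-filled gap
c_water = parallel_plate_capacitance(80.0, 1e-4, 1e-3)   # water-filled gap
print(c_water / c_air)  # ~80x: why water strongly perturbs the readings
```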
{"title":"Millimeter Wave Imaging of Surface Defects and Corrosion under Paint using V-band Reflectometer","authors":"Mohammed Saif ur Rahman, M. Abou-Khousa","doi":"10.1109/IST48021.2019.9010500","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010500","url":null,"abstract":"Periodic inspection of critical metal structures used in numerous industries worldwide is imperative to guarantee structural integrity. These structures are vulnerable to near-surface defects and corrosion, which can cause severe repercussions if left unattended. An effective, reliable and real-time technique for evaluating anomalies in these metallic structures is therefore urgently needed. In this paper, millimeter wave imaging of surface defects of practical importance, such as holes and notches, and of corrosion under paint on metallic structures is presented. A reflectometer operating in V-band (50–75 GHz) is employed for imaging, and the images produced by the millimeter wave system are benchmarked against the popular Phased Array Ultrasonic Testing (PAUT). It is demonstrated that the millimeter wave imaging system is effective in detecting surface defects as well as corrosion under paint, and renders high-quality images comparable to PAUT.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"59 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114011915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-spectral Imaging To Detect Artificial Ripening Of Banana: A Comprehensive Empirical Study","authors":"N. Vetrekar, Raghavendra Ramachandra, K. Raja, R. Gad","doi":"10.1109/IST48021.2019.9010525","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010525","url":null,"abstract":"Naturally ripened fruits contain essential nutrients, but with increasing demand and commercial incentives, the artificial ripening of fruits has recently become common in the market chain. Compared to natural ripening, artificial ripening significantly reduces fruit quality while increasing health risks. In particular, Calcium Carbide (CaC2), which has carcinogenic properties, is consistently used as a ripening agent. Considering the significance of this problem, in this paper we present a multi-spectral imaging approach that acquires eight narrow spectral bands across the VIS and NIR wavelength range to detect artificially ripened bananas. For this study, we introduce our newly constructed multi-spectral image dataset of naturally and artificially ripened banana samples. 
Further, an extensive set of experiments on our large-scale database of 5760 banana samples yields an average classification accuracy of 94.66%, demonstrating the value of multi-spectral imaging for detecting artificially ripened fruit.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125174856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
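Classification over 8-band spectral features can be sketched with a simple nearest-centroid rule. The reflectance vectors below are synthetic toy values, and the paper's actual classifier and features may well differ; this only illustrates the shape of the problem:

```python
def centroid(samples):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, centroids):
    """Assign x to the label whose centroid is nearest (squared Euclidean)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: d2(x, centroids[lab]))

# Toy 8-band reflectance vectors (synthetic, for illustration only).
natural    = [[0.20, 0.30, 0.50, 0.60, 0.70, 0.70, 0.60, 0.50],
              [0.25, 0.35, 0.50, 0.65, 0.70, 0.65, 0.60, 0.55]]
artificial = [[0.40, 0.50, 0.60, 0.60, 0.50, 0.40, 0.30, 0.20],
              [0.45, 0.50, 0.65, 0.60, 0.50, 0.45, 0.30, 0.25]]
cents = {"natural": centroid(natural), "artificial": centroid(artificial)}
print(classify([0.40, 0.50, 0.60, 0.60, 0.50, 0.40, 0.30, 0.25], cents))  # artificial
```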
{"title":"A Cognitive Radar for Classification of Resident Space Objects (RSO) operating on Polarimetric Retina Vision Sensors and Deep Learning","authors":"Martin Nowak, Alexandros E. Tzikas, G. Giakos, Anthony Beninati, Nicolas Douard, Joe Lanzi, Natalie Lanzi, Ridwan Hussain, Yi Wang, S. Shrestha, C. Bolakis","doi":"10.1109/IST48021.2019.9010272","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010272","url":null,"abstract":"A novel cognitive radar, operating on Polarimetric Dynamic Vision Sensor (pDVS) and deep learning principles and aimed at discriminating moving targets based on their motion patterns, is presented. The system consists of an asynchronous event-based neuromorphic imaging sensor coupled with polarization filters, which enable better discrimination; a spinning light-modulating wheel, operating at varying angular frequency, is placed in front of a static object. A pipeline has been designed and implemented to train a neural network for motion pattern classification using event data. The pipeline first extracts features using a pre-trained convolutional neural network and then feeds these features into a single-layer long short-term memory (LSTM) recurrent neural network. The outcome of this study indicates that deep learning combined with pDVS principles is well suited to accurately classifying targets by their motion patterns using a limited set of data, thus opening the way to many innovative bioinspired vision applications where feature extraction is complex, and to precognitive vision applications for the detection of salient features. 
The proposed cognitive radar would be able to operate at high speed and low bandwidth, while maintaining low storage requirements, low power consumption, and high processing speed.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127717469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
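Before CNN feature extraction, asynchronous DVS events are typically aggregated into frame-like tensors. A hedged sketch of such time-window binning follows; the event tuple layout `(x, y, t_us, polarity)` and the window size are assumptions for illustration, not the authors' exact preprocessing:

```python
def events_to_frames(events, window_us, width, height):
    """Bin DVS events (x, y, t_us, polarity) into signed count frames,
    one frame per fixed time window, ordered by window start time."""
    frames = {}
    for x, y, t, p in events:
        k = t // window_us  # index of the time window this event falls in
        f = frames.setdefault(k, [[0] * width for _ in range(height)])
        f[y][x] += 1 if p else -1  # ON events add, OFF events subtract
    return [frames[k] for k in sorted(frames)]

events = [(0, 0, 10, True), (1, 0, 20, True), (0, 1, 1200, False)]
frames = events_to_frames(events, window_us=1000, width=2, height=2)
print(len(frames))  # 2 time windows
```

Each resulting frame can then be fed to the pre-trained CNN, and the sequence of per-frame features to the LSTM.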
{"title":"Non-Invasive Discrimination of Colorectal Adenomas and Non-Neoplastic Polyps with Micro-Optical Coherence Tomography Imaging","authors":"Xiaojun Yu, Xingduo Wang, Ting Yang, Nan Li, Qianshan Ding, Linbo Liu","doi":"10.1109/IST48021.2019.9010369","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010369","url":null,"abstract":"As one of the most common cancers and a leading cause of cancer-related mortality worldwide, colorectal cancer (CRC) imposes a huge burden on both patients and social healthcare systems. Although colonoscopy screening is widely utilized for CRC diagnosis, the commonly adopted “resect and discard” strategy for treating diminutive colorectal polyps still risks missing unrecognized, potentially malignant lesions such as adenomatous polyps. In this study, we explore and validate the feasibility of micro-optical coherence tomography (μOCT) as an intraoperative imaging tool for performing optical biopsy in gastroenterology, and thus for improving the diagnostic accuracy of colorectal lesions. Specifically, a lab-customized μOCT system achieving a spatial resolution of ~2.0 μm was first built and then applied to acquire both cross-sectional and 3D images of fresh tissue samples obtained from patients with colorectal polyps or colorectal cancer who had just received endoscopic therapy or laparoscopic surgery. Finally, the acquired images were compared with their corresponding HE sections to discriminate colorectal adenomas from non-neoplastic polyps. A new diagnostic strategy was also established to determine the sensitivity, specificity and accuracy of using μOCT to differentiate between benign polyps and adenomas. Results show that the μOCT system clearly illustrates the cellular/sub-cellular microstructural differences between colorectal adenomas and non-neoplastic polyps in both cross-sectional and en face images. 
With the new diagnostic criteria applied to all 58 polyp cases, the diagnostic accuracy, sensitivity and specificity reach 94.83%, 96.88% and 92.31%, with 95% confidence intervals of (85.30%−98.79%), (82.89%−99.99%) and (74.74%−98.98%), respectively. Such satisfactory results demonstrate the potential of μOCT as an intraoperative diagnostic imaging tool for endoscopists to perform “optical biopsy” and thus make appropriate decisions in clinical practice.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121072155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
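The reported accuracy, sensitivity and specificity are consistent with a 2×2 confusion table of 31 true positives, 1 false negative, 24 true negatives and 2 false positives over the 58 cases (these counts are inferred here from the percentages; the abstract does not state them):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic-test metrics from confusion-table counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fn + tn + fp),  # correct / all
        "sensitivity": tp / (tp + fn),                   # true positive rate
        "specificity": tn / (tn + fp),                   # true negative rate
    }

m = diagnostic_metrics(tp=31, fn=1, tn=24, fp=2)
print({k: round(100 * v, 2) for k, v in m.items()})
# {'accuracy': 94.83, 'sensitivity': 96.88, 'specificity': 92.31}
```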
{"title":"Chronic Wound Healing Assessment System Based on Color and Texture Analysis","authors":"M. Elmogy, A. Khalil, A. Shalaby, Ali H. Mahmoud, M. Ghazal, A. El-Baz","doi":"10.1109/IST48021.2019.9010586","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010586","url":null,"abstract":"The detection and diagnosis of chronic wounds (CWs) is a significant social and economic problem, especially for elderly and bedridden patients. The challenge stems from the unpredictable healing course of such wounds. Moreover, the cost of CW diagnosis and treatment is very high compared with other types of diseases. This paper presents a computer-aided diagnosis (CAD) system for assessing CW healing. The proposed CAD system is based on extracting various significant features to help detect different tissue types across various CW categories. The system extracts different color and texture features and then selects the most significant ones by applying the non-negative matrix factorization (NMF) technique. The resulting features are fused and supplied to the gradient boosted trees (GBT) technique to distinguish different types of tissues. After that, the healing percentage for each type of CW tissue is calculated. Finally, the proposed CAD system assesses the healing status of the CW. We trained and tested the proposed CAD system on 341 images from the Medetec CW dataset, achieving on average 94% accuracy. 
These results exceed those of all tested state-of-the-art techniques, indicating a promising approach.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116832782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
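The healing-percentage step can be sketched as a per-tissue pixel tally over the classified wound region. The tissue label names below are illustrative, not the Medetec dataset's taxonomy:

```python
from collections import Counter

def tissue_percentages(labels):
    """Percentage of each tissue type among classified wound-region pixels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {lab: 100.0 * c / total for lab, c in counts.items()}

# Hypothetical per-pixel tissue labels inside one segmented wound.
labels = ["granulation"] * 6 + ["slough"] * 3 + ["necrotic"] * 1
print(tissue_percentages(labels))
# {'granulation': 60.0, 'slough': 30.0, 'necrotic': 10.0}
```

Tracking these percentages across visits (e.g. granulation rising, necrotic falling) is one plausible way a system like this could score healing status.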
{"title":"A Parallel Binocular System for 3D Pose Measurement by a Single PTZ Camera","authors":"Rui Wang, Ran Huang, Zi-Hong Li","doi":"10.1109/IST48021.2019.9010458","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010458","url":null,"abstract":"A novel universal parallel binocular (UPB) system, formed by precisely translating a single PTZ (pan-tilt-zoom) camera along a linear stage, is presented to achieve accurate pose (position and attitude) measurement in computer vision applications. Unlike a traditional parallel binocular system, the optical axes of the UPB are not necessarily perpendicular to its baseline, so it can measure the 3D pose of objects over a wider view. The main sources of error in the 3D measurement process using the UPB system are first analyzed. The online calibration of intrinsic parameters, including the radial lens distortion coefficient of the PTZ camera, without any calibration target is then introduced. Moreover, for robust PTZ camera online calibration and accurate 3D pose measurement with the classic eight-point algorithm, we develop an improved descriptor named CSCD-SURF (Circular Coordinate Combining Shape-Color Descriptor Under Distortion Based SURF) to extract matching points. Experimental results are included to show the effectiveness and accuracy of the proposed UPB system.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"125 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116436115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
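In a rectified parallel binocular configuration such as the one the UPB generalizes, depth follows from triangulation over the camera's translation baseline. The focal length, baseline and disparity below are hypothetical values for illustration, not parameters from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified parallel stereo pair:
    focal length f in pixels, baseline B in meters, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical: 1000 px focal length, 0.2 m camera translation, 25 px disparity.
print(depth_from_disparity(1000.0, 0.2, 25.0))  # 8.0 (meters)
```

Because the two views come from one physical camera moved between shots, the intrinsics are shared, which is part of what makes the target-free online calibration described above tractable.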