2019 IEEE International Conference on Imaging Systems and Techniques (IST) — Latest Articles

Semi-Automated Image Analysis Methodology to Investigate Intracellular Heterogeneity in Immunohistochemical Stained Sections
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010370
R. Hamoudi, S. Hammoudeh, Arab M. Hammoudeh, S. Rawat
Abstract: The discovery of tissue heterogeneity revolutionized existing knowledge of cellular, molecular, and pathophysiological mechanisms in biomedicine. Basic science investigations were therefore redirected to encompass observation at the classical and quantum biology levels. Various approaches have been developed to investigate and capture tissue heterogeneity; however, these approaches are costly and not compatible with all sample types. In this paper, we propose an approach to quantify heterogeneous cellular populations by combining histology and image processing techniques. Images of immunohistochemically stained sections are processed through color binning of DAB-stained cells (brown) and non-stained cells (blue) to select cellular clusters expressing biomarkers of interest. The images are then converted to binary format by modifying the grayscale threshold (~60%), and the cell count is extrapolated from the binary images using the particle analysis tool in ImageJ. Applied to quantify progesterone receptor expression levels in a breast cancer cell line sample, the approach produced results that closely matched manual counting. It thus adds quantitative measures to the qualitative observation of subcellular target expression.
Citations: 1
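The pipeline above reduces to two operations: threshold the grayscale image to binary, then count connected "particles". A minimal pure-Python sketch of that idea (a toy intensity grid, not the authors' actual ImageJ macro):

```python
# Sketch of threshold-then-count, the core of the ImageJ particle analysis step.
# Illustrative only: real IHC images need color deconvolution first.

def binarize(img, threshold=0.6):
    """Map pixels at or above the threshold (stained) to 1, others to 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in img]

def count_particles(binary):
    """Count 4-connected components of 1-pixels via iterative flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Two separated stained clusters in a toy 4x5 intensity image.
img = [
    [0.9, 0.8, 0.1, 0.0, 0.7],
    [0.7, 0.1, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.0, 0.8],
    [0.1, 0.2, 0.0, 0.0, 0.0],
]
print(count_particles(binarize(img)))  # two clusters -> 2
```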
A New System for Lung Cancer Diagnosis based on the Integration of Global and Local CT Features
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010466
A. Shaffie, A. Soliman, H. A. Khalifeh, F. Taher, M. Ghazal, N. Dunlap, Adel Said Elmaghraby, R. Keynton, A. El-Baz
Abstract: Lung cancer is the leading cause of cancer death for both men and women worldwide, which is why building systems for early diagnosis with machine learning algorithms and minimal user intervention is hugely important. In this manuscript, a new system for lung nodule diagnosis using features extracted from a single computed tomography (CT) scan is presented. The system integrates global and local features to give an indication of the nodule's prior growth rate, the main point in the diagnosis of pulmonary nodules. A 3D adjustable local binary pattern and some basic geometric features are used to extract the nodule's global features, while the local features are extracted using 3D convolutional neural networks (3D-CNN), chosen for their ability to exploit the spatial correlation of the input data efficiently. Finally, all of these features are integrated using an autoencoder to give a final diagnosis of the lung nodule as benign or malignant. The system was evaluated on 727 nodules extracted from the Lung Image Database Consortium (LIDC) dataset. The proposed system's diagnostic accuracy, sensitivity, and specificity were 92.20%, 93.55%, and 91.20%, respectively. The framework demonstrated its promise as a valuable tool for lung cancer detection, as evidenced by its high accuracy.
Citations: 1
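The reported accuracy, sensitivity, and specificity follow directly from the confusion counts on the evaluated nodules. A sketch of those standard definitions (the counts below are illustrative, not the paper's actual confusion matrix):

```python
# Standard binary-diagnosis metrics, as percentages.
# tp/tn/fp/fn here are made-up example counts, not the paper's results.

def diagnosis_metrics(tp, tn, fp, fn):
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)   # malignant nodules correctly flagged
    specificity = 100.0 * tn / (tn + fp)   # benign nodules correctly cleared
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnosis_metrics(tp=87, tn=95, fp=5, fn=13)
print(f"acc={acc:.1f}% sens={sens:.1f}% spec={spec:.1f}%")  # acc=91.0% sens=87.0% spec=95.0%
```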
Welcome Message from the Chairman
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010565
J. Scharcanski
Abstract: On behalf of the Technical and Local Committee of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST 2019) and the IEEE International School on Imaging, I welcome you to Abu Dhabi, UAE.
Citations: 0
An Efficient Human Activity Recognition Framework Based on Wearable IMU Wrist Sensors
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010115
A. Ayman, Omneya Attallah, H. Shaban
Abstract: Human Activity Recognition (HAR) using wearable sensors has lately received extensive research attention for its value in evaluating human health performance across several domains. HAR methods can be embedded in a smart home healthcare model to assist patients and enhance their rehabilitation. Several types of sensors are currently used for HAR, among them wearable wrist sensors, which can deliver valuable information about a patient's degree of ability. Recent studies have proposed HAR using Machine Learning (ML) techniques with non-invasive wearable wrist sensors such as accelerometers, magnetometers, and gyroscopes. In this paper, a novel sensor-fusion framework for HAR using ML is proposed, together with a feature selection approach that selects useful features based on Random Forest (RF), Bagged Decision Tree (DT), and Support Vector Machine (SVM) classifiers. The framework is investigated on two publicly available datasets. Numerical results show that our sensor-fusion framework outperforms other methods proposed in the literature.
Citations: 27
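The paper selects features using classifier-based importance (RF, DT, SVM); as a simpler stand-in for the same idea, this sketch ranks features by a Fisher-style separation score (between-class mean gap over within-class spread) and keeps the top k. The data below is a toy two-activity example, not the paper's datasets:

```python
# Filter-style feature selection sketch. The paper uses RF/DT/SVM-based
# selection; the Fisher score here is a deliberately simpler substitute.

def fisher_score(xs_class0, xs_class1):
    """|mean gap| / (std0 + std1) for one feature; larger = more discriminative."""
    m0 = sum(xs_class0) / len(xs_class0)
    m1 = sum(xs_class1) / len(xs_class1)
    s0 = (sum((x - m0) ** 2 for x in xs_class0) / len(xs_class0)) ** 0.5
    s1 = (sum((x - m1) ** 2 for x in xs_class1) / len(xs_class1)) ** 0.5
    return abs(m0 - m1) / (s0 + s1 + 1e-9)

def select_top_k(features, labels, k):
    """features: list of samples, each a list of feature values; labels: 0/1."""
    n_feat = len(features[0])
    scores = []
    for j in range(n_feat):
        c0 = [f[j] for f, y in zip(features, labels) if y == 0]
        c1 = [f[j] for f, y in zip(features, labels) if y == 1]
        scores.append((fisher_score(c0, c1), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Feature 1 separates the two activities cleanly; feature 0 is noise.
X = [[0.1, 5.0], [0.9, 5.2], [0.4, 1.0], [0.6, 1.1]]
y = [0, 0, 1, 1]
print(select_top_k(X, y, k=1))  # -> [1]
```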
Extraction of Radiomic Features from Breast DCE-MRI Responds to Pathological Changes in Patients During Neoadjuvant Chemotherapy Treatment
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010068
Priscilla Dinkar Moyya, Mythili Asaithambi, A. K. Ramaniharan
Abstract: Breast cancer is a leading cause of morbidity and mortality worldwide. Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is the most common method of assessing the response to chemotherapy when monitoring breast cancer treatment. Radiomic features obtained from MR images have the potential to reflect tumor biology. In this work, an attempt is made to investigate the clinical potential of breast DCE-MRI derived radiomic features and their response to Neoadjuvant Chemotherapy (NAC). The data used in this study (10 patients with 20 studies: Visit-1 and Visit-2) were obtained from the public-domain Quantitative Imaging Network (QIN) Breast DCE-MRI database. Using the MaZda software, radiomic features were extracted from the whole breast region to quantify pathological variations between Visit-1 and Visit-2. In total, 176 texture and shape features were extracted and analyzed statistically using Student's t-test. Results show that the radiomic features were able to differentiate the variations in tumor biology between Visit-1 and Visit-2 due to NAC. Features such as GeoW2, GeoW3, GeoW4, GeoRs, GeoRc, GeoRm, the 50th percentile of histogram intensity, and Theta1 were found to be statistically significant, with p-values ranging from 0.03 to 0.08. Hence, radiomic features could serve as an adjunct measure reflecting pathological response during NAC, making this study clinically significant.
Citations: 3
Studies on a Video Surveillance System Designed for Deep Learning
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010234
Chunfang Xue, Peng Liu, Weiping Liu
Abstract: This paper proposes a new video surveillance system designed for deep learning. The system uses three steps to turn RTSP streams into pictures for deep learning: it decapsulates the streams, decodes them, then converts the color space and extracts frames. Two decoding paths are available, hardware decoding and software decoding; by first checking the CPU model, the system chooses the better one. The system contains both a CPU and a GPU: the CPU processes the RTSP streams, extracts frames, and handles human-machine interaction, while the GPU runs the deep learning algorithms, so the complex computation does not burden the CPU. The system runs on Linux and exposes a Python interface, so it connects easily with deep learning models. Running on multiple machines, it can process up to 16 stream channels. After 7×24-hour testing on several machines, the system ran continuously without downtime, with a delay of less than 7 seconds.
Citations: 2
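The decode-path choice above is a simple capability dispatch. A minimal sketch of that control flow (the CPU model list, function names, and frame-downsampling policy are all hypothetical placeholders, not the paper's implementation):

```python
# Hypothetical decode-path dispatcher: probe the CPU, prefer hardware
# decoding when supported, and hand a thinned frame stream to the GPU side.

HW_DECODE_CAPABLE = {"i7-8700", "i5-9400", "xeon-e5"}  # illustrative list

def choose_decoder(cpu_model):
    """Pick hardware decoding if the CPU supports it, else software."""
    return "hardware" if cpu_model in HW_DECODE_CAPABLE else "software"

def process_stream(cpu_model, frames):
    """Return the chosen decode path and every other frame for inference."""
    path = choose_decoder(cpu_model)
    extracted = frames[::2]  # downsample frames handed to the deep models
    return path, extracted

path, frames = process_stream("i7-8700", list(range(10)))
print(path, frames)  # hardware [0, 2, 4, 6, 8]
```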
Driver Fatigue Detection with Single EEG Channel Using Transfer Learning
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010483
W. Shalash
Abstract: Decreasing the road accident rate and increasing road safety have long been major concerns, as traffic accidents expose drivers, passengers, and property to danger. Driver fatigue and drowsiness are among the most critical factors affecting road safety, especially on highways. The EEG signal is one of the reliable physiological signals for perceiving a driver's fatigue state, but the need to wear a multi-channel headset limits the adoption of EEG-based systems among drivers. This work proposes a driver fatigue detection system based on transfer learning that relies on only one EEG channel, to increase usability. The system first acquires the signal, filters it in preprocessing, and converts it to a 2D spectrogram. The spectrogram is then classified with AlexNet via transfer learning as either a normal or a fatigued state. The study compares the accuracy of seven EEG channels to select the single most accurate channel for classification. The results show that channels FP1 and T3 are the most effective indicators of driver fatigue, achieving accuracies of 90% and 91%, respectively. Therefore, using only one of these channels with the modified AlexNet CNN model can yield an efficient driver fatigue detection system.
Citations: 17
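The signal-to-spectrogram step is a short-time Fourier transform: the 1D EEG trace becomes a time-by-frequency image the CNN can consume. A naive, stdlib-only sketch (toy window length and signal; real systems use an FFT library with proper windowing and overlap):

```python
# Naive short-time DFT: split the signal into non-overlapping windows and
# compute magnitude per frequency bin. Illustration only, O(n^2) per window.
import math

def stft_magnitudes(signal, win=4):
    """Return a list of per-window DFT magnitude rows (time x frequency)."""
    frames = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # non-negative frequency bins
            re = sum(s * math.cos(-2 * math.pi * k * n / win) for n, s in enumerate(seg))
            im = sum(s * math.sin(-2 * math.pi * k * n / win) for n, s in enumerate(seg))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # this 2D array is the "spectrogram image" for the CNN

spec = stft_magnitudes([0.0, 1.0, 0.0, -1.0] * 2)  # a pure oscillation
print(len(spec), len(spec[0]))  # 2 time frames x 3 frequency bins
```

For the oscillating test signal, the energy lands in the middle frequency bin of each frame, which is exactly the structure the downstream classifier keys on.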
Big Data driven U-Net based Electrical Capacitance Image Reconstruction Algorithm
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010423
Xinmeng Yang, Chaojie Zhao, Bing Chen, Maomao Zhang, Yi Li
Abstract: An efficient electrical capacitance image reconstruction method combining a fully connected neural network with a U-Net structure is put forward, for the first time in the electrical capacitance tomography (ECT) area. The target of ECT image reconstruction can be regarded as an image segmentation problem, which is exactly what the U-Net structure was designed for. In this paper, a Convolutional Neural Network (CNN) based U-Net structure is used to improve the quality of images reconstructed by ECT. First, about 60,000 data samples with different patterns are generated by co-simulation of COMSOL Multiphysics and MATLAB. Then a fully connected neural network (FC) pre-processes these samples to produce initial reconstructions that are not yet accurate enough. Finally, the U-Net structure further processes these pre-trained images and outputs reconstructed images with both high speed and high quality. The robustness, generalization, and practicality of the U-Net structure are demonstrated; as stated in Section 2, the U-Net structure matches ECT image reconstruction problems well due to its autoencoder structure. Preliminary results show that the image reconstruction results obtained by the U-Net network are much better than those of the fully connected neural network algorithm, the traditional linear back projection (LBP) algorithm, and the Landweber iteration method.
Citations: 5
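The classical baseline the network is compared against, linear back projection, simply smears each normalized capacitance reading back through the sensitivity map. A toy sketch with a 2-measurement, 3-pixel sensitivity matrix (real maps come from field simulation, e.g. the COMSOL co-simulation above):

```python
# LBP sketch: image = S^T c, normalized per pixel by the column sums of S.
# S and c below are made-up toy values, not a real ECT sensor model.

def lbp(capacitances, sensitivity):
    """Back-project normalized capacitances through the sensitivity map."""
    n_pix = len(sensitivity[0])
    image = []
    for j in range(n_pix):
        num = sum(sensitivity[i][j] * capacitances[i] for i in range(len(capacitances)))
        den = sum(sensitivity[i][j] for i in range(len(capacitances)))
        image.append(num / den)
    return image

S = [[0.8, 0.2, 0.1],   # measurement 1 mostly "sees" pixel 0
     [0.1, 0.3, 0.9]]   # measurement 2 mostly "sees" pixel 2
c = [1.0, 0.0]          # only measurement 1 responds
img = lbp(c, S)
print(max(range(3), key=lambda j: img[j]))  # brightest pixel index: 0
```

Because LBP only redistributes measurements linearly, its images are blurry; that blur is precisely what the FC-then-U-Net pipeline is trained to sharpen.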
Anomaly Detection Combining Discriminative and Generative Models
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010139
Kyota Higa, Hideaki Sato, Soma Shiraishi, Katsumi Kikuchi, K. Iwamoto
Abstract: This paper proposes a method to accurately detect anomalies in an image by combining features extracted by discriminative and generative models. Automatic anomaly detection is a key factor in reducing the operation costs of visual inspection across a wide range of domains. The proposed method consists of three sub-networks. The first is a convolutional neural network, a discriminative model that extracts features for distinguishing anomalous from normal. The second is a variational autoencoder, a generative model that extracts features representing normal appearance. The third is a neural network that discriminates between anomalous and normal on the basis of the features from the discriminative and generative models. Experiments were conducted using pseudo-anomalous images generated by superimposing anomalies manually extracted from real images. The results show that the proposed method improves the area under the curve by 0.08-0.33 points compared with a conventional method. With this accuracy, automatic visual inspection systems can be implemented to reduce operation costs.
Citations: 1
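The paper's third sub-network learns the fusion; as a minimal stand-in, this sketch blends a classifier score with a reconstruction error by a fixed weight, and computes the AUC metric the paper reports (all scores and the weight are illustrative):

```python
# Fixed-weight fusion sketch (the paper learns this with a neural network)
# plus the rank-based AUC used to evaluate anomaly detectors.

def anomaly_score(disc_score, recon_error, alpha=0.5):
    """Blend classifier confidence with generative reconstruction error."""
    return alpha * disc_score + (1 - alpha) * recon_error

def auc(scores, labels):
    """Probability a random anomaly outscores a random normal (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

disc  = [0.9, 0.2, 0.8, 0.1]   # discriminative scores per image (toy)
recon = [0.7, 0.3, 0.9, 0.2]   # VAE reconstruction errors per image (toy)
y     = [1, 0, 1, 0]           # 1 = anomalous
scores = [anomaly_score(d, r) for d, r in zip(disc, recon)]
print(auc(scores, y))  # 1.0 -> every anomaly ranked above every normal
```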
Classification of different vehicles in traffic using RGB and Depth images: A Fast RCNN Approach
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date: 2019-12-01 DOI: 10.1109/IST48021.2019.9010357
Mohan Kashyap Pargi, B. Setiawan, Y. Kazama
Abstract: The Fast RCNN framework generally utilizes region proposals generated from RGB images for object classification and detection. This paper describes vehicle classification with the Fast RCNN framework using region proposals derived from the combination of depth images and RGB images. We evaluate this architecture on Indian and Thailand vehicle traffic datasets. Overall, we achieve a mAP of 72.91% using RGB region proposals and 73.77% using RGB combined with depth proposals on the Indian dataset, and a mAP of 80.61% using RGB region proposals and 81.25% using RGB combined with depth proposals on the Thailand dataset. Our results show that RGB-plus-depth region proposals perform slightly better than proposals generated from RGB images alone. Furthermore, we provide insights into the per-vehicle AP (Average Precision) on the Thailand dataset and show that effective region proposal generation is crucial for object detection with the Fast RCNN framework.
Citations: 3
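The reported mAP is the mean over vehicle classes of AP, the area under the precision-recall curve of confidence-ranked detections. A minimal AP sketch over a toy ranked list (1 = correct detection, 0 = false positive; not the paper's detections):

```python
# All-points AP sketch: sum precision at each correct detection, divided
# by the number of ground-truth boxes for the class.

def average_precision(ranked_hits, n_ground_truth):
    tp, ap = 0, 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / n_ground_truth

# Detections sorted by confidence; 3 ground-truth vehicles in the class.
ap = average_precision([1, 0, 1, 1, 0], n_ground_truth=3)
print(round(ap, 4))  # 0.8056
```

mAP is then just the mean of this quantity across classes, which is why a stronger proposal generator (e.g. RGB plus depth) lifts the headline number.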