{"title":"A Novel Automated Classification and Segmentation for COVID-19 using 3D CT Scans","authors":"Shiyi Wang, Guang Yang","doi":"10.1109/IPAS55744.2022.10052819","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052819","url":null,"abstract":"Medical image classification and segmentation based on deep learning (DL) are emergency research topics for diagnosing variant viruses of the current COVID-19 situation. In COVID-19 computed tomography (CT) images of the lungs, ground glass turbidity is the most common finding that requires specialist diagnosis. Based on this situation, some researchers propose the relevant DL models which can replace professional diagnostic specialists in clinics when lacking expertise. However, although DL methods have a stunning performance in medical image processing, the limited datasets can be a challenge in developing the accuracy of diagnosis at the human level. In addition, deep learning algorithms face the challenge of classifying and segmenting medical images in three or even multiple dimensions and maintaining high accuracy rates. Consequently, with a guaranteed high level of accuracy, our model can classify the patients' CT images into three types: Normal, Pneumonia and COVID. Subsequently, two datasets are used for segmentation, one of the datasets even has only a limited amount of data (20 cases). Our system combined the classification model and the segmentation model together, a fully integrated diagnostic model was built on the basis of ResNet50 and 3D U-Net algorithm. By feeding with different datasets, the COVID image segmentation of the infected area will be carried out according to classification results. Our model achieves 94.52% accuracy in the classification of lung lesions by 3 types: COVID, Pneumonia and Normal. For 2 labels (ground truth, lung lesions) segmentation, the model gets 99.57% of accuracy, 0.2191 of train loss and $0.78pm 0.03$ of MeanDice±Std, while the 4 labels (ground truth, left lung, right lung, lung lesions) segmentation achieves 98.89% of accuracy, 0.1132 of train loss and $0.83pm 0.13$ of MeanDice±Std. For future medical use, embedding the model into the medical facilities might be an efficient way of assisting or substituting doctors with diagnoses, therefore, a broader range of the problem of variant viruses in the COVID-19 situation may also be successfully solved.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"Five 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131338943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Plenary Speakers","authors":"Sanna Loppi, Jennifer A. Frye, Jacob C. Zbesko, H. Morrison, Marco, Tavera-Garcia, Frankie G. Garcia, N. Scholpa, R. Schnellmann, K. Doyle","doi":"10.1109/IPAS55744.2022.10053053","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10053053","url":null,"abstract":"In this talk, we will discuss how Video Analytics can be applied to human monitoring using as input a video stream. Existing work has either focused on simple activities in real-life scenarios","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"278 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114943637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fractional Vegetation Cover Estimation using Hough Lines and Linear Iterative Clustering","authors":"Venkat Margapuri, Trevor W. Rife, Chaney Courtney, B. Schlautman, Kai Zhao, Michael L. Neilsen","doi":"10.1109/IPAS55744.2022.10052996","DOIUrl":"https://doi.org/10.1109/IPAS55744.2022.10052996","url":null,"abstract":"A common requirement of plant breeding programs across the country is companion planting – growing different species of plants in close proximity so they can mutually benefit each other. However, the determination of companion plants requires meticulous monitoring of plant growth. The technique of ocular monitoring is often laborious and error prone. The availability of image processing techniques can be used to address the challenge of plant growth monitoring and provide robust solutions that assist plant scientists to identify companion plants. This paper presents a new image processing algorithm to determine the amount of vegetation cover present in a given area, called fractional vegetation cover. The proposed technique draws inspiration from the trusted Daubenmire method for vegetation cover estimation and expands upon it. Briefly, the idea is to estimate vegetation cover from images containing multiple rows of plant species growing in close proximity separated by a multi-segment PVC frame of known size. The proposed algorithm applies a Hough Transform and Simple Linear Iterative Clustering (SLIC) to estimate the amount of vegetation cover within each segment of the PVC frame. When applied as a longitudinal study on a 177 field image dataset, this analysis provides crucial insights into plant growth. As a means of comparison, the proposed algorithm is compared with SamplePoint and Canopeo, two trusted applications used for vegetation cover estimation. The comparison shows a 99% similarity with both SamplePoint and Canopeo demonstrating the accuracy and feasibility of the algorithm for fractional vegetation cover estimation.","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129792857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book of Abstract","authors":"Peter Onuk, F. Melcher","doi":"10.1109/cogart.2009.5167213","DOIUrl":"https://doi.org/10.1109/cogart.2009.5167213","url":null,"abstract":"","PeriodicalId":322228,"journal":{"name":"2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127590751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}