{"title":"Breast Cancer Detection Using Random Forest Classifier","authors":"Pavithra Suchindran, R. Vanithamani, J. Justin","doi":"10.4018/978-1-7998-6690-9.ch005","DOIUrl":"https://doi.org/10.4018/978-1-7998-6690-9.ch005","url":null,"abstract":"Breast cancer is the second most prevalent type of cancer among women. Breast ultrasound (BUS) imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities in the breast. To improve diagnostic accuracy, a computer-aided diagnosis (CAD) system is helpful for breast cancer detection and classification. A CAD system normally consists of four stages: pre-processing, segmentation, feature extraction, and classification. In this chapter, the pre-processing step includes speckle noise removal using a speckle reducing anisotropic diffusion (SRAD) filter. The goal of segmentation is to locate the region of interest (ROI); active contour-based segmentation and fuzzy C-means (FCM) segmentation are used in this work. Texture features are extracted and fed to a classifier to categorize the images as normal, benign, or malignant. Three classifiers, namely the k-nearest neighbors (KNN) algorithm, the decision tree algorithm, and the random forest classifier, are used, and their performance is compared based on classification accuracy.","PeriodicalId":346792,"journal":{"name":"Handbook of Research on Deep Learning-Based Image Analysis Under Constrained and Unconstrained Environments","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121289517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
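The classification stage described in this abstract (comparing KNN, decision tree, and random forest on extracted texture features) can be sketched with scikit-learn. This is a minimal illustration only: the synthetic Gaussian "texture feature" vectors and class labels below are stand-ins, not the chapter's actual BUS image features or dataset.

```python
# Sketch of the classifier-comparison step: KNN vs. decision tree vs.
# random forest, evaluated by accuracy. Features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Three classes standing in for normal / benign / malignant,
# 100 samples each, 8 hypothetical texture features per sample.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 8)) for c in range(3)])
y = np.repeat([0, 1, 2], 100)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
accuracies = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    accuracies[name] = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {accuracies[name]:.3f}")
```

On real data, the same loop would be fed the texture features extracted from the segmented ROI rather than synthetic vectors.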
{"title":"Facial Expression","authors":"E. Lyakso, O. Frolova, Yuri N. Matveev","doi":"10.4018/978-1-7998-6690-9.ch014","DOIUrl":"https://doi.org/10.4018/978-1-7998-6690-9.ch014","url":null,"abstract":"The results of five psychophysiological studies using automatic facial expression coding in adults and children (aged 4 to 16 years) with FaceReader software (version 8.0) are presented. Model situations of reading emotional text and pronouncing emotional phrases and words, natural interaction in mother-child dyads, interaction between a child and an adult (experimenter), and interaction of children with each other were analyzed. The difficulties of applying the program to the behavior of children in natural conditions and to the emotional facial expressions of children with autism spectrum disorders and children with Down syndrome are described, and ways to solve them are outlined.","PeriodicalId":346792,"journal":{"name":"Handbook of Research on Deep Learning-Based Image Analysis Under Constrained and Unconstrained Environments","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133592946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Analysis of VGG19 Deep Learning Network Based Brain Image Fusion","authors":"Vijayarajan Rajangam, S. N., K. R., K. Mallikarjuna","doi":"10.4018/978-1-7998-6690-9.ch008","DOIUrl":"https://doi.org/10.4018/978-1-7998-6690-9.ch008","url":null,"abstract":"Multimodal imaging systems assist medical practitioners with cost-effective diagnostic methods in clinical pathologies. Multimodal imaging of the same organ or region of interest reveals complementary anatomical and functional details. Multimodal image fusion algorithms integrate complementary image details into a composite image, which reduces the clinician's time for effective diagnosis. Deep learning networks play a role in feature extraction for the fusion of multimodal images. This chapter analyzes the performance of a pre-trained VGG19 deep learning network that extracts features from the base and detail layers of the source images to construct a weight map for fusing the source image details. Maximum and averaging fusion rules are adopted for base layer fusion. The performance of the fusion algorithm for multimodal medical image fusion is analyzed by peak signal-to-noise ratio, structural similarity index, fusion factor, and figure of merit. Performance analysis of the fusion algorithms is also carried out for source images in the presence of impulse and Gaussian noise.","PeriodicalId":346792,"journal":{"name":"Handbook of Research on Deep Learning-Based Image Analysis Under Constrained and Unconstrained Environments","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116415465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
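The two-scale fusion scheme in this abstract (base layer fused by an averaging rule, detail layer fused separately) can be sketched without the VGG19 component. In this illustration the base layer is a local mean, the detail layer is the residual, and a simple maximum-absolute-value rule stands in for the VGG19-derived weight map; the images, filter size, and PSNR helper are all hypothetical stand-ins, not the chapter's method.

```python
# Sketch of two-scale image fusion: split each source into base (local
# mean) and detail (residual) layers, average the bases, and take the
# max-absolute detail. A max-abs rule replaces the VGG19 weight map here.
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fuse(img_a, img_b, size=7):
    base_a, base_b = uniform_filter(img_a, size), uniform_filter(img_b, size)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)  # averaging rule for base layers
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)
    return fused_base + fused_detail

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
a = rng.random((64, 64))  # stand-ins for two co-registered modalities
b = rng.random((64, 64))
fused = two_scale_fuse(a, b)
print(psnr(a, fused), psnr(b, fused))
```

PSNR against each source is one of the four metrics the chapter reports; structural similarity, fusion factor, and figure of merit would be computed analogously on the fused result.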