{"title":"Hybrid Pattern Extraction with Deep Learning-Based Heart Disease Diagnosis Using Echocardiogram Images","authors":"Nagashetteppa Biradar","doi":"10.1142/s0219467823500249","DOIUrl":"https://doi.org/10.1142/s0219467823500249","url":null,"abstract":"Echocardiography is a noninvasive diagnostic approach that provides information on hemodynamics and cardiac function. It is a familiar cardiovascular diagnostic test alongside the chest X-ray and electrocardiography. Medical knowledge is enhanced by Artificial Intelligence (AI) approaches such as deep learning and machine learning, because the growing complexity and volume of data unlock clinically significant information. Similarly, the use of emerging information and communication technologies is becoming important for delivering persistent healthcare services, through which chronically ill and elderly patients receive medical care at home, improving quality of life and avoiding hospitalizations. The main intention of this paper is to design and develop a novel heart disease diagnosis using speckle-noise reduction and deep learning-based feature learning and classification. The datasets gathered from the hospital comprise both images and video frames. Since echocardiogram images suffer from speckle noise, the initial process is speckle-noise reduction. Then, pattern extraction is performed by combining the Local Binary Pattern (LBP) and the Weber Local Descriptor (WLD), referred to as hybrid pattern extraction. Deep feature learning is conducted by an optimized Convolutional Neural Network (CNN), in which features are extracted from the max-pooling layer, and the fully connected layer is replaced by an optimized Recurrent Neural Network (RNN) for handling the diagnosis of heart disease; the proposed model is thus termed CRNN. The novel Adaptive Electric Fish Optimization (A-EFO) is used to optimize the feature learning and classification. In the final step, the best accuracy is achieved with the introduced model, and a comparative analysis is carried out against the traditional models. From the experimental analysis, the FDR of A-EFO-CRNN at a 75% learning percentage is 21.05%, 15%, 48.89%, and 71.95% better than that of CRNN, CNN, RNN, and NN, respectively. Thus, the performance of A-EFO-CRNN is superior to that of the existing heuristic-based algorithms and classifiers on the image dataset.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127250213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Ensemble Model for Spam Classification in Twitter via Sentiment Extraction: Bio-Inspiration-Based Classification Model","authors":"B. Ainapure, M. Boopathi, Dr. Chandra Sekhar Kolli, C. Jackulin","doi":"10.1142/s0219467823500341","DOIUrl":"https://doi.org/10.1142/s0219467823500341","url":null,"abstract":"Twitter spam has become a significant predicament these days. Current works focus on exploiting machine learning models to detect spam in Twitter by determining the statistical features of tweets. Even though these models achieve good results, it is hard to sustain the performance attained by the supervised approaches. This paper introduces a deep learning-assisted spam classification model for Twitter. The classification is based on the sentiments and topics modeled in the tweets. The initial step is data collection. Subsequently, the collected data are preprocessed with “stop word removal, stemming and tokenization”. The next step is feature extraction, wherein the POS tagging, headwords, rule-based lexicon, word length, and weighted holoentropy features are extracted. Then, the proposed sentiment score extraction is carried out to analyze how these features vary between nonspam and spam information. At last, the diffusions of data on Twitter are classified into spam and nonspam. For this, an Optimized Deep Ensemble technique is introduced that encloses a “neural network (NN), support vector machine (SVM), random forest (RF) and deep neural network (DNN)”. In particular, the weights of the DNN are optimally tuned by an arithmetic crossover-based cat swarm optimization (AC-CS) model. Finally, the supremacy of the developed approach is examined via evaluation against extant techniques. Accordingly, the proposed AC-CS [Formula: see text] ensemble model attained a better accuracy value at a learning percentage of 80, which is 18.1%, 14.89%, 11.7%, 12.77%, 10.64%, 6.38%, 6.38%, and 6.38% higher than the SVM, DNN, RNN, DBN, MFO [Formula: see text] ensemble, WOA [Formula: see text] ensemble, EHO [Formula: see text] ensemble and CSO [Formula: see text] ensemble models, respectively.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129527375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Certainty-Based Deep Fused Neural Network Using Transfer Learning and Adaptive Movement Estimation for the Diagnosis of Cardiomegaly","authors":"N. Sasikaladevi, A. Revathi","doi":"10.1142/s021946782350033x","DOIUrl":"https://doi.org/10.1142/s021946782350033x","url":null,"abstract":"Cardiomegaly is a radiographic abnormality with significant prognostic importance in the population. It can be identified from chest X-ray images. Early detection of cardiomegaly reduces the risk of congestive heart failure and systolic dysfunction. Due to the shortage of radiologists, there is demand for an artificial intelligence tool for the early detection of cardiomegaly. The cardiomegaly X-ray dataset is extracted from the CheXpert database. In total, 46,195 X-ray records with different views, such as AP, PA, and lateral views, are used to train and validate the proposed model. An artificial intelligence app named CardioXpert is constructed based on a deep neural network. The transfer learning approach is adopted to improve the prediction metrics, and an optimized training method called adaptive movement estimation is used. Three transfer learning-based deep neural networks named APNET, PANET, and LateralNET are constructed, one for each view of the X-ray images. Finally, certainty-based fusion is performed to enrich the prediction accuracy, and the fused model is named CardioXpert. As the proposed method is based on the largest cardiomegaly dataset, hold-out validation is performed to verify the prediction accuracy of the proposed model, and the model is validated on an unseen dataset. The deep neural networks APNET, PANET, and LateralNET are validated individually, and then the fused network CardioXpert is validated. The proposed CardioXpert model provides an accuracy of 93.6%, the highest reported for this dataset to date. It also yields the highest sensitivity of 94.7% and a precision of 97.7%. These prediction metrics show that the proposed model outperforms the state-of-the-art deep transfer learning methods for diagnosing the cardiomegaly thoracic disorder. The proposed deep neural network model is deployed as a web app. Cardiologists can use this prognostic app to predict cardiomegaly faster and more robustly at an early stage using low-cost chest X-ray images.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116543906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Systematic Survey on Photorealistic Computer Graphic and Photographic Image Discrimination","authors":"G. Birajdar, Mukesh D. Patil","doi":"10.1142/s0219467823500377","DOIUrl":"https://doi.org/10.1142/s0219467823500377","url":null,"abstract":"Advances in graphic rendering software and technological progress in hardware can generate or modify photorealistic computer graphic (CG) images that are difficult for human observers to identify. Computer-generated images are used as information carriers in magazines, the film and advertisement industries, medical and insurance agencies, social media, and law agencies. Forged computer-generated images created by malicious users may distort social stability and influence public opinion. Hence, the precise identification of computer graphic and photographic images (PG) is a significant and challenging task. In the last two decades, several researchers have proposed different algorithms with impressive accuracy rates, including the recent addition of deep learning methods. This comprehensive survey presents techniques dealing with CG and PG image classification using machine learning and deep learning. At the beginning, a broad classification of all the methods into five categories is discussed, in addition to a generalized framework of CG detection. Subsequently, all the significant works are surveyed and grouped into five types: image statistics methods; acquisition device properties-based techniques; color, texture, and geometry-based methods; hybrid methods; and deep learning methods. The advantages and limitations of CG detection methods are also presented. Finally, major challenges and future trends in the CG and PG image identification field are discussed.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"56 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114032251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Review on Deep Learning Classifier for Hyperspectral Imaging","authors":"Neelam Dahiya, Sartajvir Singh, Sheifali Gupta","doi":"10.1142/s0219467823500365","DOIUrl":"https://doi.org/10.1142/s0219467823500365","url":null,"abstract":"Nowadays, hyperspectral imaging (HSI) attracts the interest of many researchers in solving remote sensing problems, especially in specific domains such as agriculture, snow/ice, object detection and environmental monitoring. In the previous literature, various attempts have been made to extract, through hyperspectral imaging, critical information that is not obtainable through multispectral imaging (MSI). Classification in image processing is one of the important steps to categorize and label pixels based on specific rules. There are various supervised and unsupervised approaches that can be used for classification. Over the past decades, various classifiers have been developed and improved to meet the requirements of remote sensing researchers. However, each method has its own merits and demerits and is not applicable in all scenarios. Past literature has also concluded that deep learning classifiers are preferable to machine learning classifiers due to various advantages, such as lesser training time for model generation, the ability to handle complex data, and lower user-intervention requirements. This paper reviews various machine learning- and deep learning-based classifiers for HSI classification, along with the challenges of applying deep learning to hyperspectral imaging and their remedial solutions. This work also highlights various limitations of the classifiers, which can be resolved through the development and incorporation of well-defined techniques.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129454674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Underwater Video Enhancement Using Manta Ray Foraging Lion Optimization-Based Fusion Convolutional Neural Network","authors":"Pooja Honnutagi, Y. S. Laitha, V. D. Mytri","doi":"10.1142/s0219467823500316","DOIUrl":"https://doi.org/10.1142/s0219467823500316","url":null,"abstract":"Due to its significance for aquatic robotics and marine engineering, underwater video enhancement has gained considerable attention. Thus, a video enhancement method, namely the Manta Ray Foraging Lion Optimization-based fusion Convolutional Neural Network (MRFLO-based fusion CNN) algorithm, is developed in this research for enhancing the quality of underwater videos. MRFLO is developed by merging the Lion Optimization Algorithm (LOA) and Manta Ray Foraging Optimization (MRFO). The blur in the input video frame is detected and estimated through the variance-of-Laplacian method. The fusion CNN classifier is used for deblurring the frame by combining both the input frame and the blur matrix, and this classifier is tuned by the developed MRFLO algorithm. The pixels of the deblurred frame are enhanced using the Type II Fuzzy system and Cuckoo Search optimization algorithm filter (T2FCS filter). The developed MRFLO-based fusion CNN algorithm is evaluated, while varying the blur intensity, using the metrics Underwater Image Quality Measure (UIQM), Underwater Color Image Quality Evaluation (UCIQE), Structural Similarity Index Measure (SSIM), Mean Square Error (MSE), and Peak Signal-to-Noise Ratio (PSNR). The proposed MRFLO-based fusion CNN algorithm acquired a PSNR of 38.9118, an SSIM of 0.9593, an MSE of 3.2214, a UIQM of 3.0041 and a UCIQE of 0.7881.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134176566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Design of Occlusion-Invariant Face Recognition Using Optimal Pattern Extraction and CNN with GRU-Based Architecture","authors":"Pankaj, P. K. Bharti, B. Kumar","doi":"10.1142/s0219467823500298","DOIUrl":"https://doi.org/10.1142/s0219467823500298","url":null,"abstract":"Face detection is a computer technology used in a variety of applications to identify human faces in digital images. In many face recognition challenges, Convolutional Neural Networks (CNNs) are regarded as a problem solver. Occlusion is the most common challenge of face recognition in realistic applications. Several studies are underway to achieve face recognition free of such challenges. However, the presence of noise and occlusion in the image reduces the accuracy of face recognition. Hence, various studies have been carried out to address the challenges posed by occlusion and noise in images, and further work is needed to acquire high accuracy. To this end, a deep learning model is developed in this paper using a meta-heuristic approach. The proposed model covers four main steps: (a) data acquisition, (b) pre-processing, (c) pattern extraction and (d) classification. The benchmark datasets of face images with occlusion are gathered from a public source. Further, pre-processing of the images is performed by contrast enhancement and Gabor filtering. With these pre-processed images, pattern extraction is done by the optimal local mesh ternary pattern. Here, the hybrid Whale–Galactic Swarm Optimization (WGSO) algorithm is used for developing the optimal local mesh ternary pattern extraction. Taking the pattern-extracted image as input, the new deep learning model, namely the “CNN with Gated Recurrent Unit (GRU)” network, performs the recognition process to maximize the accuracy and enhance the face recognition model. From the results, in terms of accuracy, the proposed WGSO-[Formula: see text] model is better by 4.02%, 3.76% and 2.17% than the CNN, SVM and SRC, respectively. The experimental results are presented through a comparative analysis on a standard dataset, and they confirm the efficiency of the proposed model. However, many challenging problems related to face recognition still exist, which offer excellent opportunities to face recognition researchers in the future.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134297698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illumination Invariance Adaptive Sidewalk Detection Based on Unsupervised Feature Learning","authors":"Wang Zhiyu, Weili Ding, Mingkui Wang","doi":"10.1142/s0219467823500274","DOIUrl":"https://doi.org/10.1142/s0219467823500274","url":null,"abstract":"To solve the problem of road recognition when a robot is driving on a sidewalk, a novel sidewalk detection algorithm from the first-person perspective is proposed, which is crucial for robot navigation. The algorithm starts from the illumination-invariance graph of the sidewalk image, and sidewalk “seeds” are selected dynamically to obtain the sidewalk features for unsupervised feature learning. The final sidewalk region is extracted by multi-threshold adaptive segmentation and connectivity processing. The key innovations of the proposed algorithm are the PCA-based illumination-invariance method and the unsupervised feature learning for sidewalk detection. With the PCA-based illumination invariance, the lighting-invariance angle can be calculated dynamically to remove the impact of illumination and of different brick colors on sidewalk detection. Then the sidewalk features are selected dynamically using the parallel geometric structure of the sidewalk, and the confidence region of the sidewalk is obtained through unsupervised feature learning. The proposed method can effectively suppress the effects of shadows and differently colored bricks in the sidewalk area. The experimental results show that the F-measure of the proposed algorithm reaches 93.11% and is at least 7.7% higher than that of the existing algorithms.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114292433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and Localization of Copy-Move Forgery in Digital Images: Review and Challenges","authors":"G. Suresh, Chanamallu Srinivasa Rao","doi":"10.1142/s0219467823500250","DOIUrl":"https://doi.org/10.1142/s0219467823500250","url":null,"abstract":"Copy-move forgery in digital images has become a common problem due to the wide accessibility of image processing algorithms and open-source editing software. The human visual system cannot identify the traces of forgery in a tampered image. The proliferation of such digital images through the internet and social media is possible with a finger touch. These tampered images have been used in news reports, judicial forensics, medical records, and financial statements. In this paper, a detailed review is carried out on various copy-move forgery detection (CMFD) and localization techniques. Further, challenges in the research are identified along with possible solutions.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121499221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges and Imperatives of Deep Learning Approaches for Detection of Melanoma: A Review","authors":"E. Gayatri, S. Aarthy","doi":"10.1142/s0219467822400125","DOIUrl":"https://doi.org/10.1142/s0219467822400125","url":null,"abstract":"Recently, melanoma has become one of the deadliest forms of skin cancer, caused by ultraviolet rays. The diagnosis of melanoma is crucial: if it is not identified in the early stages, then in the advanced stages it affects the other organs of the body, too. Early identification of melanoma plays a major role in a person's chances of survival. The manual detection of tumor thickness is a very difficult task, so dermoscopy, a non-invasive method, is used to measure the thickness of the tumor. Computer-aided diagnosis is one of the greatest evolutions in the medical sector; this system helps doctors with the automated diagnosis of disease because it improves the accuracy of disease detection. In the world of digital images, several phases are required to remove artifacts and achieve the most accurate diagnosis results: acquisition of an image, pre-processing, segmentation, feature selection, extraction and, finally, the classification phase. This paper mainly focuses on various deep learning techniques, such as convolutional neural networks, recurrent neural networks, and You Only Look Once (YOLO), for the classification and prediction of melanoma, and it also covers other variants of melanoma, i.e. ocular melanoma and mucosal melanoma, because it does not matter where in the body the melanoma starts.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121198544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}