{"title":"A Chaotic Encryption System Based on DNA Coding Using a Deep Neural Network","authors":"K. Sudha, V. C. Castro, G. Muthulakshmii, T. I. Parithi, S. Raja","doi":"10.1142/s0219467823500201","DOIUrl":"https://doi.org/10.1142/s0219467823500201","url":null,"abstract":"Deep learning, which is critical to computer vision applications, demands a massive volume of training data for strong performance. However, encrypting the sensitive information in a photograph remains difficult despite rapid technological advancements. The Advanced Encryption Standard (AES) is the bedrock of classical encryption technologies, while the Data Encryption Standard (DES) has low sensitivity and weak anti-hacking capabilities. In the proposed chaotic encryption system, a chaotic logistic map is employed to generate a key double logistic sequence, and deoxyribonucleic acid (DNA) matrices are created by DNA coding. The XOR operation is carried out between the DNA sequence matrix and the key matrix. Finally, the DNA matrix is decoded to obtain an encrypted image. Given that encrypted images are susceptible to attacks, a rapid and efficient Convolutional Neural Network (CNN) denoiser is used that enhances the robustness of the algorithm by maximizing the resolution of rebuilt images. The use of a key mixing percentage factor gives the proposed system a vast key space and great key sensitivity. Its implementation is examined using statistical techniques such as histogram analysis, information entropy, key space analysis and key sensitivity. Experiments have shown that the suggested system is secure and robust to statistical and noise attacks.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120981013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T2FRF Filter: An Effective Algorithm for the Restoration of Fingerprint Images","authors":"Joycy K. Antony, K. Kanagalakshmi","doi":"10.1142/s0219467823500043","DOIUrl":"https://doi.org/10.1142/s0219467823500043","url":null,"abstract":"Images captured in dim light are rarely satisfactory, and increasing the International Organization for Standardization (ISO) setting for a short exposure duration makes them noisy. Image restoration methods have a wide range of applications in medical imaging, computer vision, remote sensing, and graphic design. Although the use of flash improves the lighting, it changes the image tone and introduces unnecessary highlights and shadows. These drawbacks are overcome using image restoration methods that recover a high-quality image from the degraded observation. The main challenge in image restoration is recovering a degraded image contaminated with noise. In this research, an effective algorithm, named the T2FRF filter, is developed for the restoration of fingerprint images. The noisy pixels are identified in the input fingerprint image using a Deep Convolutional Neural Network (Deep CNN), which is trained using the neighboring pixels. The Rider Optimization Algorithm (ROA) is used to remove the noisy pixels from the image, and pixel enhancement is performed using a type II fuzzy system. The developed T2FRF filter is evaluated using metrics such as the correlation coefficient and Peak Signal to Noise Ratio (PSNR). When compared with existing image restoration methods, the developed method obtained a maximum correlation coefficient of 0.7504 and a maximum PSNR of 28.2467 dB.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"2344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127475277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Biometric Person Authentication Using Face, Ear and Periocular Region Based on Convolution Neural Networks","authors":"M. Lohith, Yoga Suhas Kuruba Manjunath, M. N. Eshwarappa","doi":"10.1142/s0219467823500195","DOIUrl":"https://doi.org/10.1142/s0219467823500195","url":null,"abstract":"Biometrics is an active area of research because of the increasing need for accurate person identification in applications ranging from entertainment to security. Unimodal and multimodal are the well-known biometric methods. Unimodal biometrics uses one biometric modality of a person for identification; its performance is degraded by limitations such as intra-class variations and nonuniversality. Multimodal biometrics identifies a person using more than one biometric modality and has gained more interest due to its resistance to spoof attacks and higher recognition rates. Conventional feature extraction methods have difficulty engineering features that are robust to variations such as illumination, pose and age. Feature extraction using a convolution neural network (CNN) can overcome these difficulties because a large dataset with robust variations can be used for training, from which the CNN can learn these variations. In this paper, we propose multimodal biometrics with feature-level horizontal fusion using face, ear and periocular region modalities, apply a deep learning CNN for feature representation, and also propose a face, ear and periocular region dataset that is robust to intra-class variations. The system is evaluated using the proposed database. Accuracy, Precision, Recall and [Formula: see text] score are calculated to evaluate the performance of the system, which shows remarkable improvement over existing biometric systems.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132023253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HHO-Based Vector Quantization Technique for Biomedical Image Compression in Cloud Computing","authors":"T. S. Kumar, S. Jothilakshmi, B. C. James, M. Prakash, N. Arulkumar, C. Rekha","doi":"10.1142/s0219467822400083","DOIUrl":"https://doi.org/10.1142/s0219467822400083","url":null,"abstract":"In the present digital era, with the widespread exploitation of medical technologies and the massive generation of medical data using different imaging modalities, adequate storage, management, and transmission of biomedical images necessitate image compression techniques. Vector quantization (VQ) is an effective image compression approach, and the most widely employed VQ technique is Linde–Buzo–Gray (LBG), which generates locally optimal codebooks for image compression. Codebook construction is treated as an optimization problem solved with metaheuristic optimization techniques. In this view, this paper designs an effective biomedical image compression technique for the cloud computing (CC) environment using a Harris Hawks Optimization (HHO)-based LBG technique. The HHO-LBG algorithm achieves a smooth transition between exploration and exploitation. To investigate the performance of the HHO-LBG technique, an extensive set of simulations was carried out on benchmark biomedical images. The proposed HHO-LBG technique accomplished promising results in terms of compression performance and reconstructed image quality.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125156844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Ensemble Stacking Classification of Genetic Variations Using Machine Learning Algorithms","authors":"Y. Jahnavi, Poongothai Elango, S. Raja, P. Kumar","doi":"10.1142/s0219467823500158","DOIUrl":"https://doi.org/10.1142/s0219467823500158","url":null,"abstract":"Genetics is the clinical study of congenital mutation, where the principal advantage of analyzing human genetic mutations is the exploration, analysis, interpretation and description of the inherited effects of several diseases such as cancer, diabetes and heart disease. Cancer is the most troublesome of these afflictions, as the proportion of cancer sufferers is growing massively. Distinguishing the mutations that contribute to tumor growth from neutral mutations is difficult, as the majority of cancer tumors harbor genetic mutations. Genetic mutations are organized and categorized to sort the cancer by way of medical observations and clinical studies. At present, genetic mutations are annotated either manually or using existing primary algorithms. Evaluation and classification of each individual genetic mutation has been predicated on evidence from documented content in the medical literature. Consequently, classifying genetic mutations based on clinical evidence remains a challenging task. Various techniques are used for feature extraction: one-hot encoding is used to derive features from genes and their variations, and TF-IDF is used to extract features from the clinical text data. To increase classification accuracy, machine learning algorithms such as support vector machine, logistic regression and Naive Bayes are experimented with, and a stacking model classifier has been developed to further increase the accuracy. The proposed stacking model classifier obtained log losses of 0.8436 and 0.8572 for the cross-validation and test data sets, respectively. Experimentation shows that the proposed stacking model classifier outperforms the existing algorithms in terms of log loss; a lower log loss indicates a more efficient model, and here the log loss has been reduced to less than 1. The performance of these algorithms is gauged on the basis of measures such as multi-class log loss.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124106084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DCT Coefficients Weighting (DCTCW)-Based Gray Wolf Optimization (GWO) for Brightness Preserving Image Contrast Enhancement","authors":"Saorabh Kumar Mondal, Arpitam Chatterjee, B. Tudu","doi":"10.1142/s0219467823500183","DOIUrl":"https://doi.org/10.1142/s0219467823500183","url":null,"abstract":"Image contrast enhancement (CE) is a frequent image enhancement requirement in diverse applications. Histogram equalization (HE), in its conventional and subsequently improved forms, is a popular technique to enhance image contrast. The conventional as well as many later versions of HE algorithms often cause loss of original image characteristics, particularly the brightness distribution of the original image, which results in an artificial appearance and feature loss in the enhanced image. Discrete Cosine Transform (DCT) coefficient mapping is one of the recent methods to minimize such problems while enhancing image contrast. Tuning of DCT parameters plays a crucial role in avoiding the saturation of pixel values. Optimization is a possible way to address this problem and generate a contrast-enhanced image that preserves the desired original image characteristics. Biological behavior-inspired optimization techniques have shown remarkable improvement over conventional optimization techniques in different complex engineering problems. Gray wolf optimization (GWO) is a comparatively new algorithm in this domain that has shown promising potential. The objective function has been formulated using different parameters to retain original image characteristics. Objective evaluation against CEF, PCQI, FSIM, BRISQUE and NIQE with test images from three standard databases, namely SIPI, TID and CSIQ, shows that the presented method can achieve values up to 1.4, 1.4, 0.94, 19 and 4.18, respectively, for the stated metrics, which are competitive with the reported conventional and improved techniques. This paper can be considered a first-time application of GWO to DCT-based image CE.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116090111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Modal Medical Image Fusion Using 3-Stage Multiscale Decomposition and PCNN with Adaptive Arguments","authors":"Mummadi Gowthami Reddy, P. V. Reddy, P. Reddy","doi":"10.1142/s0219467822400101","DOIUrl":"https://doi.org/10.1142/s0219467822400101","url":null,"abstract":"In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool to combine multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality assessment and robustness of the fused image. To overcome these drawbacks, in this work a three-stage multiscale decomposition (TSMSD) using pulse-coupled neural networks with adaptive arguments (PCNN-AA) approach is proposed for multi-modal medical image fusion. Initially, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of both source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT) methodology. Next, the high-frequency bands obtained from NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using NSST reconstruction. Finally, a band fusion rule algorithm with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results disclose the superiority of the proposed TSMSD using PCNN-AA approach compared to state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC) and computational complexity.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126264489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integrated Double Hybrid Fusion Approach for Image Smoothing","authors":"Anchal Kumawat, S. Panda","doi":"10.1142/s0219467823500031","DOIUrl":"https://doi.org/10.1142/s0219467823500031","url":null,"abstract":"Often in practice, during image acquisition, the acquired image gets degraded by factors such as noise, motion blur, camera mis-focus and atmospheric turbulence, rendering the image unsuitable for further analysis or processing. To improve the quality of these degraded images, a double hybrid restoration filter is proposed that operates on two identical sets of input images and fuses the output images to obtain a unified filter built on the concept of image fusion. The first image set is processed by applying deconvolution using the Wiener Filter (DWF) twice and decomposing the output image using the Discrete Wavelet Transform (DWT). The second image set is processed simultaneously by applying deconvolution using the Lucy–Richardson Filter (DLR) twice, followed by the same procedure. The proposed filter gives better performance than the DWF and DLR filters for both blurry and noisy images. The proposed filter is compared with some standard deconvolution algorithms and some state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results prove the success of the proposed algorithm, and the visual and quantitative results are very impressive.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133638184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Evaluation of Convolutional Neural Network Using Synthetic Medical Data Augmentation Generated by GAN","authors":"Ramesh Adhikari, Suresh Pokharel","doi":"10.1142/s021946782350002x","DOIUrl":"https://doi.org/10.1142/s021946782350002x","url":null,"abstract":"Data augmentation is widely used in image processing and pattern recognition problems to increase the diversity of available data. It is commonly used to improve the classification accuracy of images when the available datasets are limited. Deep learning approaches have demonstrated an immense breakthrough in medical diagnostics over the last decade, but a significant amount of data is needed for the effective training of deep neural networks. The appropriate use of data augmentation techniques prevents the model from over-fitting and thus increases the generalization capability of the network when testing afterward on unseen data. However, it remains a huge challenge to obtain such large datasets for rare diseases in the medical field. This study presents a synthetic data augmentation technique using Generative Adversarial Networks to evaluate the generalization capability of neural networks using existing data more effectively. In this research, a convolutional neural network (CNN) model is used to classify X-ray images of the human chest as normal or pneumonia; synthetic X-ray images are then generated from the available dataset using a deep convolutional generative adversarial network (DCGAN) model. Finally, the CNN model is trained again with the original dataset and the augmented data generated using the DCGAN model. The classification performance of the CNN model improved by 3.2% when the augmented data were used along with the originally available dataset.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126395267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Descriptive Survey on Face Emotion Recognition Techniques","authors":"B. Devi, M. Preetha","doi":"10.1142/s0219467823500080","DOIUrl":"https://doi.org/10.1142/s0219467823500080","url":null,"abstract":"Recognition of natural emotion from human faces has applications in Human–Computer Interaction, image and video retrieval, automated tutoring systems, smart environments and driver warning systems. It is also a significant channel of nonverbal communication among individuals. The task of Face Emotion Recognition (FER) is predominantly complex for two reasons: the nonexistence of a large database of training images, and the difficulty of classifying emotions from a static input image. In addition, robust unbiased FER in real time remains the foremost challenge for various supervised learning-based techniques. This survey analyzes diverse techniques regarding FER systems. It reviews a broad set of research papers and performs a significant analysis. Initially, the analysis depicts the various techniques contributed in different research papers. In addition, this paper offers a comprehensive study of the chronological review and performance achievements of each contribution. The analytical review also considers the measures for which the maximum performance was achieved in several contributions. Finally, the survey discusses various research issues and gaps that can help researchers promote improved future work on FER models.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129857320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}