{"title":"Deep convolutional neural network-based effective model for 2D ear recognition using data augmentation","authors":"Ravishankar Mehta, K. K. Singh","doi":"10.1080/13682199.2023.2206763","DOIUrl":"https://doi.org/10.1080/13682199.2023.2206763","url":null,"abstract":"In the pandemic of COVID-19, identifying a person from their face became difficult due to wearing of mask. In regard to the given circumstances, the authors have remarkably put effort on identifying a person using 2D ear images based on deep convolutional neural network (CNNs). They investigated the challenges of limited data and varying environmental conditions in this regards. To deal with such challenges, the authors developed an augmentation-based light-weight CNN model using CPU enabled machine so that it can be ported into embedded devices. While applying data augmentation technique to enhance the quality and size of training dataset, the authors analysed and discussed the different augmentation parameters (rotation, flipping, zooming, and fill mode) that are effective for generating the large number of sample images of different variability. The model works well on constrained and unconstrained ear datasets and achieves good recognition accuracy. It also reduces the problem of overfitting. [ FROM AUTHOR] Copyright of Imaging Science Journal is the property of Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full . (Copyright applies to all s.)","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88332064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"QSLRS-CNN: Qur'anic sign language recognition system based on convolutional neural networks","authors":"Hany A. AbdElghfar, Abdelmoty M. Ahmed, A. A. Alani, Hammam M. Abdelaal, B. Bouallegue, M. Khattab, Hassan A. Youness","doi":"10.1080/13682199.2023.2202576","DOIUrl":"https://doi.org/10.1080/13682199.2023.2202576","url":null,"abstract":"","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88870990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid optimization enabled deep learning model for Parkinson's disease classification","authors":"M. Dharani, R. Thamilselvan","doi":"10.1080/13682199.2023.2200060","DOIUrl":"https://doi.org/10.1080/13682199.2023.2200060","url":null,"abstract":"","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86441838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resnet-Unet-FSOA based cranial nerve segmentation and medial axis extraction using MRI images","authors":"A. Vivekraj, S. Sumathi","doi":"10.1080/13682199.2023.2195097","DOIUrl":"https://doi.org/10.1080/13682199.2023.2195097","url":null,"abstract":"ABSTRACT This paper proposes a Resnet-UNet-Fractional Snake Optimization Algorithm (Res-UNet-FSOA) for cranial nerve segmentation. Firstly, MRI images are considered as input, and thereafter preprocessing is conducted utilizing median filtering. In the module of pre-processing, the image enhancement is carried out based upon improved multiscale vesselness that is in identifying local tubular portions of an image. After that, cranial nerve segmentation is done employing Res-UNet, which is an amalgamation of Resnet and UNet. The network is then trained by a devised optimization approach namely, FSOA. The FSOA is proposed by incorporating Fractional Calculus (FC) and Snake Optimizer (SO). Then, start point and end point extraction is executed utilizing deep seeded region growing (DSRG). At last, medial axis extraction is performed using tensor voting and non-maximum suppression (TV-NMS) method. Furthermore, the proposed approach obtained segmentation accuracy of 0.930, Jaccard coefficient of 0.947, and dice coefficient of 0.950.","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73264837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An intelligent unsupervised anomaly detection in videos using inception capsule auto encoder","authors":"Harshad S. Modi, D. Parikh","doi":"10.1080/13682199.2023.2202577","DOIUrl":"https://doi.org/10.1080/13682199.2023.2202577","url":null,"abstract":"","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91030407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Colour-weighted rank transform and improved dynamic programming for fast and accurate stereo matching","authors":"Mohamed Hallek, Randa Khemiri, Ali Algarwi, Abdellatif Mtibaa, Mohamed Atri","doi":"10.1080/13682199.2023.2202096","DOIUrl":"https://doi.org/10.1080/13682199.2023.2202096","url":null,"abstract":"Real-time stereo matching with high accuracy is a dynamic research topic; it is attractive in diverse computer vision applications. This paper presents a stereo-matching algorithm that produces high-quality disparity map while maintaining real-time performance. The proposed stereo-matching method is based on three per-pixel difference measurements with adjustment elements. The absolute differences and the gradient matching are combined with a colour-weighted extension of complete rank transform to reduce the effect of radiometric distortion. The disparity calculation is realized using improved dynamic programming that optimizes along and across all scanlines. It solves the inter-scanline inconsistency problem and increases the matching accuracy. The proposed algorithm is implemented on parallel high-performance graphic hardware using the Compute Unified Device Architecture to reach over 240 million disparity evaluations per second. The processing speed of our algorithm reaches 98 frames per second on 240 × 320-pixel images and 32 disparity levels. Our method ranks fourth in terms of accuracy and runtime for quarter-resolution images in the Middlebury stereo benchmark.","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134955934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dry gelatin ‘Photo-mechanical’ plates – their significance in the evolution of scientific & technical photography","authors":"Alan Hodgson","doi":"10.1080/13682199.2023.2195701","DOIUrl":"https://doi.org/10.1080/13682199.2023.2195701","url":null,"abstract":"","PeriodicalId":22456,"journal":{"name":"The Imaging Science Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85503762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}