Title: Transfer learning based hybrid feature learning framework for enhanced skin cancer diagnosis using deep feature integration
Authors: Maridu Bhargavi, Sivadi Balakrishna
DOI: 10.1016/j.jestch.2025.102135
Journal: Engineering Science and Technology, an International Journal (JESTECH), Vol. 69, Article 102135
Published: 2025-07-09
Citations: 0
Abstract
Skin cancer remains a major health problem worldwide, and frequent misdiagnosis by dermatologists results in delayed treatment and poor patient outcomes. To improve survival chances, skin cancer must be identified accurately and promptly, yet current diagnostic methods suffer from weak feature representation and limited model generalization. The primary challenges in automated skin cancer classification include variation in lesion appearance, occlusions, and class imbalance, all of which affect model performance and reliability. To address these issues, this research proposes DRMv2Net, a feature-fusion deep learning model that integrates multiple pre-trained convolutional neural networks to enhance skin cancer diagnosis. The method follows a systematic pipeline of pre-processing, feature extraction, fusion, and classification. Pre-processing techniques, such as adaptive thresholding for hair-artifact removal, image inpainting to remove occlusions, and data augmentation for class balancing, were applied to improve input quality. Diverse features such as edges, texture, and color variation were extracted with DenseNet201, ResNet101, and MobileNetV2 and concatenated into a rich feature representation, followed by fully connected layers for classification. The DRMv2Net model was evaluated extensively on two benchmark datasets, ISIC 2357 and PAD-UFES 20. A comparison with standalone CNN models such as DenseNet201, ResNet101, MobileNetV2, VGG19, and Xception shows that feature fusion achieved higher accuracies of 96.11 % on ISIC 2357 and 96.17 % on PAD-UFES 20 than the existing standalone models. These results demonstrate the strength of feature fusion and pre-processing in improving skin cancer identification accuracy and offer a robust, scalable solution for automatic medical image classification.
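The concatenation-based fusion described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: fixed random projections stand in for the three frozen pre-trained backbones (the output widths 1920, 2048, and 1280 match the global-pooled feature sizes of DenseNet201, ResNet101, and MobileNetV2, but tiny 32×32 dummy images replace real pre-processed lesion photos, and the single-layer softmax head is a hypothetical stand-in for the paper's fully connected classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_extractor(out_dim, in_dim=32 * 32 * 3):
    """Return a fixed random projection acting as a frozen feature extractor.

    A stand-in for a pre-trained CNN backbone: real backbones would take
    224x224 inputs and produce a global-pooled feature vector.
    """
    w = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    # ReLU on the projection, loosely mimicking a CNN's non-negative features.
    return lambda x: np.maximum(x.reshape(x.shape[0], -1) @ w, 0.0)

densenet201 = make_extractor(1920)   # DenseNet201 pooled feature width
resnet101 = make_extractor(2048)     # ResNet101 pooled feature width
mobilenetv2 = make_extractor(1280)   # MobileNetV2 pooled feature width

def fuse_features(batch):
    """Run all three extractors and concatenate: 1920 + 2048 + 1280 = 5248 dims."""
    return np.concatenate(
        [densenet201(batch), resnet101(batch), mobilenetv2(batch)], axis=1
    )

# Classification head on the fused vector (one linear layer + softmax for brevity;
# the paper uses fully connected layers).
n_classes = 7  # e.g. seven lesion categories; illustrative choice
w_head = rng.standard_normal((5248, n_classes)) * 0.01

def classify(batch):
    logits = fuse_features(batch) @ w_head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

images = rng.random((4, 32, 32, 3)).astype(np.float32)  # dummy pre-processed batch
probs = classify(images)
print(probs.shape)  # (4, 7)
```

The design point this illustrates is that fusion here is simple feature-level concatenation: each backbone contributes its own view of the lesion (edges, texture, color), and the classifier learns over the joint 5248-dimensional representation rather than any single backbone's features.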
Journal description:
Engineering Science and Technology, an International Journal (JESTECH) (formerly Technology) is a peer-reviewed quarterly engineering journal that publishes high-quality theoretical and experimental papers of lasting interest, not previously published elsewhere, in the fields of engineering and applied science, with the aim of promoting the theory and practice of technology and engineering. In addition to peer-reviewed original research papers, the Editorial Board welcomes original research reports, state-of-the-art reviews, and communications in the broadly defined field of engineering science and technology.
The scope of JESTECH includes a wide spectrum of subjects including:
-Electrical/Electronics and Computer Engineering (Biomedical Engineering and Instrumentation; Coding, Cryptography, and Information Protection; Communications, Networks, Mobile Computing and Distributed Systems; Compilers and Operating Systems; Computer Architecture, Parallel Processing, and Dependability; Computer Vision and Robotics; Control Theory; Electromagnetic Waves, Microwave Techniques and Antennas; Embedded Systems; Integrated Circuits, VLSI Design, Testing, and CAD; Microelectromechanical Systems; Microelectronics, and Electronic Devices and Circuits; Power, Energy and Energy Conversion Systems; Signal, Image, and Speech Processing)
-Mechanical and Civil Engineering (Automotive Technologies; Biomechanics; Construction Materials; Design and Manufacturing; Dynamics and Control; Energy Generation, Utilization, Conversion, and Storage; Fluid Mechanics and Hydraulics; Heat and Mass Transfer; Micro-Nano Sciences; Renewable and Sustainable Energy Technologies; Robotics and Mechatronics; Solid Mechanics and Structure; Thermal Sciences)
-Metallurgical and Materials Engineering (Advanced Materials Science; Biomaterials; Ceramic and Inorganic Materials; Electronic-Magnetic Materials; Energy and Environment; Materials Characterization; Metallurgy; Polymers and Nanocomposites)